Techdirt
Link | https://www.techdirt.com/ |
Feed | https://www.techdirt.com/techdirt_rss.xml |
Updated | 2025-08-21 21:31 |
by Glyn Moody on (#4AC9K)
As Techdirt has reported over the years, the move to open access, whereby anyone can read academic papers for free, is proving a long, hard journey. However, the victories are starting to build up, and here's another one that could have important wider ramifications for open access, especially in the US:
by Mike Masnick on (#4AC3T)
Scott Burroughs, one half of the named partners at the law firm Doniger Burroughs, seems to want to build a reputation as one of the go-to voices for pushing the most absurd copyright theories out there. You may recall Doniger Burroughs as the law firm that, representing Playboy, sued BoingBoing for linking to images, making an argument so absurd that a court completely tossed it in about three months (which is crazy fast).

Of course, as we noted, Doniger Burroughs, after years of copyright trolling over clothing designs, appeared to be branching out into more traditional copyright trolling. As part of that, Burroughs has been publishing complete and utter nonsense in a regular column over at Above The Law.

Recently, he decided to launch a bizarre attack on "fair use," in which he conveniently leaves out a few important facts in order to suggest that fair use has gone too far and needs to be pared back. He entitled the series "The Tyranny of Fair Use," with the first part purporting to explain "How A Once-Humble Copyright Doctrine Tormented A Generation Of Litigants," and the second part unfairly tarnishing the legacy of Judge Pierre Leval, whose seminal paper, Toward a Fair Use Standard, has certainly carried tremendous weight in how the courts view fair use. Of course, the reason for that is that it's thoughtful and well argued, but we'll get to that.

The "premise" (if you can call it that) of the first piece is that the concept of fair use today has strayed a great distance from where it began, going all the way back to the Statute of Anne. I'd quote his nonsense, but I get the feeling Burroughs might not think that's fair use. To summarize, though: Burroughs claims that from nearly the beginning of time until just recently, "fair use" was very, very limited and required a very high bar to meet.

The second piece then lays the blame for the more modern, more expansive view of fair use on Leval's famous paper, which, among other things, promotes the concept of "transformative works" as a key factor in determining fair use. Burroughs claims that Leval's take on things "contravene[s] over 150 years of jurisprudence."

Of course -- perhaps because of his own fear of violating the copyright in Leval's piece -- Burroughs barely quotes from it at all. Perhaps that's because it debunks nearly everything Burroughs says. Whereas Burroughs insists, repeatedly, that the fair use standards were well settled and clearly applied before Leval came on the scene, Leval's own paper points out that this is not even close to true:
by Karl Bode on (#4ABVN)
We've noted for years that there's a certain segment of the media and entertainment industry that despises Netflix. Some of this is based on a disdain for Netflix coming to town and throwing oodles of cash around, but a larger chunk is driven by those who simply don't like change but can't admit as much. A good example of that latter motivator has been the Cannes film festival, which recently banned Netflix from participating in the awards.

When asked to explain why, festival head Thierry Fremaux couldn't really provide a solid answer, but did imply that what Netflix does can't be considered good because it doesn't adhere to traditional and often counterproductive business tactics (like antiquated release windows):
by Tim Cushing on (#4ABPW)
The DOJ is still moving ahead with its plan to attack free speech protections. More than eight years in the making, the attempted prosecution of Julian Assange for publishing leaked documents forges ahead slowly, threatening every journalist in its path.

Wikileaks isn't the only entity to publish leaked documents or shield its sources. Multiple US press entities have done the same thing over the years. It seems the DOJ feels it's OK to go after Assange and Wikileaks because it's not a US newspaper. But once you set foot on a slope this slippery, it's pretty tough to regain your footing -- especially when the Executive Branch has housed people hellbent on eliminating leakers and whistleblowers for most of the last 20 years.

It appears the government wants Chelsea Manning to testify about her relationship with Wikileaks and Julian Assange. The demand Manning received may be deliberately vague, but it's pretty easy to connect the dots, as Charlie Savage does for the New York Times.
by Daily Deal on (#4ABPX)
The Ultimate Cisco Certification Super Bundle will help you prepare for the certifications necessary to work with Cisco networking systems. The 9 courses cover interconnecting Cisco networking devices, LAN switching technologies, IPv4 and IPv6 routing technologies, WAN technologies, infrastructure services and maintenance, network security, and much more. Each course is designed to help you prepare to take various Cisco certification exams. This bundle is on sale for $49.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#4ABHY)
On Friday, we wrote about the cartoonishly evil decision by producer Scott Rudin -- who is producing a big Broadway reboot of To Kill A Mockingbird, written by Aaron Sorkin -- to shut down local community theater versions of the earlier stage adaptation of the story, written by Christopher Sergel. Apparently, the contract with the Harper Lee estate for a stage adaptation of her book included an odd clause saying that if there was a Broadway showing of Mockingbird, there couldn't be any other stagings near certain cities. Rudin then had his lawyers threaten a whole bunch of small community theaters with cease-and-desist notices, claiming they might be on the hook for $150,000 in damages -- all for small community theater operations that had paid their $100 license fee for the rights to perform the old Sergel version of the play.

As we noted in that post, rather than admit this was a case of lawyers getting out of hand, Rudin doubled down, bizarrely making it sound like he had to block the productions:
by Karl Bode on (#4AB3Q)
We've noted time and time again how the US broadband industry's biggest problem is a lack of healthy competition. In countless markets, consumers have the choice of either a terrible phone company or a cable giant. The nation's phone companies have spent the last decade refusing to upgrade (or in some cases even repair) their aging DSL lines, because they don't see residential broadband as worth their while. That in turn is giving giants like Comcast and Spectrum an ever greater monopoly in many markets, reducing the already muted incentive to compete on price or shore up historically terrible customer service.

It's a weird problem that's widely ignored by both parties, and it just keeps getting worse. This week, US telco Windstream filed for bankruptcy protection, partially thanks to a dispute with one of the company's creditors that resulted in a $310 million judgment Windstream couldn't swallow. More specifically, hedge fund Aurelius Capital Management had argued that a two-year-old spinoff of the company's fiber-optic cable network violated the covenants on one of its bonds, which prohibited "sale-leaseback transactions." The court agreed.

Windstream, for its part, issued a statement insisting that none of this was the company's fault, and that the bankruptcy protection wouldn't impact customers:
by Mike Masnick on (#4AAS5)
The Australian concept of free speech still boggles the mind -- as it appears they're not very big on supporting it. Yesterday we had our story about how journalists were finally able to report on the conviction of Cardinal George Pell, the former Vatican CFO (often described as the third most powerful person in the Vatican), over some fairly horrific child sexual abuse claims. The conviction had happened back in December, and we were among those who wrote about it at the time, focusing on the ridiculousness of the Australian court's "suppression order," which barred any of the reporters covering the trial from writing about either the conviction or the existence of the suppression order itself. The ostensible reason was that a second trial was still necessary for Pell. However, as I've noted in earlier posts, the US handles this in a much better, less speech-suppressing manner: by (1) asking potential jurors about their familiarity with the case, and (2) forbidding just the jury pool from further researching it. It may not be perfect, but the system works pretty well, and avoids a massive speech-suppressing blanket order from a court -- an order that would appear to violate any concept of a "free press" in Australia.

The situation with the Pell suppression order was even worse, actually, because it impacted many news organizations outside of Australia, out of fear that breaking the order might lead to consequences for local staff in Australia (or even possibly abroad). Some publications in Australia certainly found ways to express their displeasure about the suppression order -- the Herald Sun's front page being a notable example. It impacted other publications as well: many international news organizations refused to cover the situation at all, and the NY Times took the surprising step of refusing to publish any stories about the case online. It did run stories in its print edition -- but not in the Australian print edition.

This is somewhat frightening if you support freedom of the press. But now things are even more frightening. Apparently, Australia has started sending "notices" out to journalists who in some way covered either the case or the gag order before it was lifted:
by Leigh Beadon on (#4A9ND)
This week, our first place winner on the insightful side is That One Guy, who suspected that a particular detail about the theatre that cancelled its production of To Kill A Mockingbird under copyright pressure was hardly arbitrary:
by Leigh Beadon on (#4A83R)
Last week, we took a closer look at the winner of Best Digital Game in our public domain game jam, Gaming Like It's 1923. Today, we continue our winner spotlight series with the game that won Best Remix for its combination of material from multiple sources: Will You Do The Fandango? by Lari Assmuth.

Fandango is a tabletop roleplaying game with an overall structure that will be familiar to anyone who's played Dungeons & Dragons or its ilk — but where D&D builds worlds by drawing on material from across the fantasy genre, Fandango uses very different source material: the world of Commedia dell'Arte, starting with the 1923 movie Scaramouche that entered the public domain this year. Instead of grand heroism and the battle between good and evil, Fandango aims to create a story of "swashbuckling romance" and big, bombastic melodrama.

In standard fashion, playing requires a Gamemaster and a group of players, each of whom creates a character with an array of stats (Action, Passion and Wit). The setting is revolutionary-era France, the characters are members of a traveling troupe of Commedia dell'Arte players, and the GM leads them on an adventure through towns and cities where civil unrest and class struggle are bubbling up. In each location they will meet notable characters and get into social conflicts — instead of combat mechanics, the game uses rules and dice for witty repartee and dueling insults. At the end of their time in each location, the players put on a performance, and then deal with the fallout.

And one of the most intriguing features? Every character has both a "Personage" (the person they are) and a "Mask" (the role they play in the performances) — and while the personage is fixed, masks can be traded throughout the game. Also, they are literal masks.

You can download the rules (and printable masks) for the game from its page on Itch, and all you need to get started is a quick read, a couple of dice, a pair of scissors, and a few enthusiastic friends. If you get a game going, we'd love to hear how it plays out, and I suspect the creator would too!

Next week, we'll be back with another spotlight on one of our winners — and don't forget to check out the full list of entries to spot some of the hidden gems that didn't quite make the final cut. Happy gaming!
by Joe Mullin and Daniel Nazer on (#4A716)
What if we allowed some people to patent the law, and then demand money from the rest of us just for following it?

As anyone with a basic understanding of democratic principles can see, that is a terrible idea. In a democracy, elected representatives write laws that apply to everyone, ideally based on the public interest. We shouldn't let private parties "own" legal principles or use technical jargon to re-cast those principles as "inventions."

But that's exactly what the U.S. Patent Office has allowed two inventors, Nicholas Hall and Steven Eakin, to do. Last September, the government proclaimed that Hall and Eakin are the inventors of "Methods and Systems for User Opt-In to Data Privacy Agreements," U.S. Patent No. 10,075,451. The owner of this patent, a company called "Veripath," is already filing lawsuits against companies that make privacy compliance software. With Congress and many states actively engaged in debates over consumer privacy laws, Veripath might soon be using this patent to extract licensing cash from U.S. companies as well.

Privacy-For-Functionality Isn't an "Invention," It's a Policy Debate

Claim 1 of the '451 patent describes a basic data privacy agreement. An API provides personal information from a software application; then the user is asked for a "required permission" for the use of that information. There's one add-on to the privacy deal: in exchange for the permission, the user gets access to "at least one enhanced function." The next several claims describe minor variations on this theme. Claim 2 specifies that the "enhanced function" won't be available to other users. Claim 3 describes the enhanced function as being fewer advertisements; Claim 4 describes offering the enhanced function in exchange for a monetary payment.

To say this "method" is well known is a major understatement. The idea of exchanging privacy for enhanced functionality or better service is so widespread that it has been codified in law. For example, last year's California Consumer Privacy Act (CCPA) specifically allows a business to offer "incentives" to a user to collect and sell their data. That includes "financial incentives," or "a different price, rate, level, or quality of goods or services." The fact that state legislators were familiar enough with these concepts to write them into law is a sign of just how ubiquitous and uninventive they are. This is not technology; this is policy.

(An important aside: EFF strongly opposes pay-for-privacy, and is working to remove it from the CCPA. Pay-for-privacy undermines the law's non-discrimination provisions and, more broadly, creates a world of privacy "haves" and "have-nots." We've long sought this change to the CCPA.)

Follow the Law, Infringe This Patent

Veripath has already sued two companies that help website owners comply with Europe's General Data Protection Regulation, or GDPR, saying they infringe its patent. Netherlands-based Faktor was sued [PDF] on Feb. 15, and France-based Didomi was sued [PDF] on Feb. 22.

Some background: VenPath, Inc., a company with a New York address that appears to be a virtual office, assigned the rights in the '451 patent to Veripath just days before the patent issued in September last year. As it happens, the FTC began enforcement proceedings against VenPath last September. The FTC's complaint [PDF] alleged that VenPath's website represented that "VenPath participates in and has certified its compliance with the EU-U.S. Privacy Shield Framework." The FTC alleged a count of "privacy misrepresentation," claiming that VenPath "did not complete the steps necessary to renew its participation in the EU-U.S. Privacy Shield framework after that certification expired in October 2017." The FTC issued a Decision and Order [PDF] requiring VenPath to remove the misrepresentations.

An exhibit [PDF] attached to the complaint shows that one of the named inventors on the patent, Nick Hall, contacted Faktor to ask what its prices were. Hall identified himself as the CEO of VenPath. Once Faktor responded, Veripath sued Faktor in federal court in New York.

In its lawsuits, Veripath claims that basic warnings about cookies on websites, a now-common method of complying with the GDPR, violate its patent. The lawsuit against Faktor notes that Faktor's own website "might not work properly" unless a user consents to having her browser accept cookies. Veripath and its legal team argue that this simple deal -- accepting cookie use in order to visit websites -- is enough to infringe the patent. They also claim that Faktor's Privacy Manager software infringes at least Claim 1 of the patent, and facilitates infringement by others.

The '451 patent should never have been granted. In our view, its claims are clearly ineligible for patent protection under Alice v. CLS Bank. In Alice, the Supreme Court held that an abstract idea (like privacy-for-functionality) doesn't become eligible for a patent simply because it is implemented using generic technology. Courts have struck down similar claims, like a patent on the idea of conditioning access to content on viewing ads.

Even when a patent is invalid, defendants face pressure to settle. Patent litigation is expensive, and it can cost tens or hundreds of thousands of dollars just to get through the early stages. To really protect innovation, we have to ensure that patents like the '451 patent are never issued in the first place. The fact that this patent was granted shows the Patent Office is failing to apply the law.

We are currently urging the public to tell the Patent Office to stop issuing abstract software patents. You can use our Action Center to submit comments.

Republished from the EFF's Stupid Patent of the Month series.
by Timothy Geigner on (#4A6S0)
With rates of copyright infringement fluctuating year by year and country by country, the debate continues over how best to keep those rates trending downward. One side of this argument urges a never-ending ratcheting up of enforcement efforts, with penalties and repercussions for infringement becoming more and more severe. The other side suggests that when content is made available in a way that is both convenient and reasonably priced, piracy rates will drop. A decent number of studies show the latter is the actual answer, including a study done last summer showing that innovative business models fare far better than enforcement efforts.

Yet it seems it's going to take a compounding series of these studies to get the point across, so it's worth highlighting yet another study, this one out of New Zealand, concluding that piracy rates are a function of pricing and ease of access to content.
by Tim Cushing on (#4A6JZ)
The Seventh Circuit Appeals Court has issued a dubious ruling [PDF] on cell tower dumps -- one that appears to ignore the Supreme Court's decision declaring warrants are needed to obtain cell site location info. The criminal conduct leading to this questionable finding clearly shows robbing cellphone stores is a particularly bad idea. (h/t Orin Kerr)
by Karl Bode on (#4A6B5)
Just about a year ago, we noted how Facebook was taking some heat on the security and privacy fronts for pitching a "privacy protecting" VPN to consumers that actually violated consumer privacy. Based on the Onavo platform acquired by Facebook back in 2013, the company's "Onavo Protect – VPN Security" app informed users that the product would "keep you and your data safe when you browse and share information on the web" and that the "app helps keep your details secure when you login to websites or enter personal information such as bank accounts and credit card numbers."

It didn't take long before many began to notice those claims weren't, well, true. A wide variety of news outlets were quick to point out that Facebook was actually using the "privacy" app to track users around the internet when they wandered away from Facebook, then using that data to its own competitive advantage:
by Mike Masnick on (#4A66N)
If you wanted to invent a children's-story "evil" character who must be stopped, you couldn't do much better than the evil rich out-of-towner going around the country trying to kill off local community theater productions of a beloved play so that he can stage a massive Broadway reboot. So, step on up, Hollywood producer Scott Rudin, to the role of evil villain. Rudin is producing a new Broadway adaptation of Harper Lee's classic To Kill A Mockingbird. Of course, there is already a play based on the book, written by Christopher Sergel, that is widely performed around the country. Rudin, however, is producing a totally new version, written by famed Hollywood writer Aaron Sorkin.

Now, a normal, thinking, kind person would immediately recognize that local community theater productions can easily co-exist with a giant Broadway production backed by big Hollywood names. But that's apparently not Scott Rudin. Rudin's lawyers claim that the contract with the company that holds the rights to the earlier play -- a company called Dramatic Publishing Company -- includes a clause saying that if there is any version of Mockingbird on Broadway, there cannot be any other Mockingbird performances within 25 miles of any city that had a population of 150,000 or more in 1960. First off: what a weird contract. Second: this still seems like something any person with the slightest bit of emotional empathy would ignore in putting on the new Broadway show. But not Scott Rudin:
by Daily Deal on (#4A66P)
The Complete UI and UX Design Master Class Bundle contains 8 courses to help you learn how to design easy-to-use websites and mobile apps. You will learn the basic tools of Photoshop specific to UI design, the essential principles and concepts behind creating a simple and intuitive user experience, how to improve your designs with modular grids and baseline grids, and much more. Other courses cover typography, freelancing, app button design, and a variety of Adobe tools. The bundle is on sale for $39.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#4A5XA)
You may have heard the story recently of how the band REM got a video in a tweet taken down after Donald Trump had retweeted the video. CNBC has the details:
by Karl Bode on (#4A5EN)
We'll apparently have to keep making this point until it sinks in.

For years now, streaming video providers like HBO and Netflix have taken a relatively lax approach to password sharing. Netflix CEO Reed Hastings has gone so far as to say he "loves" password sharing, and sees it as little more than free advertising. Execs at HBO have viewed password sharing in a similar fashion, saying it doesn't hurt their business. If anything, it results in folks signing up for their own accounts after they get hooked on your product -- something you'll often see with kids who leave home, or leave college and college friends behind.

In other words, the actual streaming providers consistently say they see password sharing as a form of marketing. And most of these services have built-in limits on the number of simultaneous streams per account that can operate at any one time, deflating much of the utility for heavy users looking to piggyback on others' accounts. That keeps the phenomenon from operating at any scale that could prove truly harmful (say, 20 users sharing one Netflix account).

That doesn't stop folks from conflating password sharing with "piracy." You'll see older cable executives occasionally whine about this subject, not understanding how any of this works. You'll also see stories like this one pop up every so often, insisting these companies are losing "X" millions because they're not cracking down harder on shared passwords:
by Mike Masnick on (#4A550)
What the hell is going on in the EU these days? The EU Commission put out a Medium post that literally mocked the public as an "angry mob" for raising legitimate concerns about the EU's proposed Copyright Directive. That followed a bizarrely incoherent "Q & A" page put out by the EU Parliament's Legislative Affairs Committee (JURI), spewing pure nonsense about the Copyright Directive. In both cases, those involved were publicly mocked for this (to the point that the Commission even took down its post, though it blamed others for misunderstanding it).

But now the EU Parliament is doubling down on this absurdity. Its official Twitter feed posted this bit of pure propaganda:
Strike 3's Lawyer Sanctioned By Court, Excuses His Actions By Claiming He Can't Make Technology Work
by Timothy Geigner on (#4A4RC)
The art of copyright trolling necessarily assumes that all potential victims of the trolling effort are masters of both technology and copyright law, such that they are fully responsible for whatever happens on their internet connections and no action they take could possibly be a forgivable accident. These assumptions operate across the victim spectrum without regard to whether the victim is of advanced age or incredibly young, or whether the victim is sick or lacks the mental capacity to carry out the supposed infringement. The assumption in just about every case is that the accused is fully responsible.

Which is the standard that should then be applied to Strike 3 Holdings' lawyer, Lincoln Bandlow, who had to go to court to explain why he and his firm failed to provide status updates in 25 cases despite the court ordering them to do so, and why the court shouldn't just sanction him. Bandlow attempts to explain it all away as a simple matter of him and his firm not being able to make their technology work.
by Mike Masnick on (#4A4BR)
Back in December, we wrote about the insane attack on free speech perpetrated by the Australian court system, which barred anyone from reporting on the fact that the "third most powerful person in the Vatican," its CFO, George Pell, had been convicted of molesting choir boys in Australia in the 1990s. Only a very small number of news sites reported on this at all, out of fear of the Australian government going after them. Even the NY Times (of all sites) only published the story in its physical paper, and not online, to avoid the possibility that readers down under might see it. We even got some pushback from some people for publishing the story, who argued the suppression was necessary to make sure Pell's second trial on similar charges was "fair." Of course, we've handled these issues differently in the US for decades, in a way that seems to work just fine: the press is free to report, but jurors are restricted from researching or reading about the case. That system inconveniences the fewest people, retains a system of fairness, and does not stifle a free and open press.

Either way, on Tuesday the Australian court system finally lifted the gag order, allowing official reports to finally be written. As for why the gag order was finally lifted? Apparently that all-important second trial? It's been called off.

The Washington Post story above has many more details about the case that were kept secret, including the fairly graphic and horrifying details of what Pell did to some choir boys in the 1990s. It remains an insult to the work of the media that so many were forced to stay silent over these details. I recognize that not everywhere else has a First Amendment like the US does, and that protections for freedom of expression and freedom of the press vary from country to country, but Australia's press gag here is notable both for keeping such important details secret and for scaring the media in other nations, including the US, away from publishing their stories as well.
by Jeffrey Westling on (#4A444)
With recent focus on disinformation and "fake news," new technologies used to deceive people online have sparked concerns among the public. While in the past only an expert forger could create realistic fake media, deceptive techniques using the latest research in machine learning allow anyone with a smartphone to generate high-quality fake videos, or "deep fakes."

Like other forms of disinformation, deep fakes can be designed to incite panic, sow distrust in political institutions, or produce myriad other harmful outcomes. Because of these potential harms, lawmakers and others have begun expressing concerns about deep-fake technology.

Underlying these concerns is the superficially reasonable assumption that deep fakes represent an unprecedented development in the ecosystem of disinformation, largely because deep-fake technology can create such realistic-looking content. Yet this argument assumes that the quality of the content carries the most weight in the trust evaluation. In other words, people making this argument believe that the highly realistic content of a deep fake will induce the viewer to trust it — and share it with other people in a social network — thus hastening the spread of disinformation.

But there are several reasons to be suspicious of that assumption. In reality, deep-fake technology operates similarly to other media that people use to spread disinformation. Whether content will be believed and shared may derive not primarily from the content's quality, but from psychological factors that any type of deceptive media can exploit. Thus, contrary to the hype, deep fakes may not be the techno-boogeyman some claim them to be.

Deceiving With a Deep Fake

When presented with any piece of information — be it a photograph, a news story, a video, etc. — people do not simply take that information at face value.
Instead, individuals in today's internet ecosystem rely heavily on their network of social contacts when deciding whether to trust content online. In one study, for example, researchers found that participants were more likely to trust an article when it had been shared by people whom the individual already trusted.

This conclusion comports with an evolutionary understanding of human trust. In fact, humans likely evolved to believe information that comes from within their social networks, regardless of its content or quality.

At a basic level, one would expect such trust to be unfounded; individuals usually try to maximize their fitness (the likelihood they will survive and reproduce) at the expense of others. If an individual sees an incoming danger and fails to alert anyone else, that individual may have a better chance of surviving that specific interaction.

However, life is more complex than that. Studies suggest that in repeated interactions with the same individual, a person is more likely to place trust in the other individual because, without any trust, neither party would gain in the long term. When members of a group can rely on other members, individuals within the group gain a net benefit on average.

Of course, a single lie or selfish action could help an individual survive a given encounter. But if all members of the group acted that way, the overall fitness of the group would decrease. And because groups with more cooperation and trust among their members are more successful, these traits were more likely to survive on an aggregate level.

Humans today, therefore, tend to trust those close to them in a social network because such behavior helped the species survive in the past.
For a deep fake, then, the apparent authenticity of the video may be less of a factor in deciding whether to trust that information than whether the individual trusts the person who shared it.

Further, even the most realistic, truthful-sounding information can fail to produce trust when the individual holds beliefs that contradict the presented information. The theory of cognitive dissonance contends that when an individual's beliefs contradict his or her perception, mental tension — or cognitive dissonance — is created. The individual will attempt to resolve this dissonance in several ways, one of which is to accept evidence that supports his or her existing beliefs and dismiss evidence that does not. This leads to what is known as confirmation bias.

One fascinating example of confirmation bias in action came in the wake of President Donald Trump's press secretary claiming that more people watched Trump's inauguration than any other inauguration in history. Despite the video evidence and a side-by-side photo comparison of the National Mall indicating the contrary, many Trump supporters claimed that a photo depicting turnout on Jan. 20, 2017, showed a fuller crowd than it actually did because they knew it was a photo of Trump's inauguration. (Sean Spicer later clarified that he was including the television audience as well as the in-person audience, but the accuracy of that characterization is also debatable.) In other words, the Trump supporters either convinced themselves that the crowd size was larger despite observable evidence to the contrary, or they knowingly lied to support — or confirm — their bias.

The simple fact is that it does not require much convincing to deceive the human mind. For instance, multiple studies have shown that rudimentary disinformation can generate inaccurate memories in the targeted individual.
In one study, researchers were able to implant fake childhood memories in subjects by simply providing a textual description of an event that never occurred.

According to these theories, then, when it comes to whether a person believes a deep fake is real, the quality matters less than whether an individual has pre-existing biases or trusts the person who shared it. In other words, existing beliefs, not the perceived "realness" of a medium, drive whether new information is believed. And, given the diminished role that the quality of a medium plays in the believability calculus, more rudimentary methods — like using Photoshop to alter photographs — can achieve the same results as a deep fake in terms of spreading disinformation. Thus, while deep fakes present a challenge generally, deep fakes as a class of disinformation do not present an altogether new problem as far as believability is concerned.

Sharing Deep Fakes Online.

With the rise of social media and the fundamental change in how we share information, some worry that the unique characteristics of deep fakes could make them more likely to be shared online regardless of whether they deceive the target audience.

People share information — whether it be in written, picture or video form — online for many different reasons. Some may share it because it is amusing or pleasing. Others may do so because it offers partisan political advantage. Sometimes the sharer knows the information is false. Other times, the sharer does not know whether the information is accurate but simply does not care enough to correct the record.

People also tend to display a form of herd behavior in which seeing others share content drives the individual to share the content themselves. This allows disinformation to spread across larger platforms like Facebook or Twitter as the content builds up a base of sharing. The number of people who receive disinformation, then, can grow exponentially at a very rapid pace.
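That exponential growth can be made concrete with a toy branching model. This is a sketch for illustration only; the function name and parameter values are my own assumptions, not figures from any study cited here:

```python
def expected_reach(fanout=4, share_prob=0.5, generations=5):
    """Expected new recipients per sharing generation in a simple
    branching model of disinformation spread (illustrative sketch).

    Assumes each recipient reshares to `fanout` contacts with
    probability `share_prob`. The per-generation growth factor is
    fanout * share_prob, so reach grows exponentially whenever that
    product exceeds 1 and dies out when it falls below 1.
    """
    reach = [1.0]  # generation 0: the original poster
    for _ in range(generations):
        reach.append(reach[-1] * fanout * share_prob)
    return reach

# With fanout=4 and share_prob=0.5 the audience doubles every
# generation; cut share_prob to 0.1 and the same content fizzles out.
viral = expected_reach()
fizzle = expected_reach(share_prob=0.1)
```

The point of the sketch is only that the sharing probability, which the studies above tie to trust and herd behavior rather than to production quality, is what pushes the growth factor past the critical threshold.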
As the popularity of a given piece of content increases, so too does its credibility as it reaches the edges of a network, exploiting the trust that individuals have in their social networks. And even if the target audience does not believe a given deep fake, widespread propagation of the content can still cause damage; simply viewing false content can reinforce beliefs that the user already has, even if the individual knows that the content is an exaggeration or a parody.

Deep fakes, in particular, present the audience with rich sound and video that engage the viewer. A realistic deep fake that can target the user's existing beliefs and exploit his or her social ties, therefore, may spread rapidly online. But so, too, do news articles and simple image-based memes. Even without the richness of a deep fake, still images and written text can target the psychological factors that drive content-sharing online. In fact, image-based memes already spread at alarming rates due to their simplicity and the ease with which they convey information. And while herd-behavior tendencies will drive more people to share content, this applies to all forms of disinformation, not just deep fakes.

Currently, a video still represents an undeniable record of events for many people. But as this technology becomes more commonplace and the limitations of video become more apparent, the psychological factors above will drive trust and sharing. And the tactics that bad actors use to deceive will exploit these social patterns regardless of medium.

When viewed in this context, deep fakes are not some unprecedented challenge society cannot adapt to; they are simply another tool of disinformation. We should of course remain vigilant and understand that deep fakes will be used to spread disinformation. But we also need to consider that deep fakes may not live up to the hype.

Jeffrey Westling (@jeffreywestling) is a Technology and Innovation Research Associate at the R Street Institute.
|
![]() |
by Glyn Moody on (#4A3VZ)
Despite unanimous warnings from experts that it was a really bad idea, the Australian government went ahead and passed its law enabling compelled access to encrypted devices and communications. Apparently, the powers have already been used. Because of the way the Australian government rammed the legislation through without proper scrutiny, the country's Parliamentary Joint Committee on Intelligence and Security has commenced a review of the new law. That's the good news. The bad news is that Andrew Hastie, the Chair of the Committee, still thinks fairy tales are true:
|
![]() |
by Mike Masnick on (#4A3QC)
For years, we've talked about bullshit Hollywood Accounting, in which the big studios make boatloads of money on films and TV shows while declaring publicly that those works never made a dime in profit. As we've discussed, in its simplest terms, the studios set up a separate "corporation" for the film or TV project, which the studio then charges massive fees -- and the sole purpose of those fees seems to be to send all the money to the studio, while claiming that the film or TV project is "losing money" and thus they don't have to pay out any profits to the actual creative people.

Remember this the next time the MPAA goes around talking about how its mission is to "protect creators."

Over at the Hollywood Reporter, Eriq Gardner has the latest bombshell example of Hollywood Accounting, which has resulted in Fox being told by an arbitrator to pay $179 million for repeated and obviously intentional dishonesty in reporting on the profits of the TV show Bones:
|
![]() |
by Daily Deal on (#4A3QD)
Clip Studio Paint PRO, the successor to Manga Studio, is used by more than 4 million illustrators, comic artists, and creators around the world to create groundbreaking work. This new and improved software offers better specialized features for drawing comics and cartoons and has improved coloring features to offer a complete suite of creative tools. You have access to more than 10,000 free downloadable brushes, tones, 3D models and other content, and it has Photoshop integration. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
![]() |
by Mike Masnick on (#4A3HZ)
For years, now, we've been writing about the general impossibility of moderating content at scale on the internet. And, yet, lots of people keep demanding that various internet platforms "do more." Often those demands to do more come from politicians and regulators who are threatening much stricter regulations or even fines if companies fail to wave a magic wand and make "bad" content disappear. The big companies have felt compelled to staff up to show the world that they're taking this issue seriously. It's not difficult to find the headlines: Facebook pledges to double its 10,000-person safety and security staff and Google to hire thousands of moderators after outcry over YouTube abuse videos.

Most of the demands for more content moderation come from people who claim to be well-meaning, hoping to protect innocent viewers (often "think of the children!") from awful, awful content. But, of course, it also means making these thousands of employees continuously look at highly questionable, offensive, horrific or incomprehensible content for hours on end. Over the last few years, there's been quite a reasonable and growing concern about the lives of all of those content moderators. Last fall, I briefly mentioned a wonderful documentary, called The Cleaners, focused on a bunch of Facebook's contract content moderators working out of the Philippines. The film is quite powerful in showing not just how impossible a job content moderation can be, but the human impact on the individuals who do it.

Of course, there have been lots of other people raising this issue in the past as well, including articles in Inc. and Wired and Gizmodo among other places. And these are not new issues. Those last two articles are from 2014. Academics have been exploring this issue as well, led by Professor Sarah Roberts at UCLA (who even posted a piece on this issue here at Techdirt).
Last year, there was another paper at Harvard by Andrew Arsht and Daniel Etcovitch on the Human Cost of Online Content Moderation. In short, none of this is a new issue.That said, it's still somewhat shocking to read through a big report by Casey Newton at the Verge, about the "secret lives" of Facebook content moderators. Some of the stories are pretty upsetting.
|
![]() |
by Karl Bode on (#4A33D)
This week the FTC turned a few heads by announcing that the agency would be creating a new "task force" designed specifically to monitor large technology companies. According to the agency's announcement, this new task force will feature approximately 17 staff attorneys whose mandate will be to more closely examine the competitive tech landscape, and take action if necessary. According to the FTC, this includes taking a closer look at both pending mergers, and large technology mergers that have already happened.
|
![]() |
by Tim Cushing on (#4A2PD)
Pissed Consumer has uncovered more fraudulent behavior by companies hoping to scrub critical reviews from its site. The site first uncovered the use of bogus court orders to delist content -- something Eugene Volokh and Paul Levy have turned into a small-time crusade. These fraudulent court documents resulted in some genuine legal action. Questionable reputation management firms are now facing lawsuits from Pissed Consumer and the attorney general of Texas.

The latest twist in reputation management also includes forged legal documents. The stakes are a bit lower because no one's directly defrauding a court or forging a judge's signature. But the underlying tactic is still comparable: the misuse of fake legal documents to remove criticism from the internet.
|
![]() |
by Timothy Geigner on (#4A237)
Over the course of the last year or so, coverage of copyright trolling stories turned up a common movie multiple times. That film was The Hitman's Bodyguard, and the outfits contracted to push for fees via settlement letters were both prolific and devious in trying to manipulate the settlement offer amounts to achieve the highest conversion rates. Whatever the level of intelligence that goes into these operations, however, there will almost always be a misfire, with a wrong target picked in the wrong court in such a way that makes the troll look like, well, a troll.

Such appears to be the case when Bodyguard Productions went after Ernesto Mendoza in court, claiming that he downloaded the film via BitTorrent. The problem with the case is that Mendoza is both very, very insistent on his innocence and also cuts about as sympathetic a figure as one might be able to find. Mendoza is in his 70s and has end-stage cancer. When Bodyguard Productions attempted to voluntarily dismiss the case when it became clear that Mendoza wasn't going to settle, he tried to push the court to force the case to go forward so that he could recover his legal expenses. Sadly, the court refused.
|
![]() |
by Tim Cushing on (#4A1TM)
The Houston Police Department has a huge problem. A recent no-knock drug raid ended with two "suspects" killed and four officers wounded. The PD says no-knock entrances are safer for officers, not that you'd draw that conclusion from this raid.

The problem the PD has is that its drug warriors are dirty. The raid was predicated on a tip from a confidential informant who doesn't appear to exist. The warrant contained sworn statements about a heroin purchase that never happened and a large quantity of heroin packaged for sale that was not among the things seized from the dead couple's residence. The heroin central to the raid appears to have been taken from the console of an officer's squad car and run to the lab for some very unnecessary testing.

Houston police officer Gerald Goines is the person behind this completely avoidable chain of events. After initially backing his officers, Police Chief Art Acevedo has reversed course in the face of contrary evidence he's unable to ignore. His initial defense of officers who participated in a drug raid that only turned up personal use amounts of cocaine and marijuana was perhaps understandable, given his position. But it went against the image he'd made for himself as a reformer -- someone who would clean up the department and repair its reputation.

A leaked recording of Acevedo speaking to officers after the killing of an unarmed, mentally ill man seemed to make it clear there was zero tolerance for the usual cop bullshit. Acevedo criticized his officers for needlessly escalating interactions, bullying citizens for failing to show the respect officers feel is owed to them, and teaming up on post-incident paperwork to ensure most bad deeds went unpunished.

But in the three years since that recording leaked, it appears little has changed.
Officer Goines' willingness to fabricate a story to engage in a no-knock drug raid -- a narrative that included a nonexistent informant and drugs not purchased from the raided residence -- shows he had little worry of being outed by other officers, much less criticized for his lawless behavior. Here's how defense lawyer Mark Bennett phrased it after it was discovered Goines used a fictional informant and drugs from his own vehicle to craft a search warrant:
|
![]() |
by Mike Masnick on (#4A1K0)
Over the years, there have been a few attempts -- usually by companies that most of us would call patent trolls -- to argue that calling a company a patent troll is defamatory. These arguments rarely get very far, because they completely misunderstand how defamation works. However, a company with some questionable patents around bank ATMs, called ATL, tried a few years back to sue a bunch of its critics over the "patent troll" name. Thankfully, the local court in New Hampshire correctly noted that calling someone a patent troll is protected speech under the First Amendment and is not defamatory.

ATL decided it was going to keep trying. Tim Lee, over at Ars Technica, recently wrote about oral arguments in front of the New Hampshire Supreme Court in this case.
|
![]() |
by Cathy Gellis on (#4A1AF)
Ever since SESTA was a gleam in the Senate's eye, we've been warning about the harmful effects it stood to have on online speech. The law that was finally birthed, FOSTA, has lived up to its terrifying billing. So, last year, EFF and its partners brought a lawsuit on behalf of several plaintiffs -- online speakers, platforms, or both -- to challenge its constitutionality. Unfortunately, and strangely, the district court dismissed the Woodhull Freedom Foundation et al v. U.S. case for lack of standing. It reached this decision despite the chilling effects that had already been observed and thanks to a very narrow read of the law that found precision and clarity in FOSTA's language where in reality there is none. The plaintiffs then appealed, and last week I filed an amicus brief on behalf of the Copia Institute, Engine, and Reddit in support of the appeal.

The overarching point we made is that speech is chilled by fear. And FOSTA replaced the statutory protection platforms relied on to be able to confidently intermediate speech with the fear of it. Moreover, it didn't just cause platforms to have only a little bit of fear of only a little bit of legal risk: thanks to the vague and overbroad terms of the statutory language, it stoked fear of nearly unlimited scope. And not just a fear of civil liability but now also criminal liability and liability subject to the disparate statutory interpretations of every state authority.

We have often praised the statutory wisdom of Section 230 before it was modified by FOSTA. By going with an approach that was "all carrot, no stick," Congress was able to conscript platforms into meeting the basic objectives it listed in the opening sections of the statute: get the most good content online, and the least bad.
But FOSTA replaced these carrots with sticks, and left platforms, instead of incented to allow the most good speech and restrict the most bad, now afraid to do either.

As a result of this fear, platforms have become vastly less accommodating towards the speech they allow on their systems, which has led to the removal (if not outright prohibition) of plenty of good (and perfectly lawful) speech. They have also been deterred from fully policing their forums, which has led to more of the worst speech persisting online. Notably, nothing in FOSTA actually modified the stated objectives of Section 230 to get the most good and least bad expression online, yet what it did modify nevertheless made achieving either goal impossible.

And in a way that the Constitution does not allow. What we learned in Reno v. ACLU, where the Supreme Court found much of the non-Section 230 parts of the Communications Decency Act unconstitutional, is that online speech is just as protected as offline speech. Congress does not get to pass laws affecting speech in ways that don't meet the exacting standards the First Amendment requires. In particular, if speech is impacted, it can only be by a law that is narrowly tailored to the problem it is trying to solve. Yet, as fellow amici wrote, speech is indeed being affected, deliberately, and, as we've seen from the harm that has accrued, by a law poorly tailored to the problem it is ostensibly intended to solve. As we explained in our brief, FOSTA has led to this result by creating a real and palpable fear of significant liability for platforms, and thus has already driven them to make choices that have harmed online speakers and their speech.
|
![]() |
by Tim Cushing on (#4A15S)
California journalists legally obtained a document no law enforcement agency wants them to have. Naturally, the state's best friend to bad cops, Attorney General Xavier Becerra, is claiming it's illegal for these journalists to possess a document handed to them in response to a public records request.
|
![]() |
by Daily Deal on (#4A15T)
Destress, reduce distractions, and improve your relationships with MindFi, the innovative app that helps bring mindfulness into your busy life. Created by top meditation teachers and neuroscientists, MindFi helps you stay mindful with open eyes, so you can recharge wherever and whenever you want. It suggests 4 different mindfulness modes based on your local time of day. You can take a quick break with a silent breathing session, practice short, relevant meditations to make rough days better, focus on your to-dos with the Pomodoro timer, or, if you have time, decompress with 10 minutes of closed-eye meditation. MindFi is on sale for $39.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
![]() |
by Mike Masnick on (#4A113)
Here's another one in the "be careful what you wish for" category. Over the last few years, under tons of pressure from politicians and many users, various internet platforms have gotten more and more aggressive in removing content and accounts that were credibly accused of spreading disinformation and propaganda. Most people cheered over this, and you can completely understand why. But, that doesn't mean it doesn't create some consequences that might not all be good. J.A. Guerrero-Saade points out that all of this content removal can make things harder for researchers and investigators.
|
![]() |
by Karl Bode on (#4A0PE)
One of the fundamental cornerstones of disinformation and propaganda is repetition. As in, if you state something often enough, the idea gets lodged in the recipient's head and becomes truth by an act of sheer force and repetition. It's called the "illusory truth effect," and it's been essential across most of the Trump administration as it attempts to convince the public that up is down, and black is white. It's been absolutely essential at the Trump FCC, where the agency has worked tirelessly to convince the nation's gullible that kissing the ass of the biggest telecom operators is intelligent policy.

You'll of course recall that one of the FCC's key justifications for killing consumer protections like net neutrality is that the relatively modest rules stifled industry investment. Objective data from a litany of different sources has confirmed that's simply not true, including SEC filings, earnings reports, and the statements of countless industry CEOs. That hasn't stopped Ajit Pai, major telecom providers, or the litany of dollar-per-holler consultants and think tankers employed to create the illusion of widespread support for sucking up to the nation's entrenched broadband monopolies.

As required by Congress, the FCC periodically releases a Broadband Deployment Report indicating whether affordable, fast broadband is being deployed in a "reasonable and timely" fashion. This week, the FCC circulated a draft order of the initial report among Commissioners. It wasn't made public, but a statement by the agency (pdf) offered up a few choice statistics to imply that its widespread assault on basic consumer protections is having a near-miraculous impact on the telecom market. As usual, the FCC's chosen metrics are very specific:
|
![]() |
by Tim Cushing on (#4A073)
The EU Commission made a lot of noise about protecting the data of European citizens, resulting in the passage of a law that's almost impossible to avoid breaking. I guess those protections won't be extended to anyone a number of governments consider to be threats to national security. Even worse, this data will be shared with governments known for executing their critics. (h/t War On Privacy)
|
![]() |
by Glyn Moody on (#49ZMT)
News that China is extending its censorship to new domains barely provokes a yawn these days, since it's such a common occurrence. But even for those jaded by constant reports of the Chinese authorities trying to control what people see and hear, news that it is now actively censoring books written by Australian authors for Australian readers is pretty breath-taking. The Chinese government has done this before for single books whose message it disliked, but now it seems to be part of a broader, general policy:
|
![]() |
by Karl Bode on (#49ZC7)
Last year AT&T defeated the DOJ's challenge to the company's $86 billion merger with Time Warner thanks to a comically narrow reading of the markets by U.S. District Court Judge Richard Leon. At no point in his original 172-page ruling (which approved the deal without a single condition) did Leon show the faintest understanding that AT&T intends to use vertical integration synergistically with the death of net neutrality and neutered FCC oversight to dominate smaller competitors and tilt the entire internet ecosystem in its favor.

While the DOJ lost its original case, it was quick to appeal late last year, highlighting how within weeks of the deal AT&T had jacked up prices on consumers and competitors like Dish Networks, which says it was forced to pull HBO from its lineup because it could no longer afford the higher rates. Those rate hikes were directly courtesy of the huge debt AT&T incurred from both its 2015 merger with DirecTV (which eliminated a direct pay TV competitor from the market) and last year's Time Warner merger.

None of this apparently mattered to a three-judge panel from the US Court of Appeals for the DC Circuit, which ruled this morning (pdf) that AT&T's latest merger would be allowed to stand. According to the judges, the DOJ's claims that Leon failed to understand basic economic realities in the broadband and video markets were "unpersuasive." Much like the initial Leon ruling, the cornerstone of the judges' ruling centers around the idea that because there's more and more streaming competition, any anti-competitive problems from the deal would be mystically mitigated:
|
![]() |
by Leigh Beadon on (#49Z67)
It's no secret that journalism outfits are struggling, and have been for some time. There are lots of competing ideas about why this is the case, and who to blame, but the ultimate question is the same: how do we fund good journalism going forward? This week, Mike is joined on the podcast by someone whose opinions on this question differ significantly from his own — Columbia Journalism professor and former online editor-in-chief of the Guardian Emily Bell — to talk about whether journalism can survive the free market, and what the alternatives are.Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
|
![]() |
by Mike Masnick on (#49YX4)
When we last checked in with UK Parliament Member Damian Collins, he was creating fake news at a hearing he set up to scold Facebook for enabling fake news. If you don't recall, Collins held a very theatrical hearing, in which his big reveal was that Facebook had actually become aware of Russians hacking its API with billions of questionable requests back in 2014, years before anyone thought they were doing anything. Except, as became clear hours later, Collins completely misrepresented what actually happened. It wasn't Russians. It was Pinterest. And it wasn't billions of requests for data. It was millions. And it wasn't abusive or hacking. It was something going a little haywire on Pinterest's end. But, to Collins, it was a smoking gun.

It appears that that little incident has not deterred Collins from his war on Facebook, in which he's using moral panic and fear mongering over "fake news" to try to censor content he doesn't like. Recently, Collins' committee -- the Digital, Culture, Media and Sport Committee -- published its big report on fake news, in which it calls for new regulatory powers to "oversee" what content goes on sites like Facebook. With the report, Collins put out quite the bombastic comment about all of this. Here's just a snippet:
|
![]() |
by Tim Cushing on (#49YR7)
The FBI -- late to the party -- proudly announces it's the first guest to arrive. (via Axios)
|
![]() |
by Daily Deal on (#49YR8)
Slow typer? Well, no more excuses! Typesy Typing Trainer is the easy-to-use typing software engineered by real touch typing experts to help you push your 50 WPM to over 100. You'll play 16 games and activities designed to eliminate specific weaknesses or hone certain skills. It's on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
![]() |
by Mike Masnick on (#49YM2)
Last week we had a story about a bunch of Pokemon YouTubers discovering their accounts were dropped after YouTube confused their discussion of "CP" (Combat Points), thinking it might actually refer to a different "CP": child porn. The accounts were reinstated after a bit of an outcry.

It appears that this was part of a larger effort by YouTube to deal with a variety of problems on its platform (and, yes, its platform has problems). But some of the resulting stories suggest that YouTube is so focused on "demonetization" as a tool that it's losing sight of alternatives. The Pokemon story appears to have actually been part of a larger effort to respond to claims that child predators were using the comments of certain, perfectly normal videos of kids to, well, do bad stuff. The whole thing is pretty sickening and horrifying and I have no interest in going into the details.

As the controversy over this -- quite reasonably -- gained attention, some pointed out that these videos with exploitative comments were, in many cases, being monetized with big brand name ads appearing next to them. This type of complaint is... not new. People have been complaining about brand names appearing in ads next to "bad" content or "infringing" content for many years. Of course, it's pretty much all matched by algorithm, and part of the problem is that because people are gaming the system, the algorithm (and YouTube) hadn't quite caught on to what was happening. Of course, the outcry from the public -- especially about the monetization -- then resulted in advertisers pulling all their ads from YouTube. And, whether it was the advertisers leaving or the general public outcry (it was almost certainly both, but I'm sure most people will assume it was the advertisers bailing that really made the difference), YouTube went on a big new effort to pull ads from lots of videos.

And in doing so, it created a brand new controversy.
Just as this started, a mother named Jessica Ballinger complained on Twitter that YouTube had demonetized videos of her 5 year old son. YouTube responded on Twitter, noting that it was because of the comments on the video, obliquely referencing the stories discussed above.

Of course, this immediately created a new kind of backlash as people (rightfully!) pointed out that disabling monetization of a video based on the comments on that video just seems to empower and encourage trolls. Want to harm a YouTuber you don't like? Just post a sketchy comment on their videos and, boom, you can take away their money.

And, to be clear, this is not a new thing for Google. Just last month we noted that the company has a similarly silly policy with AdSense and blogs that have comments. If AdSense decides that some of your user-generated comments are "bad" they might demonetize the page that hosts those comments. As with the stories above, this is mostly to appease advertisers and avoid the sort of (slightly misguided) screaming about "big brand ad appearing next to awful comments." We found that policy to be silly in that situation, but it's even more ridiculous here.

As Dan Bull noted in response to all of this, it seems that a much simpler solution compared to demonetizing such videos is to just remove the sketchy, awful, predatory comments.
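The Pokemon "CP" misfire is a classic failure of context-free keyword matching. As a purely illustrative sketch (not YouTube's actual system, whose signals are unknown; the keyword and allowlist here are hypothetical), compare a naive filter with a context-aware variant:

```python
def naive_flag(text, keywords=("cp",)):
    """Flag any text containing a banned keyword, ignoring all context."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    return any(tok in keywords for tok in tokens)

def context_aware_flag(text, keywords=("cp",),
                       safe_contexts=("pokemon", "combat points")):
    """Only flag a keyword hit when no exonerating context is present.

    The safe-context allowlist is a hypothetical illustration of how
    surrounding context could rescue an innocent use of an abbreviation.
    """
    if not naive_flag(text, keywords):
        return False
    lowered = text.lower()
    return not any(ctx in lowered for ctx in safe_contexts)

# The naive filter flags an innocent Pokemon video; the context-aware
# version lets it through while still flagging the bare keyword.
```

Real moderation systems face the same tradeoff at vastly greater scale, where every allowlist entry is itself a new surface for gaming, which is part of why moderation at scale keeps misfiring.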
|
![]() |
by Karl Bode on (#49Y50)
Scandal after scandal after scandal has resulted in many finally realizing that the United States is likely going to have to craft at least some basic privacy guidelines moving forward. The problem: with so many justly wary of Congressional Luddites screwing it up, and so many wealthy industries lobbying jointly to try and weaken the potential guidelines, this isn't going to be a pretty process. If we come out of it with anything even closely resembling a decent privacy law for the digital age (one that doesn't make things worse), we'll be very fortunate.

But it's hard to craft any meaningful privacy rules when Congress is so grotesquely beholden to the industries it's supposed to be holding accountable. That was made pretty obvious when the telecom sector lobbied to kill some basic privacy guidelines the FCC had approved before they could even take effect back in 2017. Those rules simply required that ISPs be transparent about what data is being collected and who it would be sold to, something that could have proven extremely useful in the wake of these revelations of carriers selling your location data to any and every nitwit in America.

And as Congress begins holding public hearings to contemplate what privacy rules should look like, telecom giants like AT&T are again likely to have an outsized influence on what those laws will look like. For example, AT&T and other telecom giants will be holding a fundraiser for Senator Roger Wicker, chairman of the Senate Commerce Committee, the night before a major hearing on privacy regulations:
by Karl Bode on (#49XS6)
So, while there's really no denying that Chinese smartphone and network gearmaker Huawei engages in some clearly sketchy behavior, it's nothing that can't be matched by our own, home-grown sketchy telecom companies. And while the Trump administration has been engaged in a widespread effort to blackball Huawei gear from the American market based on allegations of spying on Americans, nobody has been able to provide a shred of public evidence that this actually occurs. At the same time, we tend to ignore the fact that the United States broke into Huawei to steal code and implant backdoors as early as 2007.

In short, this subject is more complicated than the blindly-nationalistic U.S. press coverage tends to indicate, and a not-insubstantial portion of this hand-wringing is driven by good old-fashioned protectionism.

Throughout this whole thing, Huawei executives have been right to note that in the decade-plus of these allegations and hand-wringing, you'd think some security researcher would have been able to prove that Huawei gear is spying on Americans wholesale. And last week, as news emerged that the Trump administration was finally considering a full domestic ban on using Huawei gear, our closest surveillance allies in the UK made it clear that the Huawei threat is likely being overstated by the United States:
by Glyn Moody on (#49X58)
Techdirt has been following the regrettable story of African governments imposing taxes and levies on Internet use. For example, Uganda has added a daily fee of 200 Ugandan shillings ($0.05) that its citizens must pay to access social media sites and many common Internet-based messaging and voice applications (collectively known as "OTT services"). It has also imposed a tax on mobile money transactions. When people started turning to VPNs as a way to avoid these charges, the government tried to get ISPs to ban VPN use. As we pointed out, these kinds of taxes could discourage the very people who could benefit the most from using the Internet. And in news that will surprise no one, so it has turned out, according to official data from the Uganda Communications Commission, summarized here by an article on the Quartz site:
by Tim Cushing on (#49WWC)
Last fall, the Florida Department of Corrections decided it needed to enrich itself at the literal expense of its inmates. It signed a new contract with a new provider of jailhouse entertainment, instantly making $11.3 million in purchased digital goods worthless. You don't own what you paid for, even at inflated prison prices.

The new contract with JPay rendered the purchased content unusable. Even the players purchased by inmates aren't technically theirs and must be returned to the vendor. Not that keeping the players would help much. The licensing agreements covering the content are only valid if the previous contractor is providing the service. Since it's not, the mp3s and ebooks can't be transferred to JPay devices.

Unsurprisingly, inmates are furious. So are their families -- the ones who paid extortionate rates to give their imprisoned family members a little music to enjoy. The DOC recognized this was a problem and created a portal for the filing of complaints related to this disappearance of purchased digital goods. That portal is going to be very useful in the upcoming lawsuit against the Florida DOC for screwing inmates out of their purchases. (h/t Boing Boing)
by Mike Masnick on (#49WPE)
For nearly fifteen years, we've written about how patent trolls love East Texas, and spent years building an entire industry in some small Texas towns centered around patent trolling, dragging companies from all over into east Texas to face lawsuits. Almost two years ago, we were pretty thrilled to see the Supreme Court slam the door on the most blatant jurisdiction shopping by patent trolls in the TC Heartland case. And while some of the East Texas judges have tried to come up with creative ways to reinterpret the Supreme Court's ruling, so far that hasn't worked that well.

Still, a key aspect of the TC Heartland case was that the proper venue was the judicial district where a company actually "resides," which the Supreme Court suggested is where the company was incorporated, not just where it sold products. However, in that appeals court ruling mentioned above, interpreting part of the TC Heartland ruling, the Federal Circuit noted that there "must be a physical place in the district" as one part of a larger test. For lots of companies, that's no big deal, but Apple (a company that faces more than its fair share of patent troll lawsuits) realized it had a problem: while it's certainly not incorporated or headquartered in East Texas, it does have retail stores there. Or did. The company is shutting those stores down to prevent trolls from using that retail presence to argue East Texas is an appropriate venue.

Oh, and just to put an exclamation point on the reason why it did so: it's opening a new store juuuuuuuuuuuuuuust over the border into the Northern District of Texas. Apple's not officially commenting on the reason for the closures/openings but... come on.
MacRumors created this lovely "rough visualization" that drives the point home pretty clearly:

Not only has the overreach by judges in East Texas led the big patent trolling industry they helped build up to start deflating, some of the blowback may be that lots of companies -- especially in the tech world -- will simply refuse to have any presence anywhere in East Texas, to avoid being an easy target for patent trolls.