Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-10-04 23:47
Disney Defeats Lawsuit Brought By Company Owning Evel Knievel's Rights Over 'Toy Story 4' Character
Roughly a year ago, we discussed a lawsuit brought by K&K Promotions, the company that holds the trademark and publicity rights for the late stuntman Evel Knievel, against Disney. At issue was a character in Toy Story 4 named Duke Caboom, a toy version of a motorcycle stuntman that certainly had elements of homage to Knievel. But not just Knievel, which is important. Instead, a la several lawsuits Rockstar Games has faced over characters appearing in the Grand Theft Auto series, Caboom was an amalgam of retro-era stuntmen, not a faithful depiction of any one of them, including Knievel. And, while some who worked on the film even mentioned that Knievel was one of the inspiration points for the character, they also noted that Knievel's routine, garb, and mannerisms were hardly unique for stuntmen of that era. Despite that, K&K insisted that Caboom was a clear ripoff and appropriation of Knievel.

Well, Disney moved to dismiss the case, arguing essentially the above: Duke Caboom is based on a compilation of retro-era stuntmen. And the court has now ruled, siding with Disney and dismissing the case.
Tesla 'Self-Driving' NDA Hopes To Hide The Reality Of An Unfinished Product
Hardly a day goes by without Tesla finding itself in the news for all the wrong reasons. Like last week, when Texas police sued Tesla because one of the company's vehicles, going 70 miles per hour in self-driving mode, failed to function properly and injured five officers.
Reminder: Our Techdirt Tech Policy Greenhouse Live Workshop Is Happening This Wednesday!
Over the last few weeks we've been running pieces for our latest Techdirt Greenhouse discussion on questions around content moderation at the infrastructure layer. This time we're also doing a live workshop event to go with it, in which some of the authors of the pieces will present, leading into "table discussions" among attendees to explore some of the tradeoffs and challenges regarding content moderation. This will be happening this Wednesday, October 6th, from 9am PT to 12pm PT. If you're interested in taking part, please register to attend.

We look forward to seeing you there!
Right-Wing Commentator Dan Bongino Runs Into Florida Anti-SLAPP Law, Now Owes Daily Beast $32,000 In Legal Fees
Venue selection matters, as right-wing political commentator/defamation lawsuit loser Dan Bongino is now discovering. He sued the Daily Beast over an article about his apparent expulsion from the National Rifle Association's video channel, NRATV. After trying (and failing) to get a comment from Bongino about this ouster, reporter Lachlan Markay published his article, updating it later when Bongino decided he did actually want to talk about it.
Infrastructure And Content Moderation: Challenges And Opportunities
The signs were clear right from the start: at some point, content moderation would inevitably move beyond user-generated platforms down to the infrastructure—the place where services operate the heavy machinery of the Internet, and without which user-facing services cannot function. Ever since the often-forgotten 2010 incident in which Amazon stopped hosting Wikileaks after US political pressure, there has been a steady uneasiness regarding the role infrastructure providers could end up playing in the future of content moderation.

A glimpse of what this would look like came in 2017, when companies like Cloudflare and GoDaddy took affirmative action against content they considered problematic for their business models, in this case white supremacist websites that had been the subject of massive public opprobrium. Since then, that future has become the present reality, as the list of infrastructure companies performing content moderation functions keeps growing.

Content moderation has two inherent qualities that provide important context.

First, content moderation is generally complex in real-world process design and implementation. There are a host of conflicting rights, diverse procedural norms and competing interests that come into play every time content is posted on the Internet; each case is unique and should, on some level, be treated as such.

Second, content moderation is messy because the world is messy: the global nature of the Internet, economies of scale, societal realities and cultural differences create a multi-layered set of considerations that are difficult to reconcile.

The bright spot in all this messiness and complexity is the hope of due process and the rule of law. The theory is that, in healthy and competitive markets, users have choice, and it therefore becomes more difficult for any mistakes to scale. So, if a user’s post gets deleted on one platform, the user should have the option of posting it someplace else.

Of course, such markets are difficult to accomplish, and the current Internet market is certainly not in this category. But the point here is that it is one thing to have one of your postings removed from Facebook, and another to go completely offline because Cloudflare stops providing you its services. The stakes are completely different.

For a long time, infrastructure providers were smart enough to stay out of the content business. The argument was that the actors responsible for the pipes of the Internet should not concern themselves with the kind of water that runs through them. Their agnosticism was encouraged because their main focus was to provide other services, including security, network reliability and performance.

However, as the Internet evolved, so did the infrastructure providers’ relationship with content.

In the early days of content moderation, what constituted infrastructure was more discernible and structured. People would usually refer to the Open Systems Interconnection (OSI) model as a useful analogy, especially with policy makers who were trying to identify the roles and responsibilities various companies held in the Internet ecosystem.

The Internet of today, however, is very different. The layers of the Internet are no longer distinguishable and, in many cases, participating actors are not operating at just the infrastructure or the application layer.
At the same time, as applications on the Internet gained in popularity and use, innovation started moving upstream. “Infrastructure” is now nested on top of other “infrastructure,” all within just layer 7 of the OSI stack. Things are not as clear-cut (a toy sketch of this nesting appears below).

This suggests that, in some ways, we should not be surprised that content moderation conversations are gradually moving downstream. A cloud provider that supports a host of different websites, platforms, news outlets or businesses will inevitably have to deal with issues of content. A content delivery network (CDN) will unquestionably face, at some point, the moral dilemma of providing its services to businesses that walk a tightrope with harmful or even illegal content. It really comes down to a simple equation: if user-generated platforms don’t do their job, infrastructure providers will have to do it for them. And they do. Increasingly often.

If this is the reality, the question becomes: what is the best way for infrastructure providers to do moderation, considering current practices of content moderation, the significant chilling effects, and the often-missed trade-offs?

If we are to follow the “framework, tools, principles” triad, we should be mindful not to reinvent any existing ecosystem. Content moderation is not new and, over the years, a combination of laws and self-regulatory norms has ensured a relatively consistent, predictable and stable environment—at least most of the time. Section 230 of the CDA in the US, the eCommerce Directive in Europe, Marco Civil in Brazil and other laws around the world have succeeded in creating a space where users and businesses can manage their affairs effectively and know that judicial authorities will treat their cases equally.

For content moderation at the infrastructure level, a framework based on certainty and consistency is even more of a priority. Legal theory instructs that lack of consistency can stunt the development of norms, or undermine the way existing ones manifest themselves. In a similar vein, lack of certainty means the inability to organize oneself in a way that complies with the law. For infrastructure providers that support the basic, day-to-day functions of the Internet, such a framework becomes indispensable.

I often say that the Internet is not a monolith. This is not only to demonstrate that the Internet was never meant to perform one single thing, but also to show the importance of designing legal frameworks that are similarly flexible. When we talk about predictability and certainty, we must be conscious of putting in place requirements of clarity, stability and intelligibility, so that participating actors can make calculated and informed decisions about the legal consequences of their actions. That’s what made Section 230 a success for more than two decades.

Frameworks without appropriate tools to implement and assess them, however, mean little. Tools are important because they can help maximize the benefits of processes, ultimately increasing flexibility, reducing complexity, and ensuring clarity. Content moderation has consistently suffered from a lack of tools that could clearly exhibit the effects of moderation. Think, for instance, of all those times content is taken down and there is no way to say what the true effect is on free speech and on users.

In this context, we need to think of tools as things that would allow us to better understand the scale and chilling effect of content moderation at the infrastructure level.
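To make the layer-7 nesting point concrete, here is a toy sketch of a modern web request path. The roles below are illustrative assumptions, not real products or any particular deployment; the point is simply that every hop speaks HTTP and can independently refuse service:

```python
# Toy model of a modern web request path (roles are illustrative
# assumptions, not real products). Every hop below speaks HTTP, so all
# of this "infrastructure" lives at OSI layer 7: application-layer
# services stacked on other application-layer services.

request_path = [
    "DNS provider",           # application-layer service in its own right
    "CDN edge",               # an HTTP proxy in front of everything
    "DDoS/WAF filter",        # another HTTP proxy
    "cloud hosting platform", # "infrastructure" built on other clouds
    "user-facing platform",   # the only hop users think of as "the platform"
]

# Each hop can refuse service independently, so each one is a potential
# content moderation chokepoint, even though only the last one is what
# users would call a platform.
for hop in request_path:
    print(f"layer-7 hop, possible moderation point: {hop}")
```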
Here is what I wrote about this last year:
There May Be A New Boss At The DOJ, But The Government Still Loves Its Indefinite Gag Orders
Despite the DOJ recently drawing heat for its targeting of journalists during internal leak investigations, a lot still hasn't changed about the way the feds handle demands for data. Over the past couple of decades, the DOJ and its components have been asking for and obtaining data from service providers, utilizing subpoenas and National Security Letters that come with indefinite gag orders attached.

These orders swear recipients like Microsoft and Google to secrecy, forbidding them from notifying targeted customers and users. (Even Techdirt has been hit with one.) Unlike regular search warrants, where the target is made aware of the rummaging by the physical presence of law enforcement officers, these warrants, subpoenas, and NSLs allow the government to go about its rummaging unnoticed.

Reforms to surveillance powers under the USA Freedom Act have at least forced the government to perform periodic reviews of ongoing gag orders. They have also given companies a way to challenge gag orders and demands for data, but that's only useful if the companies have some idea who is being targeted. As this report on the ongoing abuse of gag orders by Jay Greene and Drew Harwell for the Washington Post points out, it's not always clear who the government is seeking information about. (Alternative link here.)
Daily Deal: TREBLAB Z2 Bluetooth 5.0 Noise-Cancelling Headphones
The Z2 headphones earned their name because they feature twice the sound, twice the battery life, and twice the convenience of competing headphones. This updated version of the original Z2s comes with a new all-black design and Bluetooth 5.0. Packed with TREBLAB's most advanced Sound2.0 technology with aptX and T-Quiet active noise cancellation, these headphones deliver goosebump-inducing audio while drowning out unwanted background noise. These headphones are on sale for $79.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
In Josh Hawley's World, People Should Be Able To Sue Facebook Both For Taking Down Stuff They Don't Like AND Leaving Up Stuff They Don't Like
Last year, Josh Hawley introduced one of his many, many pathetic attempts at changing Section 230. That bill, the "Limiting Section 230 Immunity to Good Samaritans Act," would create a private right of action allowing individuals to sue any social media company, and seek a payout, if they were unhappy that some of their content was removed. The obvious implication, as with a ton of bad faith claims by populists who pretend to be "conservative," is that websites shouldn't do any moderation at all.

However, this week Hawley introduced another bill to attack Facebook and to create another private right of action against basically any website -- except this time the private right of action is for anyone who feels their "mental health" was harmed by content on that website. Contrary to what the Hawley-loving propagandist rag The Daily Caller falsely claims, this bill doesn't actually "amend" Section 230; it simply uses the definition of an interactive computer service from 230, and introduces a weird new liability regime that is in total conflict with 230 (and with Hawley's previous bill -- but when you're culture warrioring and trying to be the face of the new insurrectionists, who has time for little things like consistency?). The Federal Big Tech Tort Act is a bunch of silly performative nonsense.

It used to be that Republicans were the party dead set against opening up new private rights of action and giving tort lawyers new ways to drag people and companies into court. No longer, I guess. Amusingly, Hawley's bill shares its DNA with Senator Amy Klobuchar's equally silly bill to hold social media companies liable for misinformation. The key part in the Hawley bill:
South Korean ISP Somehow Thinks Netflix Owes It Money Because Squid Game Is Popular
We've noted for a while how the world's telecom executives have a fairly entrenched entitlement mindset. As in, they often tend to jealously eye streaming and online ad revenues, and assume they're inherently owed a cut of those revenues just because at some point the traffic traveled on their networks. You saw this hubris at play in AT&T's claims that "big tech" gets a "free ride" on its network, insisting that companies like Google should pay significant additional troll tolls "just because" (which triggered the entire net neutrality fight in the States).

AT&T pretty solidly established this entitlement mindset domestically, and I've watched it slowly be exported overseas. Like this week in South Korea, where broadband provider SK Broadband sued Netflix simply because Netflix's new TV show, Squid Game, is popular. Basically, the lawsuit argues that because the show is so popular and is driving a surge in bandwidth consumption among South Koreans watching it, Netflix is somehow obligated to pay the ISP more money:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is sumgai with a comment about the disastrous new bill regulating online commerce:
This Week In Techdirt History: September 26th - October 2nd
Five Years Ago

This week in 2016, we looked at how the internet of things was fueling an unprecedented rise in DDoS attacks, while the DHS was offering its unsolicited (and likely unhelpful) assistance in securing it, and we also learned more about the likely reason the NSA's trove of hacking tools was discovered and published. The CFAA emerged at the center of a political dispute, the California Supreme Court agreed to hear an important Section 230 case, and the DOJ decided that copyright infringement could be grounds for deportation, while the RIAA was going around acting as though SOPA had passed, even though it hadn't. Also, in an extremely silly move, four state AGs filed a lawsuit to block the IANA transition, which was quickly tossed out by a judge.

Ten Years Ago

This week in 2011, the Senate let the copyright lobby set up shop in the Senate building during the PROTECT IP debate, the House version of the bill added a provision covering cyberlockers, and an "analyst" from Disney was cheerleading for the bill. Canadian politicians were pushing for their own terrible copyright reform law, while we looked at how the EU's copyright extension was harming classical music. Multiple countries were getting ready to sign ACTA over the weekend, until it turned out that some weren't actually going to do it, even as the US planned to use its signing statement to defend the unconstitutional aspects of the agreement. Meanwhile, Righthaven suffered another huge loss, and continued trying to avoid paying legal fees, though it only succeeded in getting a brief reprieve.

Fifteen Years Ago

This week in 2006, the fight between Google and European newspapers continued with the papers trying to reinvent robots.txt, new companies were trying to find a way to charge money for social media, and we wondered if it was possible to see the actual FCC data on broadband penetration. Microsoft was going after the anonymous person who cracked its copy protection system, the MPAA was touting its bizarre use of DVD-sniffing dogs, and Hollywood was raising the stakes in its claims of damages from piracy. Meanwhile, a judge sadly agreed with the RIAA that Morpheus had induced infringement, while Limewire hit back hard against the RIAA with a lawsuit alleging antitrust violations and consumer fraud.
PS4 Battery Time-Keeping Time-Bomb Silently Patched By Sony; PS3 Consoles Still Waiting
Over the past several months, there have been a couple of stories that certainly had owners of Sony PlayStation 4 and PlayStation 3 consoles completely wigging out. First came Sony's announcement that it was going to shut down support for the PlayStation Store and PlayStation Network on those two consoles. This briefly freaked everyone out, the thinking being that digitally purchased games would be disappeared. Sony confirmed that wouldn't be the case, but there was still the question of game and art preservation, given that no new purchases would be allowed and that in-game purchases and DLC wouldn't be spared for those who bought them. As a result of the outcry, Sony reversed course for both consoles specifically for access to the PlayStation Store, nullifying the debate.

Except that immediately afterward came word of an issue with the PS3 and PS4 consoles' internal clock batteries and the way those consoles check in with the PlayStation Network (PSN) to allow users to play digital or physical game media. With the PSN still sunsetting on those consoles, a console with a dead battery would be unable to check in, essentially rendering it, and all the games a user owned, worthless and unplayable.

But now that too has been corrected by Sony, albeit in a completely unannounced fashion.
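For those wondering how a dead clock battery could brick an entire game library, the reported dependency chain works roughly like this (a simplified sketch of the failure mode, not Sony's actual firmware logic):

```python
# Simplified model of the reported PS4/PS3 "CBOMB" failure mode.
# This is an illustrative assumption about the dependency chain, not
# Sony's firmware code: license/trophy timestamp checks need a trusted
# clock, and a dead CMOS battery leaves PSN as the only time source.

def can_play_game(cmos_battery_ok: bool, psn_reachable: bool) -> bool:
    if cmos_battery_ok:
        # The internal clock is still trusted; timestamps verify offline.
        return True
    if psn_reachable:
        # The clock can be resynced against PSN after a battery swap.
        return True
    # Dead battery plus a sunset PSN means no trusted time source, so
    # the timestamp check fails and games refuse to launch.
    return False

# Before the silent fix: dead battery + no PSN = bricked library.
print(can_play_game(cmos_battery_ok=False, psn_reachable=False))  # False
```

Sony's unannounced update presumably removes that PSN dependency on the PS4; the PS3, as the headline notes, is still waiting.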
Top Publishers Aim To Own The Entire Academic Research Publishing Stack; Here's How To Stop That Happening
Techdirt's coverage of open access -- the idea that the fruits of publicly-funded scholarship should be freely available to all -- shows that the results so far have been mixed. On the one hand, many journals have moved to an open access model. On the other, the overall subscription costs for academic institutions have not gone down, and neither have the excessive profit margins of academic publishers. Having largely fended off this attempt to re-invent the way academic work is disseminated, publishers want more. In particular, they want more money and more power. In an important new paper, a group of researchers warn that these companies now aim to own the entire academic publishing stack:
Tampa Bay PD's 'Crime-Free Housing' Program Disproportionately Targeted Black Residents, Did Nothing To Reduce Crime
It looks like landlords in Florida want to get back to things that made this country great: bigotry, segregation, and oppression. And look who's willing to pitch in! Why, it's that old friend of racists, local law enforcement. (h/t WarOnPrivacy)
Against 'Content Moderation' And The Concentration Of Power
Content moderation frameworks and toothless oversight boards legitimize the concentration of power in the hands of infrastructure providers and platforms. This gives them, and not democratic processes and structures, the discretion to egregiously shape the public debate.

In 1964 Marshall McLuhan wrote that content is a “juicy piece of meat carried by the burglar to distract the watchdog of the mind” (McLuhan 2013). I will argue that today this is more true than ever. If we want to solve the issue of human-rights-violating content, we will need to look at the structures that allow for its production. Therefore, I will argue that “content” is a false category, and that infrastructure is often misunderstood as a largely material object whereas it is a complex assemblage of people, practices, institutions, cultures, and devices. To address the false premises on which the concept of “infrastructural content moderation” is based, I propose an analytical framework that does not separate the context from the content but rather offers an integrative approach to addressing online discourse production.

Aristotle famously wrote that there is no matter without form and no form without matter. Similarly, Bergson said that color does not exist as an abstract category, but only as a quality of a substance. The same holds true for content on the Internet. A Facebook post is something different than a post on TikTok, a blog post, a tweet, or a YouTube comment. One understands these messages differently, just as one understands a sentence spoken in a comedy club differently than one spoken in parliament, and a sentence uttered in a forest differently than one in a theater.

It has taken centuries for legal and social rules for public and private spaces to develop. The Internet is a relatively new space that is, practically speaking, largely private, but feels like the world's largest public space. It will take time for rules to sediment for this space. In developing new rules, one should commence the interrogation of different possibilities with a simple question: Cui bono? Who profits?

Julie Cohen describes in her book ‘Between Truth and Power’ how the shift in the image of the Internet from an “electronic superhighway” to a “cloud” should by no means be taken lightly. At least a highway has rules; a cloud has none. In the image the Internet infrastructure industry has shown us, the Internet infrastructure is a given: a modular space on which things can be built, a neutral platform for economic growth and development that would only suffer from regulation.

But Keller Easterling explains that “infrastructure sets the invisible rules that govern the spaces of our everyday lives” and that “changes to the globalising world are being written, not in the language of law and diplomacy, but rather in the language of infrastructure.” She describes the practice of developing, implementing, and operating these infrastructures as “extrastatecraft,” because these powers used to belong to nation states but are now taken up by transnational corporations.

The development, standardization, and implementation of Internet infrastructure is inherently political. Janet Abbate in particular notes that “the debate over network protocols illustrates how standards can be politics by other means.”
DeNardis’ 2014 book ‘Protocol Politics’ furthers Abbate's work, showcasing how “debates over protocols bring to light unspoken conflicts of interest.” And while DeNardis’ work focuses mostly on Internet protocols, she does emphasize that “politics are not external to technical architecture.”

When we look at the infrastructure that undergirds the exchange of discourse, we should not see it as a neutral foundation for platforms and services, but rather as a shaping force with both direct and indirect power. This shaping power sets the rules for everything that happens on top of it, which is more influential than the haphazard removal of a particular user or group. And it is deeply entrenched in the standardization and governance bodies where the Internet infrastructure is produced.

Upon interrogation of these standards and governance bodies, one cannot help but notice, as the research by Corinne Cath-Speth shows, that they can be characterized by a laissez-faire approach to technology development, and that they defy any strong accountability measures. This culture is characterized by a libertarian, American, masculine approach that values individualism. It is exactly these qualities that perpetuate the idea that regulation will “break the Internet” and that individual choice and responsibility are the only way forward for the Internet infrastructure.

This attitude is deeply ironic, because for the first half of its existence the Internet was heavily funded by states, and the second half has been characterized by oligopolies. However, this sense of individual engineering pride keeps the status quo intact, which means the continuous exclusion of those who do not want to succumb to this culture, mostly women, people of color, and those from outside of Europe and the United States. This in turn strengthens a network topology that reinforces power structures of dominance and extraction based in the United States and Europe. Submarine cables now cover the whole world, but network traffic still largely centers on Europe and the United States, producing maps that very much resemble those of colonial trade routes.

The Internet infrastructure and its standardization and governance regime exist to increase interconnection between transnational corporations, largely based in the United States and Europe. Expanding the data flows to and through these networks is what these networks and their governance are optimized for. This has transformed the Internet from a medium of connection to a medium of extraction. Solely focusing on the outgrowths of this culture and regime through content moderation would be naive at best, and legitimizing of an extractive practice at worst.

Reflections on the practice of content moderation should not focus solely on the content that should, or should not, be moderated, but rather on the structures that incentivize and perpetuate such speech. It is the responsibility of communication infrastructure providers to meaningfully engage with the human rights impact of their actions, and with their chain responsibility. Thus far, hardly any Internet infrastructure provider has done so sufficiently.
The industry’s lack of meaningful adoption and integration of the United Nations Guiding Principles on Business and Human Rights is reminiscent of the tobacco industry’s opposition to health codes, and its lobbying budgets reflect the same fear of regulation.

Civil society should not be afraid to present strong alternative network ideologies that rely on free association and self-determination by end users. The priority of the networking and content provision industry should be to address problems of inequity and inequality, not to extract more private data to be sold to advertising and surveillance companies (which are, in any case, based on flawed premises). The Internet is the public square of the world; we should reimagine it as one. This means that the strongest actors should live up to their responsibilities, not wait for civil society to organize itself, demand accountability, and fix their problems for them. Here we can only refer back to Spider-Man: with great power comes great responsibility. It is high time the Internet infrastructure sector lived up to that.

Niels ten Oever is a postdoctoral researcher with the ‘Making the hidden visible’ project at the Media Studies department of the University of Amsterdam.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
Ken 'Popehat' White (Again) Shows How To Respond To A Completely Thuggish Legal Threat Letter
It's been a while since we've seen a really good response letter to a -- as Ken White likes to call them -- "bumptious" legal threat letter. But here we've got one, courtesy of Ken himself, representing Chad Loder. Loder is a writer who has been calling out propagandist Andy Ngo and The Post Millennial, a rag that Ngo sometimes writes for. The Post Millennial was apparently sad about that and sent Loder a very silly legal threat:
Daily Deal: Meshforce M3 Mesh Wi-Fi System
Replace your old wireless router with the new Meshforce M3 Mesh Wi-Fi System. This flexible dual-band M3 system supports up to 60 devices, and its Wi-Fi coverage provides a seamless connection for up to 4,500 sq. ft., from your living room to your garage. It's easy to expand coverage by plugging in more dots for a better Wi-Fi experience everywhere, and the system supports up to 6 dots to build a Wi-Fi system for any home size. Use the My Mesh app to complete the setup of your mesh Wi-Fi system in less than 15 minutes, and manage your connections and guest network right on your iOS or Android mobile devices, at home or remotely, anytime and anyplace. Get one M3 hub and two M3 dots for $146, and use the coupon code VIP40 to get an additional 40% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Blumenthal's Finsta Debacle: It Remains Unacceptable That Our Politicians Are So Clueless About The Internet
Fifteen years ago, the best example of how out of touch elected officials were regarding the internet was Senator Ted Stevens' infamous "it's a series of tubes" speech (which started out: "I just the other day got, an internet was sent by my staff at 10 o'clock in the morning on Friday and I just got it yesterday."). Over the years, this unwillingness of those who put themselves in the position of regulating the internet to actually bother to understand it has become something of an unfortunate running joke. A decade ago, in the midst of the fight over SOPA/PIPA, we pointed out that it's no longer okay for Congress to not know how the internet works. And yet, a decade has passed and things have not gotten much better. Senator Ron Johnson tried to compare the internet to a bridge into a small creek. Senator Orrin Hatch has no clue how Facebook makes money.

And now there's a new addition to the list of examples of totally clueless Senators seeking to regulate something they clearly don't understand. This time it's Senator Richard Blumenthal, who has been grandstanding about how he wants to take on the internet since long before he was elected to the Senate. He created the most cringe-worthy media clip of a politician in quite a while when trying to press Facebook's head of safety, Antigone Davis, during a Senate hearing on "grandstanding about how we all hate Facebook" (not the actual subject matter, but close enough).
The 'Digital Divide' Didn't Just Show Up One Day. It's The Direct Result Of Telecom Monopolization
We've noted for a while that the entirety of DC has a blind spot when it comes to discussing the U.S. broadband problem. As in, U.S. broadband is plagued by regional monopolies that literally pay Congress to pretend the problem isn't happening. That's not an opinion. U.S. broadband is slow, expensive, and patchy, with terrible customer service, due to two clear things: regional monopolization (aka market failure) and state and federal regulatory capture (aka corruption). That the telecom industry employs an entire cottage industry of think tankers, consultants, and policy wonks to pretend this isn't true doesn't change reality.

But notice that when regulators, politicians, and many news outlets discuss the problem, it's usually framed in a nebulous, causation-free way. About 90% of the time, the problem is dubbed the "digital divide," but the cause of that divide is left utterly unexamined. It's almost pathological. Seriously, look at any news story about the "digital divide" in the last three months and try to find one that clearly points out that the direct cause of the problem is regional telecom monopolies and the corruption that protects them. You won't find it.

This phenomenon showed up again this week in a CNET interview with Jessica Rosenworcel, who appears to be the top candidate in the Biden Administration's glacial pursuit of a permanent FCC boss. In the article, CNET talks repeatedly about the U.S. broadband problem without once mentioning that telecom monopolies exist, and are the primary reason U.S. broadband is painfully mediocre:
Facebook: Amplifying The Good Or The Bad? It's Getting Ugly
When the New York Times reported on Facebook’s plan to improve its reputation, the fact that the initiative was called “Project Amplify” wasn’t a surprise. “Amplification” is at the core of the Facebook brand, and “amplify the good” is a central concept in its PR playbook.

Amplify the good

Mark Zuckerberg initiated this talking point in 2018. “I think that we have a clear responsibility to make sure that the good is amplified and to do everything we can to mitigate the bad,” he said after the Russian election meddling and the killings in Myanmar. Then other Facebook executives adopted this notion, regardless of the issue at hand. The best example is Adam Mosseri, Head of Instagram.

In July 2019, addressing online bullying, Mosseri said: “Technology isn’t inherently good or bad in the first place …. And social media, as a type of technology, is often an amplifier. It’s on us to make sure we’re amplifying the good and not amplifying the bad.”

In January 2021, after the January 6 Capitol attack, Mosseri said: “Social media isn’t good or bad, like any technology, it just is. But social media is specifically a great amplifier. It can amplify good and bad. It’s our responsibility to make sure that we amplify more good and less bad.”

In September 2021, after a week of exposés about Facebook by the WSJ (The Facebook Files), Mosseri was assigned to defend the company once again. “When you connect people, whether it’s online or offline, good things can happen and bad things can happen,” he said in his opening statement. “I think that what is important is that the industry as a whole tries to understand both those positive and negative outcomes, and do all they can to magnify the positive and to identify and address the negative outcomes.”

Mosseri clearly uses the same messaging document, but Facebook’s PR template contains more talking points. Facebook also asserts that there have always been bad people and behaviors, and that today's connectivity simply makes them more visible.

A mirror for the ugly

According to the “visibility” narrative, tech platforms simply reflect the beauty and ugliness in the world. Thus, social media is sometimes a cesspool because humanity is sometimes a cesspool. Mark Zuckerberg has addressed this issue several times, with the main message that it is just human nature. Nick Clegg, VP of Global Affairs and Communications, has repeatedly shared the same mindset. “When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society,” he wrote in 2020. “With more than 3 billion people using Facebook’s apps every month, everything that is good, bad, misogynist and ugly in our societies will find expression on our platform.”

“Social media broadly, and messaging apps and technology, are a reflection of humanity,” Adam Mosseri repeated. “We communicated offline, and all of a sudden, now we’re also communicating online. Because we’re communicating online, we can see some of the ugly things we missed before. Some of the great and wonderful things, too.”

This “mirror of society” statement has been criticized for being intentionally uncomplicated, because the ability to shape, not merely reflect, people’s preferences and behavior is also how Facebook makes money. Therefore, despite Facebook’s recurring statements, it is accused not of reflecting the bad and ugly but of increasing it.

Amplify the bad

“These platforms aren’t simply pointing out the existence of these dark corners of humanity,” John Paczkowski of BuzzFeed News told me.
“They are amplifying them and broadcasting them. That is different.”

After an accumulation of deadly events, such as the Christchurch shooting, Kara Swisher wrote about amplified hate and “murderous intent that leaps off the screen and into real life.” She argued that “While this kind of hate has indeed littered the annals of human history since its beginnings, technology has amplified it in a way that has been truly destructive.”

It is believed that bad behavior (e.g., disinformation) is induced by the way tech platforms are designed to maximize engagement. Thus, Facebook’s victim-centric approach refuses to acknowledge that perhaps bad actors don’t misuse its platform but rather use it as intended (a “machine for virality”).

Ev Williams, the co-founder of Blogger, Twitter, and Medium, said he now believes that he failed to appreciate the risks of putting such powerful tools in users’ hands with minimal oversight. “One of the things we’ve seen in the past few years is that technology doesn’t just accelerate and amplify human behavior,” he wrote. “It creates feedback loops that can fundamentally change the nature of how people interact and societies move (in ways that probably none of us predicted).”

So, things turned toxic in ways that tech founders didn’t predict. Should they have foreseen them? According to Mark Zuckerberg, an era of tech optimism led to unintended consequences. “For the first decade, we really focused on all the good that connecting people brings … But it’s clear now that we didn’t do enough,” he said after the Cambridge Analytica scandal. He admitted they didn’t think through “how people could use these tools to do harm as well.” Several years after the Techlash coverage began, there’s a consensus that they needed to “do more” to purposefully deny bad actors the ability to abuse these tools.

One of the reasons this was (and still is) a challenging task is scale. According to this theme, growth-at-all-costs “blinded” the platforms, and they became too big to be successfully managed at all. Due to their bigness, they are always in a game of cat-and-mouse with bad actors. “When you have hundreds of millions of users, it is impossible to keep track of all the ways they are using and abusing your systems,” Casey Newton, of the Platformer newsletter, explained in an interview. “They are always playing catch-up with their own messes.”

Due to the unprecedented scale at which Facebook operates, it is dependent on algorithms. It then claims that any perceived errors result from “algorithms that need tweaking” or “artificial intelligence that needs more training data.” But is it just an automation issue? It depends on who you ask.

The algorithms’ fault vs. the people who build them or use them

Critics say that machines are only as good as the rules built into them. “Google, Twitter, and Facebook have all regularly shifted the blame to algorithms, but companies write the algorithms, making them responsible for what they churn out.” And platforms do tend to avoid this responsibility: when ProPublica revealed that Facebook’s algorithms allowed advertisers to target users interested in “How to burn Jews” or “History of why Jews ruin the world,” Facebook’s response was that the anti-Semitic categories were created by an algorithm rather than by people.

At the same time, Facebook‘s Nick Clegg argued that human agency should not be removed from the equation.
In a post titled “You and the Algorithm: It Takes Two to Tango,” he criticized the dystopian depictions of their algorithms, in which “people are portrayed as powerless victims, robbed of their free will,” as if “Humans have become the playthings of manipulative algorithmic systems.”

“Consider, for example, the presence of bad and polarizing content on private messaging apps (iMessage, Signal, Telegram, WhatsApp) used by billions of people around the world. None of those apps deploy content or ranking algorithms. It’s just humans talking to humans without any machine getting in the way,” Clegg wrote. “In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.”

Fixing the machine vs. the underlying societal problems

Nonetheless, there are various attempts to fix the “broken machine,” and some potential fixes are discussed more often than others. One of the loudest calls is for tougher regulation: legislation should be passed to implement reforms. Yet many remain pessimistic about the prospects for policy rules and oversight, because regulators tend not to keep pace with tech developments. Also, there’s no silver-bullet solution, and most of the recent proposals are overly simplistic.

“Fixing Silicon Valley’s problems requires a scalpel, not an axe,” said Dylan Byers. However, tech platforms are faced with a new ecosystem of opposition, including Democrats and Republicans, antitrust theorists, privacy advocates, and European regulators. They all carry axes.

For instance, there are many new proposals to amend Section 230 of the Communications Decency Act. But, as Casey Newton noted, “it won’t fix our politics, or our broken media, or our online discourse, and it’s disingenuous for politicians to suggest that it would.”

When self-regulation is proposed, there is an inherent commercial conflict, since platforms are in the business of making money for their shareholders. Facebook has only acted after problems escalated and caused real damage. For example, only after the mob violence in India (another problem that existed before WhatsApp, and may have been amplified by the app) did the company institute rules to limit WhatsApp’s “virality.” Other algorithms have been altered to keep conspiracy theories and their groups from being highly recommended.

Restoring more human control requires different remedies: from decentralization projects, which seek to shift the ownership of personal data away from Big Tech and back toward users, to media literacy efforts, which seek to formally educate people of all ages about the way tech systems function, as well as encourage appropriate, healthy uses.

The proposed solutions could certainly be helpful, and they all should be pursued. Unfortunately, they are unlikely to be adequate. We will probably have an easier time fixing algorithms, or the design of our technology, than we will have fixing society; humanity has to deal with humanity’s problems.

Techdirt’s Mike Masnick recently addressed the underlying societal problems that need fixing. “What we see, what Facebook and other social media have exposed, is often the consequences of huge societal failings.” He mentioned various problems with education, social safety nets, healthcare (especially mental healthcare), income inequality and corruption.
Masnick concluded that we should be trying to come up with better solutions for those issues rather than “insisting that Facebook can make it all go away if only they had a better algorithm or better employees.”

We saw this with COVID-19 disinformation. After President Joe Biden blamed Facebook for “killing people,” and Facebook responded by saying it was “helping save lives,” I argued that this dichotomous debate sucks. Charlie Warzel called it (in his Galaxy Brain newsletter) “an unproductive, false binary of a conversation,” and he is absolutely right. Complex issues deserve far more nuance.

I can’t think of a more complex issue than tech platforms’ impact on society in general, and Facebook’s impact in particular. However, we seem to be stuck between the storylines discussed above, of “amplifying the good vs. the bad.” It is as if you can only think favorably or negatively about “the machine,” and you must pick a side and adhere to its intensified narrative.

Keeping to a single narrative can escalate rhetoric and create an insufficient discussion, as evidenced by a recent Mother Jones article, “Why Facebook won’t stop pushing propaganda.” The piece describes how a woman tried to become Montevallo’s first black mayor and lost. Montevallo is a very small town in Alabama (7,000 people) whose population is two-thirds white. Her loss was blamed on Facebook: the rampant misinformation and rumors about her affected the voting.

While we can’t know what got people to vote one way or another, we should consider that racism has been prevalent in places like Alabama for a long time. Facebook was the candidate's primary tool for her campaign, highlighting the good things about her historic nomination. Then racism was amplified in Facebook’s local groups. In the article, the fault was centered on algorithmic amplification, on Facebook's “amplification of the bad.” Facebook’s argument that it only “reflects the ugly” does not hold up here if the platform also makes that ugliness more robust. Yet the root cause in this case remains the same: racism. Facebook “doing better” and amending its algorithms will not be enough unless we also address the source of the problem. We can and should “do better” as well.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication.
Copyright Continues To Be Abused To Censor Critics By Entities Both Big And Small
We've talked far too many times about how the DMCA takedown processes across internet industries, as they stand, are wide, wide open for abuse. From churches wielding copyright to attempt to silence critics engaging in protected speech, to lawyers using copyright to try to silence critics engaging in protected speech, to freaking political candidates abusing YouTube's DMCA notice process to silence critics engaging in protected speech... well, you get the idea. The point is that we've known for a long, long time that the current method by which the country and companies enforce copyright law tilts so heavily toward the accuser that it's an obvious avenue for misuse.

And this is an issue created by bad actors big and small. Hell, apparently you cannot even critique a sophomoric prank-joke troupe on YouTube without being targeted using copyright law.
Survey Confirms Los Angeles Sheriff's Department Is Still Home To Dangerous Gangs, Has No Solid Plan To Eliminate Them
The Los Angeles Police Department has spent years compiling a "gang database." The term "compile" is used loosely, because the LAPD decides people are gang members just because they know gang members, or are related to them, or live in the same buildings, or work near them, or pass through gang-controlled neighborhoods, or go to school with gang members, or just (as non-gang people are wont to do) wear clothes, shoes, and hats. It's ridiculous.

And when that's not "inclusive" enough, LAPD officers fake it, falsifying records to justify unjustifiable stops and searches, something that ultimately resulted in criminal charges against three officers. But even with this wealth of bogus and barely supported information, the gang database (CALGANG) still has one glaring omission: the Los Angeles Sheriff's Department.
Social Media Regulation In African Countries Will Require More Than International Human Rights Law
There has been a lot of focus on moderation as carried out by platforms—the rules on which social media companies base their decisions about what content remains online. There has, however, been limited attention on how actors other than social media platforms, in this case governments, seek to regulate these platforms.

African governments carry out this regulation primarily through laws, which can be broadly divided into two categories: direct and indirect regulatory laws. The direct regulatory laws can be seen in countries like Ethiopia and, more recently, Nigeria. They are similar to Germany’s Network Enforcement Act and France’s online hate speech law, which directly place responsibilities on platforms, require them to remove online hate speech within a specific time, and attach heavy sanctions to failure to do so.

Section 8 of Ethiopia’s Hate Speech and Disinformation Prevention and Suppression Proclamation 1185/2020 provides for various responsibilities for social media platforms and actors. These responsibilities include the suppression and prevention of disinformation and hate speech content by social media platforms, and a twenty-four-hour window within which such content must be removed from their platforms. It also provides that they should bring their policies in line with the first two responsibilities.

The Proclamation further vests the reporting and public awareness responsibilities concerning the compliance of social media platforms in the Ethiopian Broadcasting Authority—a body empowered by law to regulate broadcasting services. The Ethiopian Human Rights Commission (EHRC), Ethiopia’s National Human Rights Institution (NHRI), also has responsibilities around public awareness. But it is the Council of Ministers that is responsible for implementing laws in Ethiopia that may give further guidance on the responsibilities of social media platforms and other private actors.

In Nigeria, the legislative proposal, the Protection from Internet Falsehoods, Manipulation and Other Related Matters bill, is yet to become law. The bill seeks to regulate disinformation and coordinated inauthentic behaviour online. It is similar to Singapore's law, which the current United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has criticised for the threats it poses to online expression and online rights in general.

Major criticisms of these laws include their opacity and the threats they pose to online expression. For example, the Ethiopian law defines hate speech broadly and does not include the contextual factors that must be considered in categorising online speech as hateful. With respect to the Nigerian bill, there are no clear oversight, accountability or transparency systems in place to check the government's unlimited powers to decide what constitutes disinformation.

The indirect regulatory laws are those used by governments, through their telecommunications regulatory agencies, to compel Internet Service Providers (ISPs) to block social media platforms. This type of regulation requires ISPs to block social media platforms based on public emergencies or national interests. What constitutes these emergencies or interests is vague, and in many instances they amount to voices or platforms critical of government policies.

In January 2021, the Ugandan government ordered ISPs to block Facebook, Twitter, WhatsApp, Signal and Viber. The order was issued through the communications regulator.
The order came a day after Facebook’s announcement that it would close pro-government accounts sharing disinformation.

In June 2021, the Nigerian government ordered ISPs to block access to Twitter, stating that the latter’s activities constituted threats to Nigeria’s corporate existence. However, there have been contrary views that the order was the result of both remote and immediate causes: the remote cause was the role Twitter played in connecting and rallying publics during the #EndSARS protests against police brutality, while the immediate cause was attributed to Twitter’s deletion of President Muhammadu Buhari’s tweet, which referred to the country’s civil war, contained veiled threats of violence, and violated Twitter’s abuse policies.

In May 2021, Ethiopia had just lifted a block on social media platforms in six locations in the country. Routine shutdowns like these have become common for African governments, often occurring during elections or major political developments.

On a closer look, the cross-cutting challenge posed by both forms of regulation is the lack of accountability and transparency, especially on the part of governments, in how these provisions are enforced. Social media platforms are also complicit, as there is little or no information on the nature of the pressure they face from these government actors.

Alongside the mainstream debates on how to govern social media platforms, it is time to also consider wider forms of regulation, especially how they manifest outside Western systems and the threats such regulation poses to online expression.

One solution that has been suggested, but also severely criticised, is the application of international human rights standards to social media regulation. This standard has been argued to be the most preferable because of its customary application across contexts. However, its biggest strength also seems to be its biggest weakness—how does this standard apply in local contexts, given the complexity of governing online speech and the myriad of actors involved?

In order to work towards effective solutions, we will need to re-imagine and re-purpose the traditional governance roles of not only governments and social media platforms, but also ISPs, civil society, and NHRIs. For example, the unchecked powers of most governments to determine what constitutes online harms must be revisited, to ensure that there are judicial reviews and human rights impact assessments (HRIAs) of proposed government social media bans.

ISPs must also be encouraged to jump into the fray, choose human rights, and not roll over each time governments make such problematic demands to block social media platforms. For example, they should begin to join other actors like civil society and academia in lobbying for laws and policies that make judicial reviews and HRIAs requirements before entertaining government requests to block platforms or even content.

The application of international human rights standards to social media regulation is not where the work stops, but where it begins. For a start, the proximate actors involved in social media regulation (governments, social media platforms, private actors, local and international civil society bodies, treaty-making bodies like the United Nations and the African Union, and NHRIs) must come up with a typology of the harms, as well as of the actors actively involved in such regulation.
In order to ensure that this typology addresses the challenges posed by these kinds of regulation, the responsibilities of such actors must be anchored in international human rights standards, but in such a way that these actors actively communicate and collaborate.

Tomiwa Ilori is currently a Doctoral Researcher at the Centre for Human Rights, Faculty of Law, University of Pretoria. He also works as a Researcher for the Expression, Information and Digital Rights Unit of the Centre.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
Tune In To Our Live Stream Of The 300th Techdirt Podcast Episode!
As we recently announced, we're celebrating 300 episodes of the Techdirt Podcast with a special live-streamed episode today, an hour from now, at 1pm PT/4pm ET. Original co-hosts Dennis Yang and Hersh Reddy are returning to join Mike for this discussion, and we're also (barring technical issues) allowing our Patreon backers to call in live with questions!

Watch the live stream on YouTube »

If you're not yet a backer but would like to call in, there's still time! Just back us at any level on Patreon and you'll gain access to a Patron-only post there, which contains the link to watch via our podcast recording platform and use the call-in feature.

We're excited to celebrate this milestone with our listeners and supporters, and look forward to seeing you all there!
Google, NBC Bring Dumb Cable TV Blackout Feuds To Streaming
For years cable TV has been plagued by retransmission feuds and carriage disputes that routinely end with users losing access to TV programming they pay for. Basically, broadcasters demand a rate hike in new content negotiations, the cable TV provider balks, and then each side blames the other for failing to strike a new agreement on time like reasonable adults. That repeatedly results in content being blacked out for months, without consumers ever getting a refund. After a few months, the two sides strike a new confidential deal, your bill goes up, and nobody much cares how any of it impacts the end user. Rinse, wash, repeat.

And while the shift to streaming TV has improved a lot about cable TV in general, these annoying feuds have remained. The latest case in point: Comcast NBC Universal is demanding more money from Google for the 14+ channels currently carried on the company's YouTube TV live streaming platform. Google appears to be balking, resulting in NBC running a bunch of annoying banners on its channels warning about a looming blackout, and directing people to this website blaming Google for not wanting to pay more money for the same content.

In a blog post, Google notes that negotiations are ongoing, but suggests that Comcast isn't being reasonable:
Daily Deal: The 2021 Google Software Engineering Manager Bundle
The 2021 Google Software Engineering Manager Bundle has 12 courses to help you learn software development. You'll learn about Data Science, Python, C#, Java, and more. Two courses will help you prepare for the CISA and CISM certification exams. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Facebook's Latest Scandals: The Banality Of Hubris; The Messiness Of Humanity
Over the last few weeks, the WSJ has run a series of posts generally called "The Facebook Files," which have exposed a variety of internal documents from Facebook that are somewhat embarrassing. I do think some of the reporting is overblown -- and, in rather typical fashion for big news publications reporting on Facebook, presents everything in the worst possible light. For example, the report on how internal research showed that Instagram made teen girls feel bad about themselves downplays that the data actually shows a significantly higher percentage of teens indicated that Instagram made them feel better:

But, of course, the WSJ's headline presents it very differently:

None of this is to say that this is okay, or that Facebook shouldn't be trying to figure out ways to minimize how often its sites make users feel worse about themselves. But the reporting decisions here do raise some questions.

Another one of the articles highlights how Facebook has different rules for different users with regards to content moderation. And, again, on a first pass this sounds really damning:
CIA, NSA Block Ads Network-Wide To Protect Agencies. Ron Wyden Says Rest Of Gov't Should Do The Same.
Not everyone uses an ad blocker. But most people do. And no matter how much online publications claim ad blocking is the same thing as stealing, it really isn't. If they're bent out of shape about it, it's because they assault users with ads, burying content behind a wall of uncurated virtual salesmen. "If it bleeds, it leads," the old saying goes, but these days what bleeds is readers' processing power and data allotments.

Far too many online publications consider processing the check on the ad buy to be the end of their responsibility. But ad servers get hijacked. Ad companies get purchased by ad pushers with more malleable morals. Everyone collects reams of data on every site visitor. The end user seems to be the last concern for ad brokers and the people who sell to them, so it's no surprise more people are deploying ad blockers, seeing as readers of even supposedly reputable sites have been hit with malware, spyware, and auto-playing video when just trying to access some content.

Ads can be dangerous. They can compromise systems and hijack browsers. The general public definitely knows this. Enjoy this shade thrown at ad saturation and website design overcompensation:
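For readers curious what "network-wide" blocking means in practice: rather than an extension installed in each browser, a resolver on the network answers DNS queries for known ad and tracker domains with an unroutable address, so no device behind it ever fetches the ad in the first place. Below is a minimal, hypothetical sketch of that sinkhole idea; the domain names and function are illustrative, not anything from the agencies' actual setup.

    # A minimal sketch of DNS sinkholing, assuming an illustrative blocklist.
    # Real deployments (e.g., Pi-hole-style resolvers) work on the same idea.
    BLOCKLIST = {
        "ads.example.com",
        "tracker.example.net",
        "telemetry.example.org",
    }

    def resolve(domain, upstream_lookup):
        """Answer blocked domains with 0.0.0.0; pass everything else upstream."""
        if domain.lower().rstrip(".") in BLOCKLIST:
            return "0.0.0.0"  # sinkholed: the request dies on the local network
        return upstream_lookup(domain)

    print(resolve("ads.example.com", lambda d: "203.0.113.5"))  # -> 0.0.0.0

    # The same idea, expressed as a lone hosts-file line:
    #   0.0.0.0 ads.example.com

The appeal for an agency (or any network operator) is that the protection covers every device at once, with no per-machine configuration.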
Misquoting Einstein Is Fast And Stupid, But Not Accurate
I was writing something a while ago, and had reason to quote the famous aphorism "Computers are incredibly fast, accurate, and stupid; humans are incredibly slow, inaccurate, and smart." I'll bet you've heard a variation on that quote before, and probably have seen a meme or two with it. It's usually attributed to Einstein.

(wait, who's Tom Asacker?)

But when you're writing a research paper, and you need to add the source citation into Zotero for bibliographic and reference management, you need the actual publication title and date, as well as the author. So I went looking.

About 25 links and a Wiki-hole later, I stumbled over this article by Ben Shoemate, a web architect and developer who'd come across the same problem I had -- in 2008. Ben had sought this quote as well, thinking that with tens of thousands of search results pointing to Einstein as the quote's author, there must be a source somewhere. Even NASA's showcase at the Supercomputing 2006 conference attributed the quote to Einstein, only for later fact-checking to come up empty. No one could find out who said it. The closest anyone had gotten was Ben finding a screenshot of a single page (page 691) of an article that seemed to contain it.

Well, I was sitting in England, in a small room in Oxford to be precise, in lockdown, a short walk from the most extensive library on earth since the Library of Alexandria first smelled smoke. I had plenty of better things to do, but a bee in my bonnet nonetheless. I started trying to track down the source article Ben had mentioned; it was something from the Instrument Society of America. However, while I was a short walk from the Bodleian, the lockdown meant no one could go inside. Oddly, that turned out to be a boon for this little quest. Because most university libraries in the world were cooperating with each other to an extraordinary extent, I was able to talk the librarian at the Bod into calling whichever university would have the article I needed -- the one surrounding page 691.

It turned into a combination of Telephone and Who's On First.
Content Moderation Case Studies: Coca Cola Realizes Custom Bottle Labels Involve Moderation Issues (2021)
Summary: Content moderation questions can come from all sorts of unexpected places — including custom soda bottle labels. Over the years, Coca Cola has experimented with a variety of promotional efforts built around customized cans and bottles, and not without controversy. Back in 2013, as part of its "Share a Coke" campaign, the company offered bottles with common first names on the labels, which angered some who felt left out. In Israel, for example, people noticed that Arabic names were left off the list, although Coca Cola's Swedish operation said that this decision was made after the local Muslim community asked not to have their names included.

That controversy was only the preamble to a bigger one in the summer of 2021, when Coca Cola began its latest version of the "Share a Coke" effort — this time allowing anyone to create a completely custom label up to 36 characters long. Opening up custom labels immediately raised content moderation questions. Some people quickly noticed that certain terms and phrases were surprisingly blocked (such as "Black Lives Matter") while others surprisingly were not (like "Nazis").

As CNN reporter Alexis Benveniste noted, it was easy to get offensive terms through the blocks (often with a few tweaks), and there were some eye-opening contrasts:
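The "few tweaks" detail is the classic failure mode of exact-match blocklists, and it's easy to see why. Here is a minimal sketch, assuming a hypothetical filter (this is not Coca Cola's actual code), of how trivial variations slip past a literal phrase check:

    # Hypothetical exact-match label filter; the blocked phrase is illustrative,
    # echoing the article's report that "Black Lives Matter" was on the list.
    BLOCKED_PHRASES = {"black lives matter"}

    def label_allowed(text):
        """Reject a label only if it exactly matches a blocked phrase."""
        return " ".join(text.lower().split()) not in BLOCKED_PHRASES

    print(label_allowed("Black Lives Matter"))   # False: exact match is caught
    print(label_allowed("Black Lives Mattter"))  # True: one extra letter slips by
    print(label_allowed("BlackLivesMatter"))     # True: removing spaces works too

Hardening a filter against misspellings, spacing tricks, and look-alike characters quickly turns a 36-character text box into a full content moderation problem, which is the case study's point.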
Clearview Suffers Brief Bout Of Better Judgment, Drops Subpoena Demanding Activists' Communications With Journalists
Just a few days ago, Clearview -- the company that scrapes the web to build a facial recognition database it sells to law enforcement, government agencies around the world, and a number of private parties -- decided to make itself even less likable. It decided to subpoena the transparency advocacy group Open The Government, demanding copies of all FOIA requests it had made seeking information about Clearview. It also, more disturbingly, demanded copies of OTG's communications with journalists, clearly indicating it felt First Amendment protections were something it should enjoy, but that shouldn't be extended to its critics.

It really wasn't a step Clearview needed to take… for several reasons. First of all, Clearview's reputation is pure dog shit at the moment. It's unlikely to improve unless the company pulls the plug on its product and disbands. A move like this only earns it more (deserved) disdain and hatred. What it's not going to do -- even if successful -- is deter future criticism of the business and its scraped-together facial recognition product.

Second, OTG was not a party to the lawsuit. Clearview has no right to demand these documents from a non-party, especially communications between it and journalists… journalists who are also not a party to the lawsuit Clearview is facing.

Third, if Clearview wanted copies of OTG's FOIA requests, all it had to do was visit OTG's MuckRock page and download all of its publicly accessible requests and responses.

There's some good news, though. Shortly after having shot itself in the face, Clearview had second thoughts about the self-inflicted wound it had just sustained. Here's Alexandra Levine for Politico:
The Vital Role Intermediary Protections Play for Infrastructure Providers
More than ever, the Internet powers much of our daily life. From staying in touch with friends and family to our work, healthcare, banking, and education, we rely on it and take for granted that it will always be there.

But the way the Internet was built, and how it functions, was never a foregone conclusion. An obscure statute—Section 230, a law enacted more than twenty-five years ago—is core to the Internet as we know it today. It's also frequently misunderstood. In recent years, many critics of Big Tech from across the political spectrum have pointed to Section 230 as the enabling force for a litany of harmful online content and abuses, and some contend that its abolition would immediately lead to a better Internet.

In reality, Section 230 provides a wide range of non-Big Tech actors, including Internet intermediaries, limited immunity that allows them to operate without worrying about liability stemming from content created by others. This legal protection catalyzes and supports the growth of the amazing, vast array of innovative companies that make the Internet what it is today.

There has been ample discussion already of the fundamentals of Section 230, some of it right here in the pages of Techdirt, so it would not be useful for me to go through it all once more. Instead, I think it is essential first to clearly identify and describe the key elements of what we collectively call "the Internet," explaining where and how infrastructure companies fit in, before quickly touching on what Section 230 is and debunking three pernicious and persistent myths about it. I will close by giving you six real examples of activities that are actually protected by Section 230, demonstrating why this law is so vital.

The Internet and its infrastructure

The Internet as we know it can be broken down into three sectors: the transmission sector, the infrastructure sector, and the content sector.
Microsoft CEO Politely Confirms Trump TikTok Fracas Was Dumb, Performative Nonsense
Last year we noted how the calls to ban TikTok didn't make a whole lot of sense. For one thing, a flood of researchers have shown that TikTok does all the same things as many other foreign and domestic adtech-linked services we seem intent on doing... absolutely nothing about.

Secondly, the majority of the most vocal pearl-clutchers over the app (Josh Hawley, etc.) haven't cared a whit about things like consumer privacy or internet security, highlighting how the yearlong TikTok freakout was more about performative politics than policy. The wireless industry SS7 flaw? US cellular location data scandals? The rampant lack of any privacy or security standards in the internet of things? The need for election security funding? Most of the folks who spent last year hyperventilating about TikTok haven't made so much as a peep on these other subjects. Either you actually care about consumer privacy and internet security or you don't, and a huge swath of those hyperventilating about TikTok have been utterly absent from the broader conversation. In fact, many of them have done everything in their power to scuttle any effort to establish even modest privacy guidelines for the internet era, and have fought every effort to improve and properly fund election security. Again, for many it's more about politics than serious, adult tech policy.

After Trump Inc. proposed banning TikTok, you'll recall, the administration came up with another dumb idea: selling ByteDance-owned TikTok to Trump allies over at Oracle and Walmart. It was just glorified cronyism, though for whatever reason much of the press and policy world seriously and meaningfully analyzed the move as if it were anything else. It wasn't, and it quickly fell apart like the dumb house of cards it was.

At one point Microsoft was tossed around as a potential suitor for TikTok as well. And in conversations this week with Kara Swisher, Microsoft CEO Satya Nadella confirmed the whole TikTok tapdance last year was every bit as stupid as we assumed it was. He's diplomatic about it, but Nadella notes how Trump's public posturing about TikTok wasn't backed by, well, anything:
Daily Deal: Spreeder Speed eReader VIP
Spreeder is an eReader that uses RSVP (rapid serial visual presentation) technology to let you speed read any digital content by reducing eye movement, on your iPhone, iPad, Android, Mac, and PC. Easily read at 3 or more times your normal speed, and save valuable time. Rather than simply giving you the software activities and leaving you on your own (as older programs do), world-leading experts guide you every step of the way. It's like having the world's best speed reading instructors and technology right in the room with you. It's on sale for $39. Use the code VIP40 for an additional 40% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
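For the curious, the mechanic behind RSVP is simple enough to sketch in a few lines: words are flashed one at a time at a fixed point, so the eyes never have to travel across the page. A toy illustration (not Spreeder's actual code) at 750 words per minute, roughly three times a typical 250 wpm reading pace:

    # Toy RSVP loop: each word replaces the last at the same screen position.
    import time

    def rsvp(text, wpm=750):
        delay = 60.0 / wpm  # seconds each word stays on screen (80 ms at 750 wpm)
        for word in text.split():
            print("\r" + word.center(20), end="", flush=True)
            time.sleep(delay)
        print()

    rsvp("Reading speeds up when the words come to your eyes instead")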
Court To Sheriff: Sending An Officer To Tell A Teen To Delete Instagram Posts Is So Very Obviously A Rights Violation
Wisconsin is apparently America's Karen.
The SHOP SAFE Act Is A Terrible Bill That Will Eliminate Online Marketplaces
We've already posted Mike's post about the problems with the SHOP SAFE Act that is getting marked up today, as well as Cathy's lamenting the lack of Congressional concern for what they're damaging, but Prof. Eric Goldman wrote such a thorough and complete breakdown of the problems with the bill that we decided it was worth posting too.

[Note: this blog post covers Rep. Nadler's manager's amendment for the SHOP SAFE Act, which I think will be the basis of a committee markup hearing today. If Congress were well-functioning, draft bills going into markup would be circulated a reasonable time before the hearing, so that we can properly analyze them on a non-rush basis, and clearly marked as the discussion version so that we're not confused about which version is actually the current text.]

The SHOP SAFE Act seeks to curb harmful counterfeit items sold through online marketplaces. That's a laudable goal that I expect everyone supports. However, this bill is itself a giant counterfeit. It claims to focus on "counterfeits" that could harm consumer "health and safety," but those are both lies designed to make the bill seem narrower and more balanced than it actually is.

Instead of protecting consumers, this bill gives trademark owners absolute control over online marketplaces by overturning Tiffany v. eBay. It creates a new statutory species of contributory trademark liability that applies to online marketplaces (defined more broadly than you think) selling third-party items that bear counterfeit marks and implicate "health and safety" (defined more broadly than you think), unless the online marketplace operator does the impossible and successfully navigates over a dozen onerous and expensive compliance obligations.

Because the bill makes it impossible for online marketplaces to avoid contributory trademark liability, it will drive most or all online marketplaces out of the industry. (Another possibility is that Amazon will be the only player able to comply with the law, in which case the law entrenches an insurmountable competitive moat around Amazon's marketplace.) If you want online marketplaces gone, you might view this as a good outcome. For the rest of us, the SHOP SAFE Act will reduce our marketplace choices, and increase our costs, during a pandemic shutdown when online commerce has become even more crucial. In other words, the law will produce outcomes that are the direct opposite of what we want from Congress.

In addition to destroying online marketplaces, this bill provides the template for how rightsowners want to reform the DMCA online safe harbor to make it functionally impossible to qualify for as well. In this respect, the SHOP SAFE Act portends how Congress will accelerate the end of the Web 2.0 era of user-generated content.

[The rest of this post is 4k+ words explaining what the bill does and why it sucks. You might stop reading here if you don't want the gory/nerdy details.]

Who's Covered by the Bill

The bill defines an "electronic commerce platform" as "any electronically accessed platform that includes publicly interactive features that allow for arranging the sale or purchase of goods, or that enables a person other than an operator of the platform to sell or offer to sell physical goods to consumers located in the United States."

Clearly, the second part of that definition targets Amazon and other major marketplaces, such as eBay, Walmart Marketplace, and Etsy.
I presume it also includes print-on-demand vendors that enable users to upload images, such as CafePress, Zazzle, and Redbubble (unless those vendors are considered retailers, not online marketplaces).

The first part of the definition covers services with "publicly interactive features that allow for arranging the sale or purchase of goods." This is a bizarre way to describe any online marketplace, and it covers something other than enabling third-party sellers (that's the second part of the definition), so what services does this describe? Read literally, all advertising "allow[s] for arranging the sale or purchase of goods," so this law potentially obligates every ad-supported publisher to undertake the content moderation obligations the bill imposes on online marketplaces. That doesn't make sense, because the bill uses the undefined term "listing" 11 times, and display advertising isn't normally considered to be a listing. Still, this wording is unusual and broad — and you better believe trademark owners like its breadth. If the bill wasn't meant to regulate all ads, the bill drafters should make that clear.

Like most Internet regulations nowadays, the bill distinguishes entities based on size. See my article with Jess Miers on how legislatures should do that properly. The bill applies to services that have "sales on the platform in the previous calendar year of not less than $500,000." Some problems with this distinction:
The Rule Of Fences, And Why Congress Needs To Temper Its Appetite To Undermine Internet Service Provider Liability Protection
As Congress takes up yet another ill-considered bill to deliberately create more risk of liability for Internet services, it is worth remembering something President Kennedy once said:
Genshin Impact Developer Goes With Extremely Fan-Friendly Fan-Art For Commercial Sale Policy
The manner in which content producers generally, and video game publishers specifically, handle art and content created by their biggest fans varies wildly. There are the Nintendos of the world, where strict control over all things IP is favored over allowing fans to do much of anything with its properties. Other gaming companies at least allow fans to do some things with their properties, such as making let's-play videos and that sort of thing. Still others, like Square, have managed to let fans do some large and amazing projects with their IP.

And then there is Chinese gaming studio miHoYo, maker of the hit title Genshin Impact, which doesn't just allow fans to make their own art and merchandise... but also flat-out tells them that they can go sell it, too.
Our Crowdfund For Our Paper Exploring NFTs Will Be Ending Soon
Support our crowdfunded paper exploring the NFT phenomenon »

Last week we announced that we wanted to write a paper exploring the NFT phenomenon, and specifically what it means for the economics of scarce and infinitely available goods. To run this crowdfund, we're testing out a cool platform called Mirror that lets us mix crowdfunding and NFTs as part of the process (similarly, we're now experimenting with NFTs with our Plagiarism by Techdirt collection).

We were overwhelmed by the support for the paper, which surpassed what we expected. The "podium" feature -- which gave special NFTs to our three biggest backers -- has closed and the winners have been declared, but the rest of the crowdfund will remain open until this Thursday evening. We also offered up a special "Protocols, Not Platforms" NFT for the first 15 people who backed us at 1 ETH or above. So far, ten of those have been claimed, but five remain.

If anyone is interested in supporting this paper and our work exploring scarcity and abundance, please check it out.
Yet Another Move To Funnel Money To Big Copyright Companies, Not Struggling Creators
When modern copyright came into existence in 1710, it gave a monopoly to authors for just 14 years, with the option to extend it for another 14. Today, in most parts of the world, the copyright term is the life of the creator plus 70 years. That's typically over a hundred years: an author who publishes at 30 and dies at 80 locks the work up for 120 years. The main rationale for this copyright ratchet – always increasing the duration of the monopoly, never reducing it – is that creators deserve to receive more benefit from their work. Of course, when copyright extends beyond their death, that argument is pretty ridiculous, since they don't receive any benefit personally.

But the real scandal is not so much that creators' grandchildren gain these windfalls – arguably something that grandpa and grandma might approve of. It's that most of the benefit of copyright goes to the companies that creative people need to work with – the publishers, recording companies, film studios, etc.

One of the cleverest moves by the copyright industry was to claim to speak for the very people it exploits most brutally. This allows its lobbyists to paint a refusal to extend copyright, or to make its enforcement harsher, as spitting in the face of struggling artists. It's a hard argument to counter, unless you know the facts: few creators can make a living from copyright income alone. Meanwhile, copyright companies prosper mightily: some publishers enjoy 40% profit margins thanks to the creativity of others.

By claiming to represent artists, copyright companies can also justify setting up costly new organisations that will supposedly channel more money to creators. In fact, as later blog posts will reveal, collecting societies have a record of spending the money they receive on fat salaries and outrageous perks for the people who run them. In the end, very little goes to the artists they are supposed to serve.

EurActiv has a report about an interesting new copyright organisation:
Techdirt Podcast Episode 299: The Misinformation About Disinformation
Disinformation continues to be a major topic of discussion across many fields, but a lot of what people believe about the subject is... questionable at best. One of the more thoughtful writers on the subject is Joe Bernstein from BuzzFeed News, whose recent cover story in Harper's brings a very different and valuable perspective to the debate. This week, he joins us on the podcast to discuss the glut of misconceptions and misinformation about disinformation.

Additionally, as we recently announced, we'll be celebrating our upcoming 300th episode of the podcast with a live stream featuring the return of original co-hosts Dennis Yang and Hersh Reddy, including (hopefully, barring technical issues) the ability for viewers who back our Patreon to call in live and ask questions. The stream will happen on Thursday, September 30th at 1pm PT/4pm ET — stay tuned for more details on how you can watch, and be sure to back our Patreon if you want a chance to call in!

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Should Information Flows Be Controlled By The Internet Plumbers?
Content moderation is a can of worms. For Internet infrastructure intermediaries, it's a can of worms that they are particularly poorly positioned to tackle. And yet Internet infrastructure elements are increasingly being called on to moderate content—content they may have very little insight into as it passes through their systems.

The vast majority of all content moderation happens on the "top" layer of the Internet—such as social media and websites, the places online that are most visible to an average user. Platforms and applications moderate the content that gets posted on their services every day. If a post violates a platform's terms of service, the post is usually blocked or taken down. If a user continues to post content that violates a platform's terms, the user's account is often suspended. These types of content moderation practices are increasingly understood by average Internet users.

Less often discussed or understood are the services provided by actors in the Internet ecosystem that both support and sit beneath the upper content layers of the Internet. Many of these companies host content, supply cloud services, register domain names, provide web security, and offer many more of what could be described as the plumbing services of the Internet. But instead of water and sewage, the Internet deals in digital information. In theory, these "infrastructure intermediaries" could moderate content, but for reasons of convention, legitimacy, and practicality they don't usually do it on purpose.

However, some notable recent exceptions may be setting precedent. Amazon Web Services removed Wikileaks from its system in 2010. Cloudflare kicked off the Daily Stormer. An Italian court ordered Cloudflare to remove a copyright-infringing site. Amazon suspended hosting for Parler.

What does all this mean? Infrastructure may have the means to perform "content moderation," but it is critical to consider the effects of this trend to prevent harming the Internet's underlying architecture.

In principle, Internet service providers, registries, cloud providers and other infrastructure intermediaries should be agnostic to the content that passes over their systems. Their business models have nothing to do with whether one is sending text, audio or video. Instead, they are meant to act as neutral intermediaries, providing a reliable service. In a sense, they operate the plumbing system that delivers the water. While we might engage a plumber to inspect and repair our pumps, do we feel comfortable relying on the plumber to check the quality of the water every minute of every day? Should the plumber be able to shut off water access indefinitely with no oversight?

Despite this, big companies have made decisions to moderate content that are clearly out of their scope as infrastructure intermediaries. That raises the question: why? Were these actions taken to uphold some sort of moral authority, or primarily on the business grounds of public perception? How comfortable are we with these types of companies "regulating" content in the absence of—or even at the behest of—governmental regulation?

If these companies add content moderation to their responsibilities, it takes away time and resources they could dedicate to security, reliability, and new features, some of which may even help address the reasons for wanting to moderate content in the first place.
And while large companies may have the means, moderation adds a role outside their original purview or mission, one that would be costly or unattainable for most startups and smaller companies.

As pressure mounts from public opinion, regulators, and courts, we should recognize what is happening and properly understand where problems can best be addressed, and which problems we don't know enough about to warrant messing with the plumbing of the Internet just yet. Moreover, we should be wary of any regulation that turns to infrastructure intermediaries explicitly to moderate content. Asking an infrastructure intermediary to moderate content would be like asking the waiter to cook the meal, the pilot to repair the plane, or the police officer to serve as the judge. Even if it were possible, we must ask whether it is truly an acceptable approach.

The Internet is often described as a layered architecture because it is composed of different types of infrastructure and computing entities. Expecting each of them to moderate content indiscriminately would be problematic. Who would they be accountable to?

A core idea often proposed is that content moderation should occur at the highest available layer, closest to the user. Some even argue that content moderation below this, in the realm of infrastructure, is more problematic because these companies cannot easily moderate a single content item. Infrastructure needs to work at scale, and moderating a single piece of content may mean effectively turning off a water main to fix a dripping faucet. That is, infrastructure companies often have to paint with a broader brush by removing an entire user's or entity's access to their service. These broad strokes of moderation are often deep and wide in their effect, and critics argue they go too far. Losing access to a system is clearly more final than having a single item removed from a system.

Georgia Evans summarized the problem well, saying "the deeper into the stack a company is situated, the less precise and more extreme their actions against harmful content are." For this reason, Corinne Cath refers to them as reluctant sheriffs and political gatekeepers. These are important complexities that must be woven into any understanding of deep-layer moderation by Internet infrastructure companies and policymakers.

The tech community and policymakers must ensure that no policy proposals unintentionally expand the plumber's legal role to include quality assurance and access determination. In the realm of the Internet, certain actors have certain functions, and things work in a modular, interoperable way by design. The beauty of the Internet is that no one company or entity must "do it all" to achieve a better Internet. But we must also ensure that new demands for additional functionality—e.g., moderation—are situated at the right layer and target the party with the expertise and role most likely to do a careful job.

Policymakers must consider the unintended impacts of content moderation proposals on infrastructure intermediaries. Legislating without the due diligence to understand the impact on the unique role of these intermediaries could be detrimental to the success of the Internet, and to the growing portion of the global economy that relies on Internet infrastructure for daily life and work. Conducting impact assessments prior to regulation is one way to mitigate the risks.
The Internet Society created the Internet Impact Assessment Toolkit to help policymakers and communities assess the implications of change—whether those are policy interventions or new technologies. Policy changes that impact the different layers of the Internet are inevitable. But we must all ensure that these policies are well crafted and properly scoped to keep the Internet working and successful for everyone.

Austin Ruckstuhl is a Project & Policy Advisor at the Internet Society, where he works on Internet impact assessments, defending encryption, and supporting Community Networks as access solutions.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we'll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.
Area Free Market Proponent Sues Facebook For Defaming Him By Moderating His Personal Marketplace Of Climate Change Ideas
Being consistent is hard. Just ask John Stossel, libertarian news commentator and self-proclaimed supporter of free markets and deregulation.

Here's John touting the power of free markets to route around perceived "censorship" by platforms engaging in moderation:
Daily Deal: The Complete NFT And Cryptocurrency Masterclass Bundle
The Complete NFT And Cryptocurrency Masterclass Bundle has 6 courses to help you learn all you need to know to create your own NFTs and trade cryptocurrency. You'll gain a strong understanding of the NFT world and how NFTs work. You'll also learn some of the most popular methods for earning passive income from cryptocurrency. It's on sale for $30. Use the code VIP40 for an additional 40% off.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
This Thursday, Watch The Techdirt Podcast Live - And Become A Patreon Backer To Call In With Questions!
As you may know, we’re fast approaching our 300th episode of the podcast, and to celebrate we’re bringing back the original co-hosts, Dennis Yang and Hersh Reddy, to join Mike Masnick for a special live-streamed episode this Thursday, September 30th at 1pm PT / 4pm ET.Stay tuned on Thursday morning when we’ll be sharing a link to the YouTube live stream here on the blog and on Twitter. But, for our backers on Patreon, we’re also testing out a new feature that will allow you to call in live and talk to the hosts! If you're already a backer, you can find the link to join via our recording and call-in platform on Patreon in your message inbox and in a backers-only post on our page. If not, now's the time to become a patron and get access to this and other special bonuses! This is the first time we’ve experimented with this feature, so we’re anticipating the possibility of technical issues that prevent it from working — but if all goes well, we’re excited to field your questions and celebrate 300 episodes of The Techdirt Podcast!
Rep. Jerry Nadler Pushing New Bill That Will Destroy Online Commerce; Make Sure Only Amazon Can Afford The Liability
'Tis the season for terrible, horrible, no good bills designed to destroy the open internet. First up, we've got Rep. Jerry Nadler, a close friend of the always anti-internet lobbying force that is the legacy copyright industries. Earlier this year he introduced the SHOP SAFE Act, which is due for a markup tomorrow and has an unfortunately high likelihood of passing out of committee. The principle behind the Act (which Nadler has now updated with a manager's amendment) is that "something must be done" about people buying counterfeit goods online.

Not addressed, at all, is whether counterfeit goods online are actually a real problem. I know that industry folks always insist that counterfeiting is a scourge that costs billions, but actual research on this shows something entirely different. A GAO report from years back showed that most of the stats regarding counterfeiting are completely exaggerated, and multiple studies have shown that -- far from "tricking" people -- most people who buy counterfeits know exactly what they're doing, and that for many buyers, buying a counterfeit is an aspirational purchase. That is, they know they're not buying the real thing, but they're buying the counterfeit because that's what they can afford -- and if they can afford the real thing at a later date, they will buy it. But nearly all of the public commentary on counterfeiting assumes that the public is clueless and being "tricked" into buying "dangerous" counterfeits.

The second bad premise behind the SHOP SAFE Act is that the "real problem" is Section 230 (because everyone wants to assume that Section 230 can be blamed for anything bad online). So the core approach of the SHOP SAFE Act is to add liability to websites that allow people to sell stuff online. However, as EFF notes in its write-up about the problems with this bill, if you try to sell something via Craigslist or even just via Gmail, the bill would effectively make those companies liable for your sale.
Research Shows Apple's New Do Not Track App Button Is Privacy Theater
While Apple may be attempting to make being marginally competent at privacy a marketing advantage in recent years, that hasn't always gone particularly smoothly. Case in point: the company's new "ask app not to track" button, included in iOS 14.5, is supposed to provide iOS users with some protection from apps that get a little too aggressive in hoovering up usage, location, and other data. In short, the button functions as a more obvious opt-out mechanism that's supposed to let you avoid the tangled web of privacy abuses that is the adtech behavioral ad ecosystem.

But of course it's not working out all that well in practice, at least so far. A new study by the Washington Post and software maker Lockdown indicates that many app makers are just... ignoring the request entirely. In reality, Apple's function doesn't really do all that much, simply blocking app makers from accessing one bit of data: your phone's ID for Advertisers, or IDFA. Most apps have continued to track a wide swath of other usage and location data, and the overall impact on user privacy has proven to be negligible:
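To see why cutting off a single identifier accomplishes so little, consider a minimal sketch of device fingerprinting, the kind of fallback the study's findings point toward. This is a hypothetical illustration, not any app's actual code: when the IDFA comes back as all zeros (the value iOS returns once a user opts out), a tracker can still hash other ambient signals into a reasonably stable ID.

    # Hypothetical fingerprinting fallback; the signal names are illustrative.
    import hashlib

    ZEROED_IDFA = "00000000-0000-0000-0000-000000000000"  # returned after opt-out

    def device_id(idfa, ip, model, timezone, locale):
        if idfa == ZEROED_IDFA:
            # The "protected" case: derive a stable ID from signals that
            # remain readable regardless of the tracking permission.
            raw = "|".join([ip, model, timezone, locale])
        else:
            raw = idfa
        return hashlib.sha256(raw.encode()).hexdigest()[:16]

    # The same device, after tapping "ask app not to track":
    print(device_id(ZEROED_IDFA, "198.51.100.7", "iPhone12,1", "PST", "en_US"))
    # The output stays stable across apps and sessions for this device/network.

Which is the study's point: the button removes one input to tracking, not the tracking itself.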
Following Nationwide Police Brutality Protests, DOJ Steps Up To Issue Incremental Updates To Its Chokehold/No-Knock Warrant Policies
The Department of Justice is the nominal leader of US law enforcement, even if it only has direct control of federal officers. That being said, it would have been nice to see the DOJ take the lead on law enforcement issues, rather than gently coast into the police reform driveway late in the proverbial night to add itself to the bottom of the list of reform efforts springing up all over the nation in response to, you guessed it, violence committed by police officers.

Chokeholds have been controversial forever, but even more so in recent years, as police officers across the nation have killed people they were only supposed to be arresting, using techniques most police departments claim (often after the fact) to have banned for years. The DOJ has never banned chokeholds previously, and it's apparently not going to start now.

The new guidance [PDF] doesn't seem like much of an improvement over the old guidance, which was released more than 17 years ago. The old one said the DOJ had a "long-standing policy" limiting the use of deadly force to situations where officers have a "reasonable belief" the arrestee "poses an imminent danger of death or serious physical injury to the officer or to another person." This is the same standard that governs almost all use of force by officers all over the nation, and it really hasn't stopped them from deploying deadly force unreasonably in situations that could have benefited from de-escalation and restraint.

The revamped guidance doesn't change much, if anything, about the threat calculus officers must perform before deciding to choke someone to death.
Marvel Hit Once Again By Estate For Some Spider-Man, Doctor Strange Copyright Terminations
It's no secret that we haven't been huge fans of the termination rights that exist in current copyright law. Not because we don't want original artists to be able to profit from their own work, of course. Rather, the problem is that copyright terms are already simply too long, which far too often makes termination not about artists profiting from their own work, but about their families doing so. Add to that the more salient issue that these termination rights tend to be mostly useful for creating massive messes and disputes between parties over the validity of termination requests, and the fact is that this stuff gets really icky really fast.

But the current reality is that termination rights exist in the law, so there is no reason creators shouldn't use that part of the law. You may recall that a decade ago Marvel was hit by a series of termination requests from Jack Kirby's estate, covering copyrights on all kinds of superhero stories and characters. Kirby's estate lost in court every step of the way up to the Supreme Court, with Marvel arguing that all of Kirby's work was work for hire, but Marvel and the estate reached a settlement before SCOTUS could take up the case. How termination requests should be ruled upon for work created before the Copyright Act of 1976 came into force is still an open question.

But perhaps we have another shot at getting clarity on this and, what do you know, it concerns Marvel yet again. The estate of another creator has petitioned for termination of some specific copyrights around Spider-Man and Doctor Strange.
The Future Of Streaming TV Looks Increasingly Like Cable, But Free
There's been little doubt that the streaming TV revolution has been a decidedly good thing. Competition from streaming has resulted in more options, for less money, with greater programming flexibility than ever before. Streaming customer satisfaction is consistently higher than traditional cable TV as a result, and lumbering giants that fought evolution for years (at times denying that cord cutting even existed) have been forced to actually try a little harder if they want to retain TV subscribers.

Of course, the more things change, the more they stay the same, and a lot of the problems that plagued the traditional TV experience have made their way to streaming. For example, since broadcasters (which were primarily responsible for the unsustainable cost of traditional cable TV) must have their pound of flesh to satiate investor demands for quarterly returns, price hikes for live TV streaming services have been arriving fast and furiously. And the more the industry attempts to innovate, the more it finds itself retreading familiar territory.

Case in point: to lure more users to its platforms and streaming hardware, Google is in talks with multiple companies to offer users free streaming TV channels, complete with ads: