Of all the trademark insanity we cover here, there are still little nuggets of niche gold among the truly insane trademark disputes. There are plenty of these categories, but one of my personal favorites is when real-life brands get their knickers twisted over totally unrelated items in fiction. If you cannot picture what I'm talking about, see the lawsuit brought by a software company that makes something called Clean Slate against Warner Bros. because... The Dark Knight Rises had a piece of software in it referred to as "clean slate." Which brings us, as most stories about insanity do, to Florida. Epic Games recently released a new map for its hit game Fortnite, entitled Coral Castle. The map includes motifs of water and structures made from coral. CCI, based out of Florida, holds trademarks for a real-life landmark called Coral Castle. There, too, you can catch real-life motifs of water mixed with structures made to look like coral. It is not, however, a video game setting. It is real life. And, yet, CCI has decided to sue Epic Games over the name of its map.
The New Jersey Supreme Court has made the Fifth Amendment discussion surrounding compelled production of passwords/passcodes more interesting. And by interesting, I mean frustrating. (h/t Orin Kerr) The issue is far from settled and the nation's top court hasn't felt like settling it yet. Precedent continues to accumulate, but it's contradictory and tends to hinge on each court's interpretation of the "foregone conclusion" concept. If the only conclusion that needs to be reached by investigators is that the suspect owns the device and knows the password, it often results in a ruling that says compelled decryption doesn't violate the Fifth Amendment, even if it forces the suspect to produce evidence that could be used against them. Less charitable readings of this concept recognize that "admitting" to ownership of a device is admitting to ownership of everything in it, and view the demand for passcodes as violating Fifth Amendment protections against self-incrimination. The stronger the link between the suspect and the phone, the less Fifth Amendment there is to go around. This decision [PDF] deals with a crooked cop. Sheriff's officer Robert Andrews apparently tipped off a drug dealer who was being investigated. The dealer then tipped off law enforcement about Andrews' assistance with avoiding police surveillance -- something that involved Officer Andrews telling the drug suspect to ditch phones he knew were being tapped and giving him information about vehicles being used by undercover officers. Two iPhones were seized from Andrews, who refused to unlock them for investigators. Investigators claimed they had no other option but to force Andrews to unlock them. According to the decision, there was no workaround available at that time (at some point in late 2015 or early 2016).
There's an excellent piece over at RealClearPolitics arguing that COVID-19 killed the techlash. It makes a fairly compelling argument, coming at it from multiple angles. First, there's the question of how real the "techlash" ever was. It's long appeared to be more of a media- and politician-driven narrative than a real anger coming from people who make use of technology every day:
As anti-police brutality protests have spread across the country in the wake of yet another killing of an unarmed Black man by a white police officer, so has surveillance. Another set of documents found in the "Blue Leaks" stash shows a California-based "fusion center" spreading information about First Amendment-protected activities to hundreds of local law enforcement agencies. Pulling in information from all over -- including apparent keyword searches of social media accounts -- the Northern California Regional Intelligence Center (NCRIC) distributed info on protests and protesters to officers across the state.
Back in 2014 when Facebook bought Oculus, there were the usual pre-merger promises that nothing would really change, or that Facebook wouldn't erode everything folks liked about the independent, Kickstarter-funded product. Oculus founder Palmer Luckey, who has since moved on to selling border surveillance tech to the Trump administration, made oodles of promises to that effect before taking his money and running toward the sunset. Among those promises was that users would never be forced to use a Facebook login account just to use their VR headsets and games, and that the company wouldn't track their behavior for advertising. Like every major merger, those promises didn't mean much. This week, Facebook and Oculus announced that users will soon be forced to use a Facebook account if they want to be able to keep using Oculus hardware, so the company can track its users for advertising purposes. The official Oculus announcement tries to pretend that this is some kind of gift to the end user, instead of just an obvious way for Facebook to expand its behavioral advertising empire:
The Build a Strategy Game Development Bundle has 10 courses to help you learn how to build your own game with the Unity Real-Time Development Platform. You'll learn strategy game fundamentals and mechanics, camera control, resource gathering, unit spawning mechanics, 3D isometric city-building, and more. Other courses cover the Godot Game Engine, Photon, Azure, and more. It's on sale for $40. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We noted last week that Judge Lewis Kaplan (like so many other judges who have copyright troll Richard Liebowitz in their courts) was fed up with Liebowitz's unwillingness to follow fairly straightforward orders, including that he produce the retainer agreement with his clients, as well as present evidence that the client knew of and approved the specific lawsuits at hand. Judge Kaplan did this in at least two (and possibly more?) cases. In the case we mentioned last week -- the Chosen Figure LLC v. Smiley Miley case -- despite already receiving a benchslap from the judge for not providing the retainer agreement, Liebowitz has filed some random emails between his own staff and... his client's girlfriend? That does include an email from his client saying he doesn't check email much, so his girlfriend should be on email chains instead, though it's not clear that this will be enough to satisfy the judge's request for authorization for "this case specifically," but we'll see. However, much more interesting is that, for what appears to be the first time, Liebowitz has revealed his retainer agreement with clients. And, man, do his clients get a raw deal. Liebowitz gets 50% of any proceeds after costs, which come out of any settlement received. In other words, more than half (potentially a lot more than half) of the money from any settlement goes to Liebowitz. That would mean that Richard Liebowitz has a larger financial stake in the outcome of these cases than his own clients. Also, in typical bad-lawyering fashion, Liebowitz tells his clients there's a possibility that they might recover some fees from the other side, but leaves out that his own clients may be on the hook for the other side's legal fees. And this is not theoretical, as Liebowitz's track record includes costing his clients money in legal fees. Yet his retainer agreement seems to suggest the only reason his clients should think about legal fees is in how they might get them from the other side:
I've mentioned a few times that I don't think the TikTok ban is coherent policy. One, the majority of the politicians pearl-clutching over the teen dancing app have been utterly absent from other privacy and security debates (say, U.S. network security flaws or the abuse of location data). In fact, many of them have actively undermined efforts to shore up U.S. privacy and security, whether we're talking about the outright refusal to fund election security improvements, or repeated opposition to even the most basic of privacy laws for the modern era. Let's be clear: a huge swath of these folks are simply engaged in performative, xenophobic politics and couldn't care less about U.S. privacy and security. Two, banning TikTok doesn't actually accomplish much of anything. It doesn't really thwart Chinese intelligence, which could just as easily buy this data from an absolute ocean of barely regulated international adtech middlemen, obtain it from any one of a million hacked datasets available on the dark net, or steal it from the, you know, millions upon millions of "smart" and IoT devices we attach to our home and business networks with no security and reckless abandon. In the full context of the U.S., where privacy and security standards are hot garbage, the idea that banning a Chinese teen dancing app does all that much is just silly. That said, I remain surprised by the big names in tech policy who continue to believe the Trump administration's sloppy and bizarre TikTok ban accomplishes much of anything. Case in point: Columbia law professor Tim Wu, whose pioneering work on net neutrality and open platforms I greatly admire, penned a new piece for the New York Times arguing that a "ban on TikTok is overdue." Effectively, Wu argues that because China routinely bans U.S. services via its great firewall, turnabout is fair play:
The South Wales Police has been deploying a pretty awful facial recognition program for a few years now. Back in 2018, documents obtained by Wired showed its test deployment at multiple events attended by thousands was mostly a mistake. The system did ring up 173 hits, but it also delivered nearly 2,300 false positives. In other words, it was wrong about 92% of the time. Civil liberties activist Ed Bridges sued the South Wales Police after his image was captured by its camera system, which is capable of capturing up to 50 faces per second. Bridges lost at the lower level. His case was rejected by the UK High Court, which ruled capturing 50 faces per second was "necessary and proportionate" to achieve its law enforcement ends. Fortunately, Bridges has prevailed at the next level. The Court of Appeal has ruled in favor of Bridges and against the SWP's mini-panopticon. The decision [PDF] opens with a discussion of the automated facial recognition technology (AFR) used by the SWP, which runs on software developed by NEC called "NeoFace Watch." Watchlists are compiled and faces that pass SWP's many cameras are captured and compared to this list. On the list are criminal suspects, those wanted on warrants (or who have escaped from custody), missing persons, persons of interest for "intelligence purposes," vulnerable persons, and whatever this thing is: "individuals whose presence at a particular event causes particular concern." Here's how it works:
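(For the technically curious, here's a minimal sketch of how a watchlist matcher of this general kind works, and how the error figures above shake out. Everything in it is an illustrative assumption -- the embedding model, the cosine-similarity threshold, and the function names are all ours, since NEC hasn't published NeoFace Watch's internals.)

```python
import numpy as np

# Illustrative only: a generic embedding-and-threshold watchlist matcher.
# NeoFace Watch's actual model and thresholds are proprietary.

def embed(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model; returns a unit vector."""
    v = face_image.flatten().astype(float)
    return v / np.linalg.norm(v)

def match_against_watchlist(face_image, watchlist_embeddings, threshold=0.85):
    """Alert if the captured face's cosine similarity to any watchlist
    entry exceeds the (assumed) threshold; otherwise return None."""
    probe = embed(face_image)
    scores = watchlist_embeddings @ probe  # cosine similarity for unit vectors
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= threshold else None

# The error rate quoted above is a false discovery rate: of all the
# alerts the system raised, what fraction were wrong?
true_hits = 173
false_positives = 2300  # "nearly 2,300" per the reporting
fdr = false_positives / (false_positives + true_hits)
print(f"{fdr:.0%} of alerts were false")  # ~93% with these rounded counts
```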
You will recall the brief clusterfuck that occurred earlier this month in Georgia's Paulding County. The school district there, which opened back up for in-person classes while making wearing a mask completely optional, also decided to suspend two students who took and posted pictures of crowded hallways filled with maskless students. While the district dressed these suspensions up as consequences for using a smartphone on school grounds, the school's administration gave the game away by informing all students that they would be disciplined for any criticism posted on social media in general. That, as we pointed out, is a blatant First Amendment violation. Once the blow-back really got going, the school district rescinded the suspensions. In the days following, students and teachers at the school began falling ill and testing positive for COVID-19. It got bad enough that the school decided to shut down. With so much media attention, it was a matter of who was going to get the FOIA requests in first for documents on what led to the suspensions. Vice put a request in. However, because this district can't seem to stop punching itself in the gut, the school district is attempting to duck the FOIA requests entirely. Not through redactions. It just isn't going to give up any internal documents at all, even as it acknowledges it has documents in hand.
Summary: As is the case on any site where consumer products are sold, there's always the chance review scores will be artificially inflated by bogus reviews from fake accounts, often described as "sock puppets." Legitimate reviews are organic, prompted by a buyer's experience with a product. "Sock puppets," on the other hand, are bogus accounts created for the purpose of inflating the number of positive (or -- in the case of a competitor -- negative) reviews for a seller's product. Often, they're created by the seller themself. Sometimes these faux reviews are purchased from third parties. "Sock puppet" activity isn't limited to product reviews. The same behavior has been detected in comment threads and on social media platforms. In 2012 -- apparently in response to "sock puppet" activity, some of it linked to a prominent author -- Amazon engaged in a mass deletion of suspected bogus activity. Unfortunately, this moderation effort also removed hundreds of legitimate book reviews written by authors and book readers. In response to authors' complaints that their legitimate reviews had been removed (along with apparently legitimate reviews of their own books), Amazon pointed to its review guidelines, claiming they forbade authors from reviewing other authors' books.
In today's insanity, Facebook's top lobbyist in India, Ankhi Das, has filed a criminal complaint against journalist Awesh Tiwari. Tiwari put up a post on Facebook over the weekend criticizing Das, citing a giant Wall Street Journal article focused on how Facebook's rules against hate speech have run into challenges regarding India's ruling BJP party. Basically, the article said that Facebook was not enforcing its hate speech rules when BJP leaders violated them (not unlike similar stories regarding Facebook relaxing the rules for Trump supporters in the US). Das is named in the original article, which claims she pushed for Facebook not to enforce its rules against BJP leaders because doing so could hurt Facebook's overall interests in India. Tiwari called out Das' role in his Facebook post, and it appears Das took offense to that:
On July 2nd, 1999, Ricky Byrdsong was out for a jog near his home in Skokie, Illinois, with two of his young children, Sabrina and Ricky Jr. The family outing would end in tragedy. His children watched helplessly as their father was gunned down. He was the victim of a neo-Nazi on a murderous rampage targeting Jewish, Asian and Black communities. Ten other people were left wounded. Won-Joon Yoon, a 26-year-old graduate student at Indiana University, would also be killed. When you distill someone's life down to their final minutes, it does a disservice to their humanity and how they lived. Though I didn't know Won-Joon Yoon, I met Coach Byrdsong — one of few Black men's head basketball coaches in the NCAA — through my father, who is also part of this small fraternity. As head coaches in Illinois in the late 90s, their names were inevitably linked to each other. They occasionally played one another. Beyond his passion for basketball, Coach Byrdsong's love of God and his commitment to community and family shone bright. Coach Byrdsong was the first Black head basketball coach at Northwestern University in Evanston, Illinois. His appointment was a big deal: Northwestern is a private university in an NCAA "power conference," with a Black undergraduate population of less than 6%. I visited Northwestern's arena when my dad was an assistant coach at the University of Illinois. At 11 years old, I remember being surrounded by belligerent college students making ape noises. When I hear jangling keys at sporting events, I'm transported back to the visceral feeling of being surrounded by thousands of (white) college students, alumni and locals, shaking their car keys while smugly chanting "that's alright, that's ok, you will work for me one day." Their ditty, directed towards a basketball court overwhelmingly composed of Black, working-class student athletes, seemed to say: you don't belong here, and you never will — a sentiment that still saturates the campus. This is the world that neo-Nazi Benjamin Smith came from. Smith was raised in Wilmette, Illinois, one of the richest and whitest suburbs in the country, less than five miles from where he killed Coach Byrdsong. The digital boundaries that exist online, much like the neighborhood ones, carve up communities, often by ethnicity, class, and subculture. In these nooks a shared story and ideology is formed that reinforces an "us against the world" mentality. It's debatable whether that's intrinsically bad — but in this filter bubble, it is hard to see our own reflection accurately, let alone others. This leaves both our digital and physical bodies vulnerable. Matthew Hale, Smith's mentor and founder of the World Church of the Creator, was an early adopter of Internet technology. He was part of a 90s subculture of white nationalists that flocked to the web, stitching a digital hood anonymizing those who walk and work amongst us. Hale's organization linked to white power music and computer games, and developed a website, "Creativity for Kids," with downloadable white separatist coloring books. They used closed chat rooms and internet forums to rile up thirst for a race war. They understood the importance of e-commerce as a vehicle for trafficking hate, and they experimented with email bombing and infiltrating chat rooms. Beyond being tech savvy, Hale was also a lawyer, who in 1999 was being defended by the ACLU. The Illinois Bar Association had denied Hale's law license based on his incitement of racial hatred and violence against ethnic and religious groups.
The ACLU has had a long run of defending white nationalists, including Charlottesville "Unite the Right" organizer Jason Kessler. In 1978, it defended the organizers of a Nazi march in Skokie, the same community where Coach Byrdsong was assassinated. At the time, 1 in every 6 Jewish residents there was either a survivor of the Holocaust or directly related to one. Hale's law license was rejected based on three main points:
What appears to be a very combative divorce between two very combative people in Marin County, California has reached the point of criminal charges. Not justifiable criminal charges, but criminal charges all the same. Melissanne Velyvis has been very publicly documenting everything about her divorce proceedings and her ex-husband's (Dr. John Velyvis) alleged domestic abuse. In an apparent attempt to stop her from discussing her personal life (which necessarily involves discussing his personal life), John approached a judge and secured a restraining order forbidding his ex-wife from publishing "disparaging comments." Here's Judge Beverly Wood making her feelings clear about Melissanne's divorce-focused blogging:
The 2020 Ultimate Web Developer and Design Bootcamp Bundle has 11 courses designed to help you kick-start your career as a web developer and designer. You'll learn about Java, HTML, CSS3, APIs, and more. By the end of the courses, you will be able to confidently design, code, validate, and launch websites online. The bundle is on sale for $40. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Late Monday, it came out that Oracle is one of the potential American acquirers of TikTok from the Chinese company ByteDance, after President Trump ordered ByteDance to sell TikTok out of spite. Microsoft has been the most talked-about potential purchaser, though there were also rumors of a potential bid by Twitter. The Oracle rumor strikes many as particularly bizarre, for good reason. Oracle is pretty much an enterprise-only focused company. However, if it has one strength, it is in buying up companies and integrating them into its cashflow generation machine. I'm still not sure I see the synergies here, but perhaps Larry Ellison is finally realizing that Oracle is the opposite of cool in Silicon Valley. However, the thing that struck me most about all of this is that Oracle is one of the main companies behind the plot to undermine Section 230. Oracle has been a funder of a weird group of anti-Section 230 activists, and has been involved in multiple anti-Section 230 crusades. And, as we've pointed out in the past, it seems pretty clear why: Oracle has always been incredibly (to a petty level) jealous of Google and Facebook's success -- and seems to see Section 230 reform as a weapon it can use to attack those companies without harming itself, since Oracle doesn't really host much user-generated content. Of course, that would change if Oracle actually ended up buying TikTok. Suddenly, it would have a massive platform full of user-generated content, and it would be fascinating to watch whether Oracle changes its tune on 230 (or calls off its attack dogs who keep misrepresenting 230). That would certainly be interesting. Of course, the general rumor is that Oracle is really just doing this to drive up the price for Microsoft (to whom Oracle is losing the fight for "cloud" supremacy), but President Trump has given his blessing for an Oracle/TikTok deal, which isn't too surprising, given that Oracle's top execs have been sucking up to Trump and praising him since he was elected.
A coalition of cities has filed a desperate, and likely doomed, lawsuit (pdf) against streaming providers like Netflix and Disney. In it, the cities proclaim that they are somehow owed 5 percent of gross annual revenue. Why? Apparently they believe that because these streaming services travel over telecom networks that utilize the public right of way, they're somehow owed a cut:
Earlier this year we noted that the Australian government was setting up a you're-too-successful tax on Google and Facebook which it planned to hand over to media organizations. We should perhaps call it the "Welfare for Rupert Murdoch" tax, because that's what it is. Murdoch, of course, owns a huge share of media operations in Australia and has been demanding handouts from Google for years (showing that his claimed belief in the free market was always hogwash). In response, Google has now released an open letter to Australians pointing out that this plan to tax Google to funnel money to Murdoch will have massive unintended consequences. In particular, Google argues, under the law, Google would be required to give an unfair advantage to big media companies:
Readers here will be sick of this, but we're going to have to keep beating it into the general populace's head: trademark law is about preventing confusion as to the source of a good or service. The idea is to keep buyers from being fooled into buying stuff from one company or person while thinking they were buying it from another. That's basically it. It's a lesson still to be learned, and one which a federal judge has imparted to famed jewelry maker Tiffany & Co. The backstory here is that back in 2013, on Valentine's Day of all days, Tiffany & Co. sued Costco over the latter's advertisement of "Tiffany"-style rings.
Washington DC responded to widespread protests following the killing of George Floyd with a set of police reforms that tried to address some systemic problems in the district's police department, starting with its lack of transparency and accountability. The reform bill -- passed two weeks after George Floyd's killing -- placed new limits on deadly force deployment, banned the Metropolitan PD from acquiring military equipment through the Defense Department's 1033 program, and mandated release of body-camera footage within 72 hours of any shooting by police officers. The names of the officers involved are covered by the same mandate, ensuring it won't take a lawsuit to get the PD to disclose info about officers deploying deadly force. But there's a lawsuit already in the mix -- one that hopes to keep the public separated from camera footage and officers' names. Unsurprisingly, it's been filed by a longtime opponent of police accountability.
This week we've got another cross-post, with the latest episode of The Neoliberal Podcast from the Progressive Policy Institute. Host Jeremiah Johnson invited Mike, along with PPI's Alec Stapp, to discuss everything about encryption: the concept itself, the attempts at laws and regulations, and more.Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
ICE continues to not care what anyone thinks of it. Its tactics over the past few years have turned it into one of the federal government's most infamous monsters, thanks to its separation of families, caging of children, unfettered surveillance of undocumented immigrants, its fake university sting created to punish students trying to remain in the country legally, its sudden rescinding of COVID-related distance learning guidelines solely for the purpose of punishing students trying to remain in the country legally… well, you get the picture.Perhaps it's fitting ICE is buying tech from a company that appears unconcerned that most of the public hates it. Clearview -- the facial recognition software that matches uploaded facial images with billions of images scraped from the open web -- is one of the latest additions to ICE's surveillance tech arsenal.
The storm has passed and the charges have been dropped. But the fact that someone who tweeted about police behavior, and, worse, people who retweeted that tweet, were ever charged over it is an outrage, and to make sure that it never happens again, we need to talk about it. Because it stands as a cautionary tale about why First Amendment protections are so important – and, as we'll explain here, why Section 230 is as well. To recap, protester Kevin Alfaro became upset by a police officer's behavior at a recent Black Lives Matter protest in Nutley, NJ. The officer had obscured his identifying information, so Alfaro tweeted a photo asking if anyone could identify the officer "to hold him accountable." Several people, including Georgana Sziszak, retweeted that tweet. The next thing they knew, Alfaro, Sziszak, and several other retweeters found themselves on the receiving end of a felony summons pressing charges of "cyber harassment" of the police officer. As we've already pointed out, the charges were as pointless as they were spurious, because the charges themselves did the very unmasking of the officer's identity that they maintained was somehow a crime to ask for. Over at the Volokh Conspiracy, Eugene Volokh took further issue with the prosecution, and in particular its application of the New Jersey cyber harassment statute against the tweet. Particularly in light of an earlier case, State v. Carroll (N.J. Super. Ct. App. Div. 2018), he took a dim view:
The Calmind Mental Fitness App helps you improve your quality of life by focusing on what's important and getting rid of distractions. It provides soothing and sensory stories to reduce stress and help you fall asleep faster, as well as ASMR triggers and calming tones. Calmind is on sale for $70. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Last week there was quite a lot of attention paid to Apple kicking Fortnite out of the iOS app store for violating the rules by avoiding Apple's in-app payment setup (out of which Apple takes 30%). Epic, which had been hinting at this for a while, introduced a direct payment offering that effectively avoided the 30% charge that Apple (and Google) require from developers. There have been arguments over the last decade or so since Apple implemented its policy requiring subscription revenue to go through Apple's system -- but this is probably the biggest fight yet. Epic was clearly expecting Apple to do this, because almost immediately after Fortnite was removed from the app store, Epic released a Nineteen Eighty-Fortnite parody ad mocking Apple's infamous 1984 Super Bowl ad. Almost immediately, Epic also sued Apple over the removal in a legal complaint that was clearly prepared well in advance. Represented by some of the top antitrust lawyers in the country, and weighing in at 65 pages, Epic had spent some time preparing for this fight. To drive this point home, the lawsuit itself references 1984 in the opening paragraph, tying into Epic's marketing campaign:
While fifth-generation (5G) wireless will result in faster, more resilient networks (once it's finally deployed at scale years from now), the technology has been over-hyped to an almost comical degree. Yes, faster, lower-latency networks are a good thing, but 5G is not as paradigm-rattling as most wireless carriers and hardware vendors have led many in the press to believe. 5G is more of a useful evolution than a revolution, but it has become the equivalent of magic pixie dust in tech policy circles, wherein if you simply say "it will lead to faster deployment of 5G!" you'll immediately add gravitas to your otherwise underwhelming K Street policy pitch. Here on planet Earth, most consumers couldn't care less about 5G. In most surveys, U.S. consumers -- who pay some of the highest prices in the world for mobile data -- usually say their top priority is lower prices. That's increasingly true during a pandemic and economic crisis, where every dollar counts. Enter Verizon, which, instead of reading the market, has been repeatedly trying to charge $10 extra for 5G despite consumers not seeing the value. Verizon executives had fooled themselves into thinking a "premium" upgrade warranted a premium price tag. But consumers quickly realized the extra money simply wasn't worth it. For one, Verizon's 5G network is barely available (one study found a full 5G signal was available about 0.4% of the time). First-generation 5G devices are also expensive and tend to suffer from crappier battery life. All for admittedly faster speeds most users don't think they need yet. With consumers not really that interested, and no other wireless carriers attempting to charge extra anyway, Verizon has been forced to finally back away from the $10 monthly surcharge after flirting with it since last year:
Eugene Volokh reports an Ohio court has hit a number of defendants in a libel lawsuit with an unconstitutional order forbidding them from posting the name of the man suing them. It's no ordinary man, though. It's a police officer whom several attendees of a Cincinnati city council meeting have identified by name and claim used a racist hand sign while interacting with them.
The disruption caused by COVID-19 has touched most aspects of daily life. Education is obviously no exception, as the heated debates about whether students should return to school demonstrate. But another tricky issue is how school exams should be conducted. Back in May, Techdirt wrote about one approach: online testing, which brings with it its own challenges. Where online testing is not an option, other ways of evaluating students at key points in their educational career need to be found. In the UK, the key test is the GCE Advanced level, or A-level for short, taken in the year when students turn 18. Its grades are crucially important because they form the basis on which most university places are awarded in the UK. Since it was not possible to hold the exams as usual, and online testing was not an option either, the body responsible for regulating exams in England, Ofqual, turned to technology. It came up with an algorithm that could be used to predict a student's grades. The results of this high-tech approach have just been announced in England (other parts of the UK run their exams independently). It has not gone well. Large numbers of students have had their expected grades, as predicted by their teachers, downgraded, sometimes substantially. An analysis from one of the main UK educational associations has found that the downgrading is systematic: "the grades awarded to students this year were lower in all 41 subjects than they were for the average of the previous three years." Even worse, the downgrading turns out to have affected students in poorly performing schools, typically in socially deprived areas, the most, while schools that have historically done well, often in affluent areas, or privately funded, saw their students' grades improve over teachers' predictions. In other words, the algorithm perpetuates inequality, making it harder for brilliant students in poor schools or from deprived backgrounds to go to top universities. A detailed mathematical analysis by Tom SF Haines explains how this fiasco came about:
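(To see mechanically why the results skewed this way, here's a deliberately simplified sketch. The real Ofqual model was more elaborate -- it adjusted for cohorts' prior attainment, and very small classes fell back to teacher predictions, which is part of why small private-school cohorts fared better -- so treat the function below, and all its names and numbers, as illustrative assumptions rather than the actual algorithm.)

```python
from collections import Counter

def standardize_grades(ranked_students, historical_grades):
    """Assign this year's grades by forcing the cohort to match the
    school's historical grade distribution. ranked_students is the
    teacher's rank order, best first; historical_grades is a list of
    grades awarded at this school in past years."""
    n = len(ranked_students)
    hist = Counter(historical_grades)
    total = sum(hist.values())
    order = ['A*', 'A', 'B', 'C', 'D', 'E', 'U']
    grades, i = {}, 0
    for grade in order:
        # Hand out each grade in the same proportion as past years.
        quota = round(hist.get(grade, 0) / total * n) if total else 0
        for student in ranked_students[i:i + quota]:
            grades[student] = grade
        i += quota
    for student in ranked_students[i:]:  # rounding leftovers get the bottom grade
        grades[student] = order[-1]
    return grades

# A strong student at a school that never historically produced an A
# cannot receive one, no matter what their teacher predicted:
print(standardize_grades(['Asha', 'Ben', 'Cal'], ['B', 'C', 'C', 'D']))
# -> {'Asha': 'B', 'Ben': 'C', 'Cal': 'C'}
```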
Multiple experts on Section 230 have pointed out that the NTIA's bizarre petition to the FCC to reinterpret Section 230 of the Communications Decency Act is complete nonsense. Professor Eric Goldman's analysis is quite thorough in ripping the petition to shreds.
Google's on-again, off-again relationship with China is off again. A decade ago, Google threatened to pull out of China because the government demanded a censored search engine. Fast forward to 2018 and it was Google offering to build a censored search engine for the China market. A few months later -- following heavy internal and external criticism -- Google abandoned the project. China is now imposing its will on Hong Kong in violation of the agreement it made when the UK returned control of the region to the Chinese government. Its latest effort to stifle long-running pro-democracy demonstrations took the form of a "national security" law which was ratified by the far-too-obsequious Hong Kong government. The law equates advocating for a more independent Hong Kong with sedition and terrorism, allowing authorities to punish demonstrators and dissidents with life sentences for, apparently, fighting back against a government that agreed it wouldn't impose its will on Hong Kong and its residents. For years, Google has refused to honor data requests from the Chinese government. Following this latest attack on Hong Kong autonomy, it appears Google now feels the region is indistinguishable from China.
Paid advertising content should not be covered by Section 230 of the Communications Decency Act. Online platforms should have the same legal risk for ads they run as print publishers do. This is a reform that I think supporters of Section 230 should support, in order to save it. Before I explain why I support this idea, I want to make sure I'm clear as to what the idea is. I am not proposing that platforms be liable for content they run ads next to -- just for the ads themselves. I am not proposing that the liability lies in the "tools" that they provide that can be used for unlawful purposes; that's a different argument. This is not about liability for providing a printing press, but for specific uses of the printing press -- that is, publication. I also don't suggest that platforms should lose Section 230 entirely if they run ads at all, or run some subset of ads like targeted ads -- this is not a service-wide on/off switch. The liability would just be normal, common-law liability for the content of the ads themselves. And "ads" just means regular old ads, not all content that a platform commercially benefits from. It's fair to wonder whom this applies to. Many of the examples listed below have to do with Facebook selling ads that are displayed on Facebook, or Google placing ads on Google properties, and it's pretty obvious that these companies would be the ones facing increased legal exposure under this proposal. But the internet advertising ecosystem is fiendishly complex, and there are often many intermediaries between the advertiser itself and the proprietor of the site the ad is displayed on. So at the outset, I would say that any and all of them could be potentially liable. If Section 230 doesn't apply to ads, it doesn't apply to supplying ads to others; in fact, these intermediary functions are considered a form of "publishing" under the common law. Which party to sue would be the plaintiff's choice, and there are existing legal doctrines that prevent double recovery, and that allow one losing defendant to bring in, or recover from, other responsible parties. It's important to note, too, that this is not strict or vicarious liability. In any given case, it could be that the advertiser is found liable for defamation or some kind of fraud but the platform isn't, because the elements of the tort are met for one and not the other. Whether a given actor has the "scienter" or knowledge necessary to be liable for some offense has to be determined for each party separately -- you can't simply impute the state of mind of one party onto another, and strict liability torts for speech offenses are, in fact, unconstitutional.

The Origins Of An Idea

I first started thinking about it in the context of monetized content. After a certain dollar threshold is reached with monetized content, there should be liability for that, too, since the idea that YouTube can pay thousands of dollars a month to someone for their content but then have a legal shield for it simply doesn't make sense. The relationship of YouTube to a high-paid YouTuber is more similar to that between Netflix and a show producer than it is between YouTube and your average YouTuber, whose content is unlikely to have been reviewed by a live, human YouTube official.
But monetized content is a marginal issue; very little of it is actionable, and frankly the most detestable internet figures don't seem to depend on it very much. But the same logic runs the other way, to when the content creator is paying a platform for publishing and distribution, instead of the platform paying the content creator. And I think eliminating 230 for ads would solve some real problems, while making some less-workable reform proposals unnecessary. Question zero should be: Why are ads covered by Section 230 to begin with? There are good policy justifications for Section 230 -- it makes it easier for there to be sites with a lot of user posts that don't need extensive vetting, and it gives sites a free hand in moderation. Great. It's hard to see what that has to do with ads, where there is a business relationship. Businesses should generally have some sense of whom they do business with, and it doesn't seem unreasonable for a platform to do quite a bit more screening of ads before it runs them than of tweets or vacation updates from users before it hosts them. In fact, I know that it's not an unreasonable expectation, because the major platforms such as Google and Facebook already subject ads to heightened screening. I know I'm arguing against the status quo, so I have the burden of persuasion. But in a vacuum, the baseline should be that ads don't get a special liability shield, just as in a vacuum, platforms in general don't get a liability shield. The baseline is normal common-law liability, and deviations from this are what have to be justified.

I'm Aware That Much "Harmful" Content Is Not Unlawful

A lot of Section 230 reform ideas either miss the mark or are incompletely theorized, since, of course, much -- maybe even most -- harmful online content is not unlawful. If you sued a platform over it, without 230, you'd still lose; it would just take longer. You could easily counter that the threat of liability would cause platforms to invest more in content moderation overall, and while I do think that this is likely true, it is also likely that such investments could lead to over-moderation that limits free expression by speakers considered even mildly controversial. But with ads, there is a difference. Much speech that would be lawful in the normal case -- say, hate speech -- can be unlawful when it comes to housing and employment advertisements. Advertisements carry more restrictions and regulations in any number of ways. Finally, ads can be tortious in the standard ways as well: they can be fraudulent, defamatory, and so on. This is true of normal posts as well -- but with ads, there's a greater opportunity, and I would argue obligation, to pre-screen them.

Many Advertisements Perpetuate Harm

Scam ads are a problem online. Google recently ran ads for scam fishing licenses, despite being told about the problem. People looking for health care information are being sent to lookalike sites instead of the site for the ACA. Facebook has even run ads for low-quality counterfeits and fake concert tickets. Instead of searching for a locksmith, you might as well set your money on fire and batter down your door. Ads trick seniors out of their savings and into paying for precious metals. Fake customer support lines steal people's information -- and money. Malware is distributed through ads. Troublingly, internet users in need of real help are misdirected to fake "rehab clinics" or pregnancy "crisis centers" through ads. Examples of this kind are endless.
Often, there is no way to track down the original fraudster. Currently, Section 230 allows platforms to escape most legal repercussions for enabling scams of this kind, while allowing the platforms to keep the revenue earned from spreading them. There are many more examples of harm, but the last category I'll talk about is discrimination, specifically housing and employment discrimination. Such ads might be unlawful in terms of what they say, or even to whom they are shown. Putting racially discriminatory text in a job or housing ad can be discriminatory, and choosing to show a facially neutral ad to just certain racial groups could be, as well. (There are tough questions to answer -- surely buying employment ads on sites likely to be read by certain racial groups is not necessarily unlawful -- but, in the shadow of Section 230, there's really no way to know how to answer these questions.) In many cases under current law, there may be a path to liability in the case of racially discriminatory ads, or other harmful ads. Maybe you have a Roommates-style fact pattern where the platform is the co-creator of the unlawful content to begin with. Maybe you have a HomeAway fact pattern where you can attach liability to non-publisher activity that is related to user posts, such as transaction processing. Maybe you can find that providing tools that are prone to abuse is itself a violation of some duty of care, without attributing any responsibility for any particular act of misuse. All true, but each of these approaches only addresses a subset of harms, and frankly they seem to require some mental gymnastics and above-average lawyering. I don't want to dissuade people from taking these approaches, if warranted, but they don't seem like the best policy overall. By contrast, removing a liability shield from a category of content where there is a business relationship and a clear opportunity to review content prior to publication would incentivize platforms to review more vigorously.

A Cleaner Way to Enforce Anti-Discrimination Law and Broadly Police Harm

It's common for good-faith reformers to propose simply exempting civil rights or other areas of law from Section 230, preventing platforms from claiming Section 230 as a defense in any civil rights lawsuit, much as federal criminal law is already exempted. The problem is that there is no end of good things that we'd like platforms to do more of. The EARN IT Act proposes to create more liability for platforms to address real harms, and SESTA/FOSTA likewise exempts certain categories of content. There are problems with this approach in terms of how you define what platforms should do and what content is exempted, and issues of over-moderation in response to fears of liability. This approach threatens to make Section 230 a Swiss cheese statute, where whether it applies to a given post requires a detailed legal analysis -- which has other significant harms and consequences. Another common proposal is to exempt "political" ads from Section 230, or targeted ads in general (or to somehow tackle targeting in some non-230 way). There are just so many line-drawing problems here, making enforcement extremely difficult. How, exactly, do you define "targeted"? How, by looking at an ad, can you tell whether it is targeted, contextual, or just part of some broad display campaign? With political ads, how do you define what counts?
Ads from or about campaigns are only a subset of political ads -- is an ad about climate change "political"? Or an ad from an energy company touting its green record? In the broadest sense, yes, but it's hard to see how you'd legislate around this topic. Under the proposal to exempt ads from Section 230, the primary question to answer is not what the content is addressed to and what harms it may cause, but simply whether it is an ad. Ads are typically labeled as such and quite distinct -- and it may be the case that there need to be stronger ad disclosure requirements and penalties for running ads without disclosure. There may be other issues around boundary-drawing as well -- I perfectly well understand that one of the perceived strengths of Section 230 is its simplicity, relative to complex and limited liability shields like Section 512 of the DMCA. Yet I think they're tractable.

Protection for Small Publishers

I've seen some publishers respond to user complaints about low-quality or even malware-distributing ads running on their sites by pointing out that they don't see or control the ads -- the ads are delivered straight from the ad network to the user, alongside publisher content. (I should say straight away that this still counts as "publishing" an ad. If the user's browser is infected by malware that inserts ads, or if an ISP or some other intermediary inserts the ad into the publisher's content, then no, the publisher is not liable; but if a website embeds code that serves ads from a third party, it is "publishing" that ad in the same sense as a back-page ad in a fancy Conde Nast magazine. Whether that leads to liability just depends on whether the elements of the tort are met, and whether 230 applies, of course.) For major publishers I don't have a lot of sympathy. If their current ad stack lets bad ads slip through, they should use a different one, if they can, or demand changes in how their vendors operate. Right now, the incentives don't align for publishers and ad tech vendors to adopt a more responsible approach. Changing the law would align them. At the same time, it may be true that some small publishers depend on ads delivered by third parties, and not only does the technology not allow them to take more ownership of ad content, they lack the leverage to demand the right tools. Under this proposal, these small publishers would be treated like any other publisher for the most part, though I tend to think that it would be harder to meet the actual elements of an offense with respect to them. That said, I would be on board with some kind of additional stipulation that ad tech vendors are required to defend, and pay out for, any case where publishers below a certain threshold are hauled into court for distributing ads they have no control over but are financially dependent on. Additionally, to the extent that the ad tech marketplace is so concentrated that major vendors are able to shift liability away from themselves to less powerful players, antitrust and other regulatory intervention may be needed to assure that risks are borne by those who can best afford to prevent them.

The Tradeoffs That Accompany This Idea Are Worth It

I am proposing to throw sand in the gears of online commerce and publishing, because I think the tradeoffs in terms of consumer protection and enforcing anti-discrimination laws are worth it. Ad rates might go up, platforms might be less profitable, ads might take longer to place, and self-serve ad platforms as we know them might go away.
At the same time, fewer ads could mean less ad-tracking, and an across-the-board change to the law around ads should not tilt the playing field towards big players any more than it already is. Nor would it likely lead to an overall decline in ad spending, just a shift in how those dollars are spent (to different sites, and to fewer but more expensive ads). This proposal would burden some forms of speech more than others, too, so it's worth considering First Amendment issues. One benefit of this proposal over subject-matter-based proposals is that it is content neutral, applying to a business model. Commercial speech is already subject to greater regulation than other forms of speech, and this is hardly a regulation, just the failure to extend a benefit universally. Though of course this can be a different way of saying the same thing. But, if extending 230 to ads is required if it's extended anywhere, the same logic would seem to require that 230 be extended to print media or possibly even first-party speech. That cannot be the case. And I have to warn people that if proposed reforms to Section 230 are always argued to be unconstitutional, that makes outright repeal of 230 all the more likely, which is not an outcome I'd support. Fans of Section 230 should like this idea because it forestalls changes they no doubt think would be worse. Critics of 230 should like it because it addresses many of the problems they've complained about for years, and has few if any of the drawbacks of content-based proposals. So I think it's a good idea. John Bergmayer is Legal Director at Public Knowledge, specializing in telecommunications, media, internet, and intellectual property issues. He advocates for the public interest before courts and policymakers, and works to make sure that all stakeholders -- including ordinary citizens, artists, and technological innovators -- have a say in shaping emerging digital policies.
Earlier this year, the DOJ Inspector General released a report that -- surprise, surprise -- showed the FBI abusing its FISA privileges. The FBI had placed former Trump campaign advisor Carter Page under surveillance, suspecting (but only momentarily) that he was acting as an agent of a foreign power. (Guess which one.) The report said the first wiretap request might have been valid, but subsequent requests for extensions weren't. The Inspector General said the agency cherry-picked info to keep the wiretap alive, discarding any evidence it had come across that would have ended the surveillance. Even more damning, it found that FBI lawyer Kevin Clinesmith altered an email from another federal agency to hide Carter Page's involvement with that agency from the FISA court. The FISA court demanded the DOJ hand over information on any other cases before it that Clinesmith might have had a hand in. But that wasn't the end of it. Clinesmith was also referred to the DOJ for criminal charges. The criminal charge has arrived. The criminal complaint [PDF] was filed in the DC federal district court. It details the email Clinesmith altered and submitted with the Carter Page surveillance extension request to the FISA court in 2017. The original email -- sent to Clinesmith by an unnamed government agency -- said that Page was an "operational source" for this agency. The "Other Government Agency" (OGA) stated this in the email to Clinesmith:
The Learning Apps Bundle is a hub of over 50 of the best educational apps for kids, making learning fun and entertaining. The apps are aimed at kids of all ages, from toddlers to teens, and cover basics like animal names and sounds, the alphabet, and numbers, up through more complicated topics in math and physics. All the apps are interactive and easy to use. The bundle is on sale for $20. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
A week after issuing his first ridiculous executive order about TikTok, barring any transactions involving the company if it is still owned by ByteDance, President Trump decided he needed to issue a second executive order about TikTok, this one more directly ordering ByteDance to sell it. The authority used in this one is different. The first one used the IEEPA, which is what Trump has used to claim "national security" reasons for imposing tariffs on China without Congressional approval. This time he's using 50 USC 4565, which allows the US Treasury to block certain mergers, acquisitions and takeovers that might impact national security. Except here Trump is using that in reverse. ByteDance bought Musical.ly (and made it TikTok) two years ago. Trump didn't raise a peep at the time. To turn around now, two years later, and pretend that he can order the deal unwound is just silly. Even if you don't trust ByteDance/TikTok, you should be absolutely concerned about this for multiple reasons. First, it's a clear and blatant abuse of power by the President. Allowing any President to just declare a foreign-owned company a problem and try to force it to sell to an American company is going to cause all sorts of long-term problems for the US. What's to stop foreign governments from doing the same to us? China is probably just itching to do something similar in retaliation. Second, to reach back two years and try to unwind a merger at this point based on this flimsy legal theory is just crazy as well. It's clear that this is nothing more than vindictiveness on the part of the President. If there are real security issues with TikTok, then there should be due process. There should be investigations and evidence. Not just a childish, narcissistic President suddenly declaring that an entire company must be sold.
To be very clear: American consumers don't like broadband usage caps. At all. Most Americans realize (either intellectually or on instinct) that monthly broadband usage caps and overage fees are little more than monopolistic cash grabs. They're confusing, frustrating price hikes on captive customers that accomplish absolutely none of their stated benefits. They don't actually help manage congestion, and they aren't about "fairness" -- since if fairness were a goal, you'd have a lot of grandmothers paying $5-$10 a month for broadband because they only check their email twice a day. Enter U.S. cable giant Charter (Spectrum), which is currently in the middle of trying to get the FCC to kill the merger conditions applied as part of its $79 billion 2015 acquisition of Time Warner Cable and Bright House Networks. Those conditions, among other things, required that Charter adhere to net neutrality (despite the fact that the GOP has since killed net neutrality rules) and avoid usage caps and overage fees. Both conditions had seven-year sunset clauses built in, and Charter, eager to begin jacking up U.S. broadband consumer prices ever higher, has been lobbying to have them killed two years early. Charter's lobbying tactics so far have included giving money to groups like the Boys and Girls Club in exchange for gushing support for the elimination of the merger conditions, despite the fact that doing so would harm those groups' constituents with higher prices. Charter's other major play apparently involves trying to tell the FCC that U.S. consumers really like monthly usage caps and annoying fees, restrictions the cable monopoly claims are "popular." From a filing (pdf) spotted by Ars Technica:
This week, our first place winner is Daydream with a comment digging into the details of a Michigan Supreme Court ruling that prevented an attempt to seize someone's house over an $8.41 debt:
Five Years Ago

This week in 2015, Google was in the news twice — first for their inevitable admission that Google+ was a failure, and then for their surprising announcement of the new corporate structure under the parent company Alphabet. Meanwhile, a CIA FOIA dump provided new information about spying on the Senate, including the accidental release of an apology letter the CIA wrote but never sent. We also saw more DMCA shenanigans as Vimeo complied with bogus mass-takedowns over the word "Pixels" and a convicted fraudster sent a bogus takedown to Techdirt over our coverage of previous bogus takedowns.

Ten Years Ago

This week in 2010, RIM managed to work out a deal with the Saudi Arabian government to prevent a BlackBerry ban, raising the question of just what device security would be like under this new agreement. We saw some... questionable journalism choices as the Washington Post peddled anti-Craigslist ideas by citing one of its own anti-Craigslist advertisers, and the Associated Press was strangely not reporting on the judge denying sanctions in its lawsuit against Shepard Fairey. Meanwhile, we took a look at how the FBI was prioritizing copyright issues, Congress introduced yet another iteration of a disastrous fashion copyright bill, Viacom unsurprisingly appealed the YouTube ruling, and, in a major move to protect free speech, the anti-libel-tourism SPEECH Act became law.

Fifteen Years Ago

This week in 2005, AOL was trying to regain some relevance by moving into the wireless space, while Blockbuster gave up on trying to beat Netflix on price by raising its online DVD rental prices to match. The FCC was subtly but significantly downgrading the concept of internet freedoms, one school was refusing to back down on felony charges against students over some harmless hacking, and an Australian ISP was threatening to sue a forum over public information. We also talked some more about the myth of copy protection as a useful idea, and wondered if some of the companies trying to foist it on people thought buyers were complete idiots.
We are living in truly dystopian times. As you may have heard, this week there have been a bunch of stories regarding the somewhat systematic dismantling of US Postal Service operations in what appears to be a coordinated effort by this administration to foil the process of sending and collecting mail-in ballots. But, apparently, rather than ensuring its own ability to handle mail-in ballots for this election, the US Postal Service is trying to... patent blockchain-based voting?

As you almost certainly know, President Trump has been -- without any factual basis at all -- decrying mail-in ballots, despite the fact that they have been proven safe and effective. As we're in the middle of a pandemic -- made significantly worse by this administration's own incompetence -- that spreads mainly when people gather indoors, the need for more mail-in ballots is obvious to anyone who cares about a functioning democracy. Instead, the President has apparently focused on making it impossible. While that seemed like a conspiracy theory to many, he admitted he was holding up funding for exactly that reason:
Summary: After an investigation by BuzzFeed uncovered several accounts trafficking in paid access to "decks" -- Tweetdeck accounts from which buyers could mass-retweet their own tweets to make them go "viral" -- Twitter acted to shut down the abusive accounts.

Most of the accounts were run by teens who leveraged the tools provided by Twitter-owned Tweetdeck to provide mass exposure to tweets for paying customers. Until Twitter acted, users who saw their tweets go viral under other users' names tried to police the problem by naming paid accounts and putting them on blocklists.

Twitter's Rules expressly forbid users from "artificially inflating account interactions." But most accounts were apparently removed under Twitter's anti-spam policy -- one it beefed up after BuzzFeed published its investigation. The biggest change was the removal of the ability to simultaneously retweet tweets from several different accounts, rendering these "decks" built by "Tweetdeckers" mostly useless. Tweetdeckers responded by taking a manual approach to faux virality, sending direct messages requesting mutual retweets of posted content.

Unlike other corrective actions taken by Twitter in response to mass abuse, this cleanup process appears to have resulted in almost no collateral damage. Some users complained their follower counts had dropped, but this was likely the result of near-simultaneous moderation efforts targeting bot accounts.

Decisions to be made by Twitter:
Clearview -- the facial recognition company selling law enforcement agencies (and others) access to billions of photos and personal info scraped from the web -- is facing lawsuits over its business model, which appears to violate some states' data privacy laws. It's also been hit with cease-and-desist requests from a number of companies whose data has been scraped.

What was once a toy for billionaires has become a toy for cops, who are encouraged to test out the software by running searches on friends and family members. Clearview claims it's been instrumental in fighting crime, but evidence of this remains nonexistent.

Now, the company appears to be going on the offensive. Clearview has already argued -- through its legal rep, Tor Ekeland -- that Section 230 of the CDA insulates it against lawsuits over its use of third-party content. It's a novel argument, considering Clearview isn't actually the third party. That would be the original hosts of the content. Clearview is something else, and it's not clear Section 230 applies to these lawsuits, which are about what Clearview does with the content, rather than the content itself.

The New York Times reports Clearview has hired a prominent First Amendment lawyer -- one who has defended the paper in the past -- to make the argument that selling government agencies data scraped from the web is protected speech.
Conservative criticism of social media content moderation is often characterized by misinformation and unfounded allegations. Factually unsupported assertions that federal law requires firms such as Facebook and Twitter to choose whether they are "platforms" or "publishers," and dubious claims that "Big Tech" is engaged in a concerted anti-conservative campaign, are prominent -- but they're not the most interesting feature of the present content moderation debate.

More interesting is the lack of imagination that seems to dominate the discourse. Rather than exploring different content moderation regimes, conservatives have focused on shaping the rules of established companies through regulation and legislation. The ironic result of this narrow thinking could be the entrenchment of market incumbents.

Conservative complaints about Silicon Valley censorship are often based on methodologically poor studies and collections of anecdotes. Although conclusive evidence that Silicon Valley is engaged in an anti-conservative campaign is lacking, many Republican lawmakers have used claims of bias as the basis for legislative proposals that would radically change how the Internet is regulated and governed.

Conservative critics of the most prominent social media companies are correct to note that content moderation at Facebook, Twitter, and YouTube (owned by Google) is centralized, with human content moderators and machine learning tools tasked with implementing a single governing set of content guidelines. This centralized system is far from perfect, and in an environment where Twitter users post half a billion tweets a day and YouTube account holders upload about 500 hours of video a minute, false positives and false negatives should be expected.

In addition, speech intended for specific audiences may be misunderstood by moderators from different backgrounds, but at scale, firms simply lack the time or resources to provide boutique, culturally aware governance.

Centralized content moderation also suffers from a perceived lack of transparency and process, with Silicon Valley behemoths considered by many to be secretive, distant institutions with few incentives to care about an individual case when their empires include millions or billions of users.

Republican responses to allegations of political bias have focused on Section 230 of the Communications Decency Act, the law that shields owners of interactive computer services -- such as social media companies, newspaper comments sections, university and library websites, and others -- from being held liable for the vast majority of content posted by third-party users.

A separate post would be required to dissect every Republican Section 230 proposal, but it is fair to say that most take aim at Section 230 with the intent of reforming social media companies' content moderation rules. Proposals include conditioning Section 230 protections on "politically neutral" content moderation policies.

But while the modern debate on social media has focused on Twitter, Facebook, YouTube, and other household name companies, Republican lawmakers should remember that the centralized content moderation model is not a necessary feature of social media, and that there are other models that offer the solutions "big tech" critics across the political spectrum seek.

Although not household names, there are social media services that implement more permissive content moderation policies. Facebook, Twitter, and YouTube are hardly alone in the social media universe. The Internet is full of social media sites.
Indeed, some of these sites -- such as Gab and Parler -- emerged as centralized alternatives to Twitter, with their creators citing concerns about big tech bias.

There are also social media sites that reject centralization altogether. Mastodon is an example of a social media service that embraces a governance structure very different from those seen in big tech social media. It is open source and allows users to host their own nodes (a brief sketch of what posting to a self-hosted node can look like appears at the end of this post).

Diaspora is another social media service that rejects the centralized governance of Facebook, Twitter, and YouTube. It is a non-profit based on the principles of decentralization, privacy, and the freedom to alter and tweak source code. There are also LBRY and the InterPlanetary File System (IPFS), peer-to-peer decentralized protocols that allow users to share content absent any central governing authority.

Conservatives who want a social media service where they can form their own communities, find like-minded users, and build content moderation rules consistent with their values have plenty of options available. Nonetheless, conservatives concerned about big tech bias seem largely unaware of these options. It has never been easier for conservatives to build their own communities, share ideas, and seek to convince others of their ideology. Sadly, rather than embrace competition and innovation, many conservative activists and lawmakers have turned to government.

The risks are difficult to overstate. Powerful market incumbents may oppose regulation, but once the writing is on the wall, they will take steps to ensure that they, and not smaller competitors, are able to comply with new regulations. The result will be the entrenchment of the companies conservative activists criticize. When conservative lawmakers and activists claim that Section 230 is a big tech subsidy, they are engaged in misleading rhetoric that is precisely the opposite of the truth. If anything, Section 230 should be considered a subsidy for big tech competitors. It ensures that they do not need to hire teams of lawyers, saving them startup costs.

An unintended consequence of Section 230 reforms and legislation motivated by weak claims of anti-conservative bias could be big tech getting bigger, with Facebook, Google, and Twitter continuing to dominate American speech online.

We are still in the early years of online speech, yet activists and lawmakers seem to have forgotten much of its short history. Firms that at one time seemed to dominate online speech, online search, and online entertainment have been displaced in the past. AskJeeves, AOL Instant Messenger, MySpace, and many others have fallen into obscurity or disappeared altogether. Facebook, Twitter, and Google may be dominant today, but their continued success is not an axiom of history.

Conservatives convinced of big tech's anti-conservative bias ought to consider the numerous platforms and competing content moderation models available. The future of online speech does not have to be centralized and dominated by a handful of firms, but continued calls for regulation in the name of content moderation risk further empowering market incumbents.
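As promised above, here is a minimal sketch of what posting to a self-hosted Mastodon node can look like, using Python and the requests library. Everything here is hypothetical except the shape of Mastodon's public REST API: mastodon.example stands in for whatever server you (or your community) run, and the access token is one that server itself issues.

import requests  # third-party HTTP library: pip install requests

INSTANCE = "https://mastodon.example"  # placeholder for a self-hosted node
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"     # issued by your own server, no gatekeeper involved

def post_status(text):
    # Every Mastodon instance exposes the same REST endpoint regardless of
    # who runs it: POST /api/v1/statuses publishes a status ("toot"),
    # authenticated with an OAuth bearer token.
    resp = requests.post(
        INSTANCE + "/api/v1/statuses",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        data={"status": text},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    status = post_status("Posted from a node whose rules we set ourselves.")
    print(status["url"])

The code itself is unremarkable; the point is who it answers to. The moderation rules governing that post are set by whoever operates mastodon.example, not by a single company's trust and safety team -- which is exactly the structural difference the decentralized model offers.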
Matthew Feeney is the director of Cato's Project on Emerging Technologies, where he works on issues concerning the intersection of new technologies and civil liberties. Before coming to Cato, Matthew worked at Reason magazine as assistant editor of Reason.com. He has also worked at The American Conservative, the Liberal Democrats, and the Institute of Economic Affairs. His writing has appeared in The New York Times, The Washington Post, HuffPost, The Hill, the San Francisco Chronicle, the Washington Examiner, City A.M., and others. He also contributed a chapter to libertarianism.org's Visions of Liberty. Matthew received both his B.A. and M.A. in philosophy from the University of Reading.
Back in June, we wrote about how a judge had sided with Twitter in the very first of Rep. Devin Nunes' long series of frivolous SLAPP suits, saying that the company was clearly protected from the lawsuit by Section 230 and that it did not need to reveal the identity of the two satirical Twitter accounts who had mocked Devin Nunes so mercilessly that he decided to ignore his oath to protect the Constitution (which, last I checked, still includes the 1st Amendment) and sued.

Some assumed that this was the end of the lawsuit. It was not. First of all, the claims against the two satirical accounts (one claiming to be Devin Nunes' cow and one claiming to be Devin Nunes' Mom), along with those against political consultant Liz Mair, were unfortunately still alive and kicking. But also, Nunes is still attempting to bring Twitter back into the case. He has filed a proposed amended complaint that his lawyer -- the ever ridiculous Steven Biss -- argues should get around Section 230 and make Twitter a party to the lawsuit. And... just as I originally finished writing this story, Judge John Marshall rejected that attempt. At around the same time, Liz Mair filed her own attempt to get the claims against her dismissed, both in this case and in the second case Nunes filed against her.

Let's start by looking at the proposed amended complaint. As "amended complaints" go, following a judge completely dismantling your legal arguments, this is... not very amended. Indeed, I scrolled through both the original and the amended complaint and they appear to be identical, page for page (if there are any changes, they are so minor as to be cosmetic, and I couldn't see any), right up until the very, very end. While the original complaint had five claims (negligence, defamation per se, insulting words, common law conspiracy, and injunction), the new one has... six. After it includes the identical (as far as I can tell) first five claims, it adds in a sixth: "aiding and abetting." This is Biss's weak-ass attempt to bring Twitter back into the case and get around Section 230:
Restflix is a streaming service designed to help users fall asleep faster and rest better. It offers 20+ personalized channels full of meditative music, bedtime stories, and calming videos designed to gently ease users into a productive night's sleep. Through guided meditation, bedtime stories, and peaceful, serene natural views and sounds, better sleep is achievable. Along with helping users develop better sleep habits, Restflix can also be a great tool for relaxation and mental healing. Three subscriptions are available: one year for $30, two years for $50, or three years for $60.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Lately, so many of our copyright trolling stories have been about Richard Liebowitz or Mathew Higbee. But we shouldn't forget about Malibu Media, which is still out there doing Malibu Media things. The latest, out of a court in Connecticut, is that the infamous copyright troll has had a default judgment request denied. This is exceptionally rare.

Default judgments are what you get when the other side doesn't even bother to show up. They're almost always granted as a matter of course (though collecting on a default judgment is not always so easy). However, in this case, US District Court Judge Jeffrey Meyer isn't buying what Malibu Media is selling. Judge Meyer jumps right in and points out how unfair it is to blame the ISP account holder for actions that may have been taken by someone else:
As we've noted a few times, the Trump administration's repeal of net neutrality did a lot more than just kill net neutrality rules. It effectively neutered the FCC's ability to hold giant broadband providers accountable for much of anything, from attempting to charge customers a rental fee for hardware they own, to the litany of bogus fees ISPs use to falsely inflate their advertised rates. So when a select group of folks try to claim that "killing net neutrality must not have mattered because the internet still works," they're advertising their ignorance.

Another problematic aspect of the FCC's net neutrality repeal was that it also attempted to ban states from protecting consumers. The goal of the telecom sector, if you haven't noticed, is a complete and total oversight vacuum for one of the least competitive, and most disliked, business sectors in America. And it's fairly shocking how far along the industry has gotten in its quest without more people generally pointing out that it's kind of a bad idea to let the Comcasts and AT&Ts of the world run amok sans regulatory oversight or meaningful competition.

Unfortunately for the telecom sector, its quest to block states from filling the consumer protection void hasn't gone that well. The courts so far have generally ruled that the FCC can't abdicate its authority over consumer protection, then turn around and try to dictate what states can or can't do. That's not stopping the Trump administration or telecom giants, which have continued their state-by-state lawsuits against states like California. Last week, the DOJ and ISPs filed amended complaints in California in a bid to scuttle that state's net neutrality rules:
In a little over 15 years, DHS agencies interacted with millions of travelers passing through our nation's airports… and relieved them of over $2 billion in cash. (And that's just agencies like the CBP and ICE. The DEA also lifts cash from airline passengers -- something it loves doing so much it hires TSA agents to look for money, rather than stuff that could threaten transportation security.)

That's just one of several disturbing findings in the Institute for Justice's (IJ) new report [PDF] on the DHS's ability to separate travelers from their money. Utilizing the Treasury Department's forfeiture database, the IJ discovered the DHS is a fan of taking cash and does so more frequently at certain airports. The most popular airport for cash seizures is, by far, Chicago's O'Hare. In 2014, the airport accounted for 34% of all cash seized despite handling only 6% of all air travelers -- nearly six times its share of traffic.

More travelers means more opportunities, which explains some of the increase in seizures over the past decade. But as the IJ points out, seizures are outpacing the bump in travel stats.
Anyone who knows anything about me knows how much I both love and rely on profanity. Love, because profane language is precisely the sort of color the world needs more of. Rely on, because I use certain profane words the way most people use commas. So, when the courts decided that even the most profane words could be used in trademarks, I applauded. Fucks were literally given.

But not every piece of profanity deserves a trademark. And, while I again applaud Boston University's decision to create a profane slogan around COVID-19 safety awareness for its student body, why in the actual fuck did the slogan have to be trademarked?

First, the context:
Richard Liebowitz appears to be in trouble with a judge yet again. Judge Lewis Kaplan has issued quite an order in one of Liebowitz's thousands of cases -- Chosen Figure LLC v. Smiley Miley -- asking for proof that the plaintiff actually knows it's a client of Richard Liebowitz. The judge seems quite aware of Liebowitz's reputation:
The Nutley, New Jersey Police Department fears for the safety of its officers. It fears so much that it tried to bring criminal charges against people who retweeted a tweet asking Twitter users to identify an officer who was policing a protest. Georgana Sziszak is one of the five people charged for interacting with the tweet, as Adi Robertson reports for The Verge.