Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2026-01-13 17:18
Chattanooga Built Its Own Broadband Network. Now It's The Top Ranked 'Work From Home' City In The US
PC Magazine recently unveiled its list of the best cities for working from home. To make the list, the magazine examined affordable housing, the availability of fast gigabit broadband, reasonably priced internet connections in general, and the presence of employers with friendly work-from-home policies. At the top of the list? Chattanooga, Tennessee:
Random Jackass Attempts To Trademark 'Mayor Of Mar-A-Lago' In The Most Hilarious Way
For years now, I have railed on the USPTO for its overly permissive posture when it comes to granting trademarks. The whole thing is far too easy, with far too little concern shown by examiners as to how distinct or useful proposed marks actually are. All of that being said, there are still some hoops you have to jump through to get a trademark. And there are some rules governing how to get through those hoops.

It appears someone needs to give Natale Passaro some lessons in how trademarks work, then. See, Passaro recently filed for a trademark on the term "Mayor of Mar-A-Lago." The proposed classes for the mark are "shirts" and "consulting services". Part of the application requirements, however, is documentation of a "specimen of use." This is basically the USPTO asking the applicant to show evidence of the mark's current or proposed use.
Content Moderation Case Study: Google 'Removes' German Residences From Street View By Request (2010)
Summary: Google's Street View is a powerful mapping tool that allows users to visit places they'll possibly never be able to visit and allows local users to see homes and businesses they're trying to locate.

But Google's Street View hasn't been warmly welcomed everywhere. In Germany -- a country with a long history of pervasive surveillance by government agencies -- Google's mapping project hit a roadblock. In an effort to comply with German privacy laws, Google worked with data protection authorities to ensure all requirements were met before its cars and cameras hit the road.

Restrictions on data collection have resulted in Germany being one of the least-mapped countries in Europe.

After meeting with considerable public opposition to Google's street mapping, Google allowed residents to opt out. This resulted in opted-out locations being blurred in Street View, providing owners with more privacy inside Street View than they enjoyed outside it.

Decisions to be made by Google:
Appeals Court Affirms $1.5 Million Restitution Judgment Against Paul Hansmeier
The long saga of Paul Hansmeier -- one of the Prenda Law Firm partners who turned the already-shady business of copyright infringement lawsuits into a rolling debacle composed of fraud, extortion, and catastrophic failures -- has produced another coda.

Hansmeier got into the copyright enforcement business thinking it would produce a steady stream of easy cash. He and his associates went after people who allegedly downloaded porn, thinking that people would pay anything asked to avoid having their sexual predilections exposed to friends, family, and the public at large.

When that didn't work as well as Hansmeier had hoped, he went further. He and his associates produced their own porn (but didn't star in it, thankfully) and served it up to piracy services in order to produce a steady stream of defendants. (These home productions were never made available for legal viewing. They only existed as lawsuit bait.) It all ended in a criminal indictment and a guilty plea by Hansmeier, who had recently branched out into ADA trolling in his home state of Minnesota.

That brings us to the Eighth Circuit Court of Appeals, which has affirmed everything Hansmeier wishes wasn't happening to him, like his 168-month prison sentence for fraud and money laundering and a $1.5 million restitution order.

The court recounts the sordid details of the long-running scam, including the numerous shell companies created to hide the origin of copyrighted films, as well as the profits (and losses) of the law firm supposedly representing a handful of porn-producing clients. It details the use of "ruse defendants" to avoid courts' limitation of discovery requests after judges started figuring out just how shady Hansmeier and Prenda Law were. What began as sanctions and subpoena denials slowly and steadily turned into disciplinary action from state bars and, finally, a criminal investigation that resulted in guilty pleas by both Hansmeier and his partner, John Steele.

Hansmeier doesn't want to be on the hook for $1.5 million in restitution. But the Appeals Court [PDF] doesn't see anything that warrants a reversal of the lower court's order. As it points out, Hansmeier still comes out ahead, even after being ordered to pay back $1.5 million of his illegal takings.
Why We Filed A Comment With Facebook's Oversight Board
Back when Facebook's Oversight Board was just getting organized, a colleague suggested I represent people before it as part of my legal practice. As a solo lawyer, my entrepreneurial ears perked up at the possibility of future business opportunities. But the rest of me felt extremely uncomfortable with the proposition. I defend free speech, but I am a lawyer and I defend it using law. If Facebook removes you or your content, that is an entirely lawful choice for it to make. It may or may not be a good decision, but there is nothing for law to defend you from. So it didn't seem a good use of my legal training to spend my time taking issue with how a private entity made the moderation decisions it was entirely within its legal rights to make.

It also worried me that people were regarding Facebook's Oversight Board as some sort of lawmaking body, and I was hesitant to use my lawyering skills to somehow validate and perpetuate that myth. No matter how successful the Board turns out to be, it is still limited in its authority and reach, and that's a good thing. What is not good is when people expect that this review system should (a) have the weight of actual law or (b) be the system that gets to evaluate all moderation decisions on the Internet.

Yet here I am, having just written a comment for the Copia Institute in one of its cases. Not because I changed my mind about any of my previous concerns, but because that particular high-profile case seemed like a good opportunity to help reset expectations about the significance of the Oversight Board's decisions.

As people who care about the online ecosystem, we want those decisions to be as good as they can be, because they will have impact, and we want that impact to be as good as it can be. With our comment we therefore tried to provide some guidance on what a good result would look like. But whether the Board gets its decisions right or wrong, it does no good for the public, or even the Board itself, to think its decisions mean more than they do. Nor is it necessary: the Oversight Board already has a valid and even valuable role to play. And it doesn't need to be any more than what it actually is for it to be useful.

It's useful because every platform makes moderation decisions. Many of these decisions are hard to make perfectly, and many are made at incredible scale and speed. Even with the best of intentions it is easy for platforms to make moderation decisions that would have been better decided the other way.

And that is why the basic idea of the Oversight Board is a good one. It's good for it to be able to provide independent review of Facebook's more consequential decisions and recommend how to make them better in the future. Some have alleged that the Board isn't sufficiently independent, but even if this were true, it wouldn't really matter, at least as far as Facebook goes. What is important is that there is any operational way to give Facebook's moderation decisions a second look, especially in a way that can be informed by additional considerations that may not have been included in the original decision. That the Oversight Board is designed to provide such review is an innovation worth cheering.

But all the Oversight Board can do is decide what moderation decision might have been better for Facebook and its user community.
It can't articulate, and it certainly can't decree, a moderation rule that could or should apply at all times on every platform anywhere, including platforms that are much different, with different reaches, different purposes, and different user communities than Facebook has. It would be impossible to come up with a universally applicable rule. And it's also not a power this Board, or any similar board, should ever have.

As we said in our comment, and have explained countless times on these pages, platforms have the right to decide what expression to allow on their systems. We obviously hope that platforms will use this right to make these decisions in a principled way that serves the public interest, and we stand ready to criticize them as vociferously as warranted when they don't. But we will always defend their legal right to make their moderation choices however perfectly or imperfectly they may make them.

What's important to remember in thinking about the Oversight Board is that this is still Facebook making moderation decisions. Not because the Board may or may not be independent from Facebook, but because Facebook's decision to defer to the Board's judgment is itself a moderation decision. It is not Facebook waiving its legal right to make moderation choices but rather Facebook exercising that very right to decide how to make those choices, and this is what it has decided. Deferring to the Board's judgment does not obviate real-world law protecting its choice; it's a choice that real-world law pointedly allows Facebook to make (and, thanks to Section 230, even encourages Facebook to try).

The confusion about the mandate of the Oversight Board seems to stem in part from the way the Board has been empowered and operates. In many ways it bears the hallmarks of a self-contained system of private law, and in and of itself that's fine. Private law is nothing new. Arbitration, for instance, is basically that: a system of private law. Private law can exist alongside regular, public, democratically-generated law just fine, although sometimes there are tensions, because for it to work all the parties need to agree to abide by it instead of public law, and sometimes that consent isn't sufficiently voluntary.

But consent is not an issue here: before the Oversight Board came along, Facebook users had no legal leverage of any kind over Facebook, so this is now a system of private law that Facebook has agreed can give them some. We can and should of course care that this system of private law is a good one, well-balanced and equitable, and thus far we've seen no basis for any significant concern. We instead see a lot of thoughtful people working very hard to try to get it right, and open to being nudged to do better should such nudging be needed. But even if they were getting everything all wrong, in the big picture it doesn't really matter either, because ultimately it is only Facebook's oversight board, inherently limited in its authority and reach to that platform.

The misapprehension that this Board can or should somehow rule over all moderation decisions on the Internet is also not helped by the decision to call it the "Oversight Board," rather than the "Facebook Oversight Board."
Perhaps it could become a model for other platforms to use, and maybe, just maybe, if it really does become a fully spun-off, independent, sustainable, self-contained private law system, it might someday be able to supply review services to other platforms too—provided, of course, that the Board is equipped to address these platforms' own particularities and priorities, which may differ significantly from Facebook's.

But right now it is only a solution for Facebook, and only set up to consider the unique nature of the Facebook platform and what Facebook and its user community want from it. It is far from a one-size-fits-all solution for Internet content moderation generally, and our comment said as much, noting that the relative merit of the moderation decision in question ultimately hinged on what Facebook wanted its platform to be.

Nevertheless, it is absolutely fine for it to be so limited in its mission, and far better than if it were more. Just as Facebook had the right to acquiesce to this oversight board, other platforms equally have the right, and need to have the right, to say no to it or any other such board. It won't stop being important for the First Amendment to protect this discretion, regardless of how good a job this or any other board might do. While the Oversight Board can, and likely should, try to incorporate First Amendment values into its decisions to the extent it can, actual First Amendment law operates on a different axis than this system of private law ever would or could, with different interests and concerns to be balanced.

It is a mistake to think we could simply supplant all of those considerations with the judgment of this Oversight Board. No matter how thoughtful its decisions, nor how great the impact of what it decides, the Oversight Board is still not a government body. Neither it (nor even Facebook) has the sort of power the state has, nor any of the Constitutional limitations that would check it. Facebook remains a private actor, a company with a social media platform, and Facebook's Oversight Board simply an organization built to help it make its platform better. We should be extremely wary of expecting it to be anything other than that.

Especially because being that is already plenty for it to do some good.
CBP Facial Recognition Program Has Gathered 50 Million Face Photos, Identified Fewer Than 300 Imposters
The CBP and DHS have released their annual report [PDF] covering trade and travel. It touts the agencies' successes in these areas but raises some questions about the use of facial recognition tech to make the nation safer.

Dave Gershgorn, writing for OneZero, points out that the system the DHS and CBP claim is essential to national security isn't doing much to secure the nation. And it's not for a lack of input data.
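To put those headline numbers in perspective (a back-of-the-envelope calculation, not a figure from the report itself): fewer than 300 imposters identified out of roughly 50 million face photos works out to 300 / 50,000,000 ≈ 0.0006% -- about one identified imposter for every 167,000 photos gathered.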
Daily Deal: The Ultimate Remote-Work Collaboration Bundle
The Ultimate Remote-Work Collaboration Bundle has 7 courses to help you learn how to efficiently work from home. You'll learn how to set up a suitable workspace, and you will learn strategies to optimize productivity, streamline communication, and maintain a work-life balance. Courses also cover email and video conference etiquette, how to use Slack, Google Chat and Meet, Microsoft 365 Teams, and more. It's on sale for $25.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Zuckerberg's Grand Illusion: Understanding The Oversight Board Experiment
Everyone, it seems, has an opinion about The Oversight Board -- which everyone refers to as the Facebook Oversight Board because, despite its plans to work with other social media companies, it was created by Facebook and feels inevitably connected to Facebook by way of its umbilical cord. As we noted earlier this month, after the Oversight Board's first decisions came down, everyone who had a strong opinion about the Oversight Board seemed to use the results to confirm their existing beliefs about it.

To some, the Oversight Board is just an attempt for Facebook and Mark Zuckerberg to avoid taking responsibility for societal-level impacts of its platform. For others it's a cynical ploy/PR campaign to make it look like it's giving up some of its power. To still others, it's a weak attempt to avoid regulation. To many, it's a combination of all three. And, then, to some, it's an interesting experiment in content moderation that attempts to actually separate some final decision-making ability from a website itself. And, again, it could still be some combination of all of those. As I've said since it launched, I find it to be an interesting experiment, and even if the cynical reasons are a driving force behind it, that may not matter if the Board actually turns into a sort of authority that creates change. As I recently talked about in a podcast episode, the norms may become important.

That is, even if the whole thing is a cynical marketing ploy by a finger-waving Mark Zuckerberg, that might not matter if the Board itself actually is able to create meaningful change within the company and how it handles both moderation decisions and content moderation policy. And it's reasonable to point out that this has a high chance of failure and that there are a variety of structural problems in how the Board is set up, but that doesn't mean failure is guaranteed. And there's enough of a chance that the Board could make a difference that I think it's worth paying close attention to what happens with it.

And, if you believe that it's important to understand, then you owe it to yourself to read Prof. Kate Klonick's brilliant, thorough and detailed account of the making of the Board, with lots of behind-the-scenes tidbits. I can think of no one better to do this kind of reporting. Klonick, a former reporter who has a JD and a PhD and is now a law professor, wrote the seminal paper on social media content moderation, The New Governors, which is also required reading for anyone seeking to understand content moderation online.

Buried deep within the article is an important point that gets to what I say above, about how the norms around the Board might make it powerful, even if the Board is not initially imbued with actual power:
A 90 Year Old Shouldn't Have To Buy A $10,000 Ad Just To Get AT&T To Upgrade His Shitty DSL Line
Last week I wrote over at Motherboard about 90-year-old North Hollywood resident Aaron Epstein, whose family has been an AT&T subscriber since the 1930s. Epstein himself has been a loyal AT&T subscriber since around 1960, and has had the company's DSL service since it was first introduced in the late 90s. Unfortunately for Epstein, much like countless millions of other Americans, his DSL line only delivered speeds of 1.5 to 3 Mbps, and he's been waiting for decades for faster speeds to no avail.

To try to nudge AT&T to action, Epstein recently took out a $10,000 ad in the Wall Street Journal just to yell at AT&T CEO John Stankey:
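For a sense of what those speeds mean in practice (a rough illustration, not a figure from the article): at 3 Mbps, downloading a single 1 GB file takes (1 GB × 8,000 megabits per GB) ÷ 3 Mbps ≈ 2,667 seconds, or roughly 45 minutes -- a transfer that would take under 10 seconds on a gigabit connection.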
Minneapolis, Minnesota Becomes The Latest Major City To Pass A Facial Recognition Ban
Facial recognition bans are slowly becoming the status quo around the nation. Good.

The tech is faulty. And that's understating things. There's plenty of evidence showing the tech does little but generate false positives. Bogus arrests are starting to pile up.

Just as concerning are the false negatives -- something no one can actually tabulate. But you can't ignore the fact that AI prone to misidentifying people (especially minorities) is capable of letting as many guilty people go free as it's capable of subjecting innocent people to wrongful detainments and arrests.

Pockets of facial recognition resistance have cropped up. They've been mainly relegated to the coasts so far. Following multiple municipal bans, the state of California blocked use of this technology until 2022. The same thing happened on the other side of the country when Massachusetts lawmakers passed a moratorium on the tech -- one that will prevent law enforcement agencies from acquiring or using this tech until at least the end of 2021. This move followed several citywide bans passed by local governments in the state.

But what about the rest of the country? There's a lot of flyover country between the two coasts. And there's been very little activity in America's so-called "heartland." Until now. The Minneapolis city council has decided it's not just going to sit on the sidelines and see how this whole facial recognition thing plays out.
Conservative News Outlet Ordered To Pay More Than $250,000 In Legal Fees To Rachel Maddow, MSNBC
Last summer, California's anti-SLAPP law gave MSNBC host Rachel Maddow an early exit from a bogus defamation lawsuit brought by one of the few "news" outlets that's farther to the right than Fox News, One America News.

OAN claimed it had been defamed when Maddow referred to one of its hosts as a "Kremlin-paid journalist." This comment referred to OAN "reporter" Kristian Rouz's concurrent employment as a Sputnik "journalist." Sputnik is owned by the Russian government and tends to produce exactly the sort of reporting you'd expect from such an arrangement.

As the court noted during its dismissal of the suit, Maddow's position at MSNBC is one of a commentator -- someone expected to give their opinion on world events. Thus, the stuff OAN was arguing (badly) was defamatory was actually protected opinion. And it was informed opinion that had basis in fact: Rouz did work for Sputnik and did produce propaganda on the Russian government's behalf.

Now, OAN owes MSNBC and Maddow some money. Losing a defamation suit via an anti-SLAPP motion means the victorious party can ask for legal fees. As Mary Papenfuss reports for Huffington Post, OAN's parent company (Herring Networks) has been ordered to write a very big check.
Texas Power, Phone Outages Again Highlight How Infrastructure Underinvestment Will Be Fatal Moving Forward
If you hadn't noticed, the United States isn't really prepared for climate change. In part because corporations and disinformation mills have convinced countless Americans a destabilizing climate isn't actually happening. But also because we were already perpetually underinvesting in our core infrastructure before the symptoms of an unstable climate began to manifest. It's a massive problem that, as John Oliver highlighted six years ago, doesn't get the same attention as other pressing issues of the day. You know, like the latest influencer drama or the mortal threat posed by TikTok.

Infrastructure policy is treated as annoying and boring... until a crisis hits and suddenly everybody cares. As millions of Texans found out this week when the state's energy infrastructure crumbled like a rotten old house under the weight of heating energy demands, leaving millions without power during a major cold snap. While outlets like the Wall Street Journal and Fox News quickly tried to weaponize the crisis by blaming the renewable energy sector for the problems, deeper, more technical dives seem to indicate that a lack of wind power output wasn't the underlying problem:
Techdirt Podcast Episode 270: Regulating The Internet Won't Fix A Broken Government
Questions of content moderation and intermediary liability have seeped into just about everything these days, and not just with regards to Section 230 but also a whole host of laws in the US and around the world. A lot of people seem to think that a long list of societal and political failings can be rectified by regulating content online, and don't talk about how these problems run deeper and have been around for a long time. One person who doesn't fall into this trap is Heather Burns from the Open Rights Group, and she joins Mike on this week's episode to talk about why regulating the internet won't magically fix everything else.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
230 Matters: One Week Until Our Event & Discussion With Section 230's Authors
Get your tickets for Section 230 Matters before February 23rd »

A week from today, on Tuesday, February 23rd at 12:30pm PST, Techdirt is hosting a fundraising event: Section 230 Matters, a celebration of 25 years of Section 230 featuring both of its authors: Chris Cox and Senator Ron Wyden.

For reasons I still don't fully understand, Section 230 remains under attack from pretty much all corners. The law that has helped enable so many amazing services is constantly being blamed for an array of problems -- most of which are totally unrelated to Section 230. It's blamed for people saying bad things online. It's blamed for targeted advertising (which makes no sense at all). It's blamed for too little moderation, and too much moderation. It's blamed for who won elections and who lost elections.

But what almost no one does is look at how it has created many amazing things, including the ability for us to build a community here at Techdirt. It's enabled tons of other communities around the globe, often connecting people who had no way to connect before. Before the internet, most communities were limited to people in your geographic region. And that's fine for a small segment of communities, but is quite limiting in many cases -- and sometimes actively problematic when your interests, identity, needs or calling are frowned upon or denied by your local community.

Section 230 has enabled millions of different communities to form around all sorts of topics and interests. And, yes, some of those communities are problematic, but so many more are actively helpful and useful. And yet, somehow, we tend to ignore all of the good that 230 has enabled and focus narrowly on the few people with ill intent.

And thus, as we hit Section 230's 25th anniversary, we felt it deserves something of a party.

Apparently, many of you have agreed. We're both happy and humbled at the response to our event (on a cool events platform that also owes its existence to things like Section 230), to the point that we may run out of seats for attendees before long. If you're interested in attending (or sponsoring) please consider doing so soon!

Get your tickets for Section 230 Matters before February 23rd »
First Circuit Rejects Device Search Challenge, Says The Fourth Amendment Doesn't Apply At Our Nation's Borders
US borders continue to be lawless places. Not because there's more criminal activity there, but because the Constitution that protects us away from borders (and international airports, etc.) barely applies at all within 100 miles of them.

The First Circuit Court of Appeals is the latest appeals court to decide borders and constitutional protections don't mix. A lawsuit over warrantless, suspicionless device searches has been rejected, with the court finding in favor of the government.

This deepens the split between circuits and their interpretation of the Constitution's effectiveness within 100 miles of the border. The Ninth Circuit said device searches must be limited to searches for contraband. In that case, the government couldn't show evidence of drug dealing would be found on the suspect's phone. The court said the government couldn't use the border search warrant exception to engage in fishing expeditions for other criminal evidence.

The Fourth Circuit also limited border searches, but only required the government to show reasonable suspicion before engaging in a forensic examination of people's phones. Not great, but better than the "this is fine" rulings handed down by the Eleventh Circuit in 2018 and this one [PDF] from the First, handed down last week.

The case handled by the First Circuit is an anomaly. It deals with a civil lawsuit brought by several plaintiffs demanding an injunction blocking the government from engaging in suspicionless device searches. Everything else handled so far by Appeals Courts has arisen from criminal cases with defendants challenging evidence obtained by warrantless (and, in some cases, suspicionless) device searches.

The First Circuit rejects the district court's finding that border officers must have something more than "because we feel like it" to engage in phone searches. It says the Riley decision doesn't apply, even if it's a search incident to an arrest, because the border search exception means no border control officer should ever have to obtain a warrant.

According to the Appeals Court, a warrant requirement would just make things difficult for the government.
Daily Deal: The Learn to Draw Comic Book Characters Bundle
With the Learn to Draw Comic Book Characters Bundle, you'll learn techniques to systematically break down the various parts of the body into simpler shapes and understand how to work them into one figure. You will also learn how to draw and paint various fantasy art elements digitally, how to draw various heads and faces from any angle, and how to draw dynamic comic book superheroes. It's on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Parler's Found A New Host (And A New CEO)... For Now
On Monday Parler announced to the world that it was back with a new host (and a new interim CEO, after the board fired founder and CEO John Matze a few weeks ago). The "board" is controlled by the (until recently, secret) other founder: Rebekah Mercer, who famously also funded Cambridge Analytica, a company built upon sucking up social media data and using it to influence elections. When Matze was fired, he told Reuters that the company was being run by two people since he'd been removed: Matthew Richardson and Mark Meckler.

Richardson has many ties to the Mercers, and was associated with Cambridge Analytica and the Brexit effort. Meckler was, for a few years, considered one of the founding players and leading spokespeople for the "Tea Party" movement in the US, before splitting with that group and pushing for a new Constitutional Convention (at times in a "strange bedfellows" way with Larry Lessig). With the news on Monday that Parler was back up (sort of), it was also announced that Meckler had taken over as interim CEO.

Given the roles of Meckler, Richardson, and Mercer, you can bet that the site is still pushing to be the Trumpiest of social media sites. As for who is actually the new hosting firm, there's been some confusion in the press. The Twitter account @donk_enby, who famously scraped and archived most of the older Parler before it was shut down by Amazon last month, originally said Parler's new hosting firm was CloudRoute, which appears to be a Microsoft Azure reseller of some kind. In a later tweet, @donk_enby mentions that another firm, SkySilk, seems to share an IP space with CloudRoute, perhaps renting IP addresses from CloudRoute.

A few hours later, SkySilk admitted to being the new hosting company and put out a weird statement that suggests a somewhat naive team who had no idea what they were getting into:
State Laws Restricting Community Broadband Are Hurting US Communities During The Pandemic
We've talked for years about how telecom monopolies like Comcast and AT&T have ghost-written laws in more than twenty states, banning or hamstringing towns and cities looking to build their own broadband networks. We've also noted that, with COVID clearly illustrating how broadband is essential for education, opportunity, employment, and healthcare, such restrictions are looking dumber than ever. Voters should have every right to make local infrastructure decisions for themselves, and if big ISPs and armchair free market policy wonks don't want that to happen, incumbent ISPs should provide faster, cheaper, better service.

As the pandemic continues, some cities have found ways around such restrictions -- by focusing more specifically on serving struggling, low-income Americans. Texas is one such state that long ago passed municipal restrictions, courtesy of Dallas-based AT&T. AT&T doesn't want to upgrade or repair many of its DSL lines, but it also doesn't want communities upgrading or building networks either, lest it become a larger trend (too late). As a result, in San Antonio, an amazing 38% of homes still don't have residential broadband.

The city's existing network can't really expand commercial service thanks to a law written by AT&T. But that law doesn't prohibit the city from serving the poor by offering free service, something made possible by the recent CARES Act:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is Stephen T. Stone weighing in on one of the many comment-section incarnations of the neverending debate about conservative censorship:
Game Jam Winner Spotlight: ~THE GREAT GATSBY~
This week, we announced the winners of Gaming Like It's 1925, our third annual game jam celebrating works that entered the public domain in the US this year. Over the next few weeks, we'll be taking a closer look at each of the winning games from the six categories (in no particular order), starting today with the winner of Best Visuals: ~THE GREAT GATSBY~ by Floatingtable Games.

The first thing that strikes you about ~THE GREAT GATSBY~ is just how robust the graphics are for a game jam entry. It's a platformer presented in a retro pixel-art style — the designer explains that it has the same screen resolution as a Nintendo Game Boy, but one more color in its palette. The player is immediately presented with a beautiful title screen depicting one of the most iconic pieces of imagery from the novel.

From there, the game reveals itself to be more than just the mechanical prototype one might expect from a platformer in a game jam — rather, it's a fully-formed (albeit very short) experience that includes an opening "cinematic", some RPG-style interactions with NPCs including simple dialogue choices, two main platforming levels (the first of which requires you to retrace your steps, finding the path more challenging in reverse — a classic level design technique — and the second of which feels distinctly different and introduces a new kind of obstacle), and a clear conclusion. In other words, there's some genuine thought put into the game design here, and an effort to make the game "complete" that really paid off. But it's still the graphics that stand out the most, from the detailed cityscapes with parallax-animated skylines in the background and pixelated haze drifting through the air...

...to the interior scene with its own set of unique sprites, the stylish character portraits, and the simple, easily-understood interface elements.

Note the attention to detail — it would have been easy and perfectly acceptable to slap the same simple window graphic from the outdoor scenes onto the interior wall, but instead we get a brand new custom sprite that includes the skyline visible outside in the distance. That kind of extra effort is apparent all throughout the graphics of the game, and that's why it was an easy pick for the Best Visuals award.

Play ~THE GREAT GATSBY~ in your browser on Itch, and check out the other jam entries too. Congratulations to Floatingtable Games for the win! We'll be back next week with another game jam winner spotlight.
Hacked Florida Water Plant Found To Have Been Using Unsupported Windows 7 Machines And Shared Passwords
By now, you have likely heard about the recent hack into a Florida water treatment plant, which resulted in the attacker remotely raising the levels of sodium hydroxide to 100 times the normal level for the city's water supply. While those changes were remediated manually by onsite staff, it should be noted that this represents an outside attacker attempting to literally poison an entire city's water supply. Once the dangerous part of all of this was over, attention rightfully turned to figuring out how in the world this happened.

The answer, as is far too often the case, is poor security practices at the treatment plant.
Content Moderation Case Study: Valve Takes A Hands Off Approach To Porn Via Steam (2018)
Summary: Different platforms have different rules regarding “adult” content, but they often prove difficult to enforce. Even the US judicial system has declared that there is no easy way to define pornography, leading to Justice Potter Stewart’s famous line, “I know it when I see it.”

Many, if not most, internet websites have rules regarding such adult content, and in 2017 Valve’s online game platform, Steam, started trying to get more serious about enforcing its rules, leading to some smaller independent games being banned from the platform. Over the next few months more and more games were removed, though some started pointing out that this policy and the removals were doing the most harm to independent game developers.

In June of 2018, Valve announced that it had listened to various discussions on this and decided that it was going to take a very hands-off approach to moderating content, including adult content. After admitting that there are widespread debates over this, the company said that it would basically allow absolutely anything on the platform, with very, very few exceptions:
Civil Rights Groups Argue That Biden Should Drop Assange Prosecution, Noting That It Is An Attack On Journalism
It's easy to dislike and distrust Julian Assange. He's done many things to inspire both reactions. Still, it's important to separate personal feelings about the guy from the question of whether or not he broke US law by publishing the things he did via Wikileaks. For years, the Obama DOJ refused to indict him, in part due to the recognition that nearly all of Assange's activities were similar to the kinds of things that journalists do all the time. The Trump DOJ had no such restraint (even as some prosecutors warned of problems with the idea), and as we and others have pointed out, the indictment is a huge threat to investigative journalism and things like source protection.

Now that Biden is President, a whole bunch of civil rights groups have sent a letter to Acting Attorney General Monty Wilkinson, asking him to drop the case against Assange. The letter notes that many of the signatories do not agree with Assange or Wikileaks, but that doesn't mean the case is a good one:
The Copia Institute To The Oversight Board Regarding Facebook's Trump Suspension: There Was No Wrong Decision
The following is the Copia Institute's submission to the Oversight Board as it evaluates Facebook's decision to remove some of Trump's posts and his ability to post. While addressed to the Board, it's written for everyone thinking about how platforms moderate content.

The Copia Institute has advocated for social media platforms to permit the greatest amount of speech possible, even when that speech is unpopular. At the same time, we have also defended the right of social media platforms to exercise editorial and associative discretion over the user expression they permit on their services. This case illustrates why we have done both. We therefore take no position on whether Facebook's decision to remove former-President Trump's posts and disable his ability to make further posts was the right decision for Facebook to make, because choosing to do so and choosing not to are each defensible. Instead our goal is to explain why.

Reasons to be wary of taking content down. We have long held the view that the reflex to remove online content, even odious content, is generally not a healthy one. Not only can it backfire and lead to the removal of content undeserving of deletion, but it can have the effect of preserving a false monoculture in online expression. Social media is richer and more valuable when it can reflect the full fabric of humanity, even when that means enabling speech that is provocative or threatening to hegemony. Perhaps especially then, because so much important, valid, and necessary speech can so easily be labeled that way. Preserving different ideas, even when controversial, ensures that there will be space for new and even better ones, whereas policing content for compliance with current norms only distorts those norms' development.

Being too willing to remove content also has the effect of teaching the public that when it encounters speech that provokes, the way to respond is to demand its suppression. Instead of a marketplace of ideas, this burgeoning tendency means that discourse becomes a battlefield, where the view that will prevail is the one that can amass enough censorial pressure to remove its opponent—even if it's the view with the most merit. The more Facebook feeds this unfortunate instinct by removing user speech, the more vulnerable it will be to further pressure demanding still more removals, even of speech society would benefit from. The reality is that there will always be disagreements over the worth of certain speech. As long as Facebook assumes the role of an arbitrator, it will always find itself in the middle of an unwinnable tug-of-war between conflicting views. To break this cycle, removals should be made with reluctance and only with limited, specific, identifiable, and objective criteria to justify the exception. It may be hard to employ them consistently at scale, but more restraint will in the long run mean less error.

Reasons to be wary of leaving content up. The unique challenge presented in this case is that the Facebook user at the time of the posts in question was the President of the United States. This fact cuts in multiple ways: as the holder of the highest political office in the country, Trump's speech was of particular relevance to the public, and thus particularly worth facilitating.
After all, even if Trump's posts were debauched, these were the views of the President, and it would not have served the public for him to be of this character and the public not to know.

On the other hand, as the then-President of the United States, his words had greater impact than any other user's. They could do, and did, more harm, thanks to the weight of authority they acquired from the imprimatur of his office. And those real-world effects provided a perfectly legitimate basis for Facebook to take steps to (a) mitigate that damage by removing posts and (b) end the association that had allowed him to leverage Facebook for those destructive ends.

If Facebook concludes that anyone's use of its services is not in its interests, the interests of its user community, or the interests of the wider world Facebook and its users inhabit, it can absolutely decide to refuse that user continued access. And it can reach that conclusion based on wider context, beyond platform use. Facebook could, for instance, deny a confessed serial killer who only uses Facebook to publish poetry access to its service if it felt that the association ultimately served to enable the bad actor's bad acts. As with speech removals, such decisions should be made with reluctance and based on limited, specific, identifiable, and objective criteria, given the impact of such terminations. Just as continued access to Facebook may be unduly empowering for users, denying it can be equally disempowering. But in the case of Trump, as President he did not need Facebook to communicate to the public. He had access to other channels, and Facebook had no obligation to be conscripted to enable his mischief. Facebook has no obligation to enable anyone's mischief, whether they are a political leader or otherwise.

Potential middle grounds. When it comes to deciding whether to continue to provide Facebook's services to users and their expression, there is a certain amount of baby-splitting that can be done in response to the sorts of challenges raised by this case. For instance, Facebook does more than simply host speech that can be read by others; it provides tools for engagement such as comments, sharing, and amplification through privileged display, and in some instances allows monetization. Withdrawing any or all of these additional user benefits is a viable option that may go a long way toward minimizing the problems of continuing to host problematic speech or a problematic user without the platform needing to resort to removing either entirely.

Conclusion. Whether removing Trump's posts and further posting ability was the right decision or not depends on what sort of service Facebook wants to be and which choice it believes best serves that purpose. Facebook can make these decisions any way it wants, but how it makes them matters for minimizing public criticism and maximizing public cooperation. These decisions should be transparent to the user community, scalable to apply to future situations, and predictable, to the extent they can be, since circumstances and judgment will inevitably evolve. Every choice will have consequences, some good and some bad. The choice for Facebook is really to affirmatively choose which ones it wants to favor. There may not be any one right answer, or even any truly right answer. In fact, in the end the best decision may have little to do with the actual choice that results but rather the process used to get there.
Announcing The Winners Of The 3rd Annual Public Domain Game Jam!
It's that time again — the judges' scores and comments are in, and we've selected the winners of our third annual public domain game jam, Gaming Like It's 1925! As you know, we asked game designers of all stripes to submit new creations based on works published in 1925 that entered the public domain in the US this year — and just as in the past two jams, people got very creative in terms of choosing source material and deciding what to do with it. Of course, there were also a lot of submissions based on what is probably the most famous newly-public-domain work this year, The Great Gatsby — but while everyone expected that, nobody expected just how unique some of those entries would be! So without further delay, here are the winners in all six categories of Gaming Like It's 1925:

Best Analog Game — Fish Magic by David Harris

David Harris is our one and only returning winner this year: he won the same category in Gaming Like It's 1924 with his previous game, The 24th Kandinsky, which as the name suggests was based on the artwork of Wassily Kandinsky. This year's entry, Fish Magic, continues in a similar tradition, but now drawing inspiration from Paul Klee's 1925 painting of the same name. The game itself is very different, but just as captivating: it turns Klee's painting into a game board which players navigate to collect words, then tasks them with inventing new kinds of "fish magic" or "magic fish" with the words in their collection. Where The 24th Kandinsky was tailored to Kandinsky's abstract art, with players focused on manipulating the shapes and forms of his compositions, Fish Magic's gameplay is more suited to Klee's surreal and expressionist style, shifting the focus to the magical ideas and mysterious underwater world evoked by the titular painting. Our judges were immediately drawn to this clever and original premise, and impressed by how complete and well-thought-out the final product is, making Fish Magic a shoo-in for the Best Analog Game.

Best Digital Game — Rhythm Action Gatsby by Robert Tyler

Anyone working on a game based on The Great Gatsby for this year's jam knew they'd be facing competition, and would have to do something unexpected to truly stand out — and that's just what Robert Tyler did with Rhythm Action Gatsby. Rhythm action games are a simple premise, and it would have been easy to just slap one together, but this entry was lovingly crafted with an original music composition, recorded narration of a famous passage from the book, and carefully choreographed animations, all presented via a representation of the iconic cover art that we all recognize in a pretty, polished package — plus, bonus points for taking the time to include a basic accessibility option to turn off screen flashes. Our judges immediately found it cute, delightful, and genuinely fun, even taking multiple runs at the roughly-two-minute game to improve their scores, putting it straight to the top of the charts for the Best Digital Game.

Best Adaptation — The Great Gatsby: The Tabletop Roleplaying Game by Segoli

One of the things we loved most about last year's entries was that, beyond just using newly-public-domain materials, several of them brought themes of copyright and culture into the games themselves. While there was less of that this year, The Great Gatsby: The Tabletop Roleplaying Game by Segoli puts these concepts at the core of its game mechanics in a fun and amusing way that won some of our judges over before the end of the first page of rules.
The game is a robust, well-thought-out framework for improvising and roleplaying a new version of the story of The Great Gatsby, with the traditional setup of a Game Master and a group of players — with the twist that those players are encouraged to play as other public domain characters. Indeed, the comical character creation rules aren't about rolling dice to assign skill points, but about figuring out what's in the public domain where you're playing, and the core mechanic for player actions can be more or less challenging depending on whether the action invokes a still-copyrighted work. And yet despite all this playful copyright fun, the game also encourages a genuine exploration of the book and aims to produce great alternative versions of its story — all of which makes it the winner of Best Adaptation.

Best Remix (Tie!) — Art Apart by Ryan Sullivan, and There Are No Eyes Here by jukel

The Best Remix category, for a game that draws on multiple 1925 works, is one of the most interesting and most challenging categories in the jam. This year, there wasn't a single stand-out winner, but rather two games that are at once very similar and very different, and both deserving of the prize.

Art Apart by Ryan Sullivan is a game that, at first glance, nobody expected very much from — it's just a series of digital jigsaw puzzles of 1925 paintings. But once they dove in, our judges were pleasantly surprised by just how charming it was, thanks to a great array of paintings and a selection of gentle background music (also from 1925, of course!). This attention to detail carries through in other features, like a timer with a "best time" memory and a full-featured interface that lets the user switch between puzzles and background tracks at will. Mostly, it's a showcase of how the act of mixing multiple creative works can be valuable in and of itself when someone takes the time to choose those works well.

There Are No Eyes Here by jukel is its own kind of painting-based puzzle, taking an approach that is more focused on the elements of the artwork. Indeed, one wonders if the game was at least partly inspired by last year's The 24th Kandinsky, as it is also based on paintings by the famed Russian abstract artist, but this time ones from 1925. The game makes the elements of the paintings themselves into the levers of the puzzle, essentially becoming a spot-the-hidden-object game in which players locate the elements of the paintings that they can manipulate to complete each stage. It carefully mixes and matches elements of multiple Kandinsky paintings, forcing the player to carefully study their elements in a way most people haven't taken the time to do, and rewarding them with hand-crafted animations. It's a simple game that is as abstract and intriguing as the works it draws from.

Best Deep Cut — Remembering Grußau by Max Fefer (HydroForge Games)

Building on public domain works doesn't have to be all about chopping up and changing them, and games don't always have to achieve their goals in an oblique way. Sometimes, there are games like Remembering Grußau by Max Fefer/HydroForge Games that tell you exactly what they are: in this case, a guided reflection on the death of Jewish artist Felix Nussbaum and a work he painted in 1925, nearly twenty years before he was killed at Auschwitz. The game is calm, meditative, and deeply moving, remaining entirely focused on the painting and prompting the player to study it and consider its meaning with the knowledge of Nussbaum's life and death.
It's the only Twine game among this year's winners, but it also goes beyond the browser-based interactive story, tasking players with writing a letter on paper and returning to the game after spending time to contemplate it. Our judges found it impactful and highly effective in its goals, and by drawing on one specific lesser-known work and truly exploring it to the fullest, it became the clear choice for Best Deep Cut.

Best Visuals — ~THE GREAT GATSBY~ by Floatingtable Games

In terms of its visual presentation, ~THE GREAT GATSBY~ by Floatingtable Games is one of the most polished submissions we've ever had in these jams. It's a simple, classic platformer — complete with double-jumps and deadly spike hazards, plus some story cutscenes — and while the gameplay won't blow any minds, the striking monochrome pixel graphics will catch plenty of eyes. The brief level loosely tells the story of the second chapter of The Great Gatsby, and from the warm brown color palette to the parallax cityscape backdrop to the expressive character portraits, everything on screen just looks great. Why turn The Great Gatsby into a retro-style platformer? Well, why not? If nothing else, it's a great way to win this year's prize for Best Visuals!

The winning designers will be contacted via their Itch pages to arrange their prizes, so if you see your game listed here, keep an eye on your incoming comments!

In the coming weeks, we'll be taking a closer look at each of these winners in a series of posts, but for now you can head on over to the game jam page to try out all these games as well as several other great entries that didn't quite make the cut. Congratulations to all our winners, and a huge thanks to everyone who submitted a game — and finally, another thanks to our amazing panel of judges:
Daily Deal: The Complete 2021 American Sign Language Bundle
The need for American Sign Language speakers is continuing to rise. This Complete 2021 American Sign Language Bundle includes Levels 1, 2, and 3 and a bonus course for free: baby sign language. Learn the basics, from the sign language alphabet to more advanced signs, such as those for medical emergencies. This bundle is exactly what you need to become confident in sign language for many situations. It's on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Mental Health Team Handling 911 Calls In Denver Wraps Up Six Months With Dozens Of People Helped, Zero People Arrested
In June of last year -- as protests over police brutality occurred all over the nation -- Denver, Colorado rolled out a program that combined common sense with a slight "defunding" of its police department. It decided calls that might be better handled by social workers and mental health professionals should be handled by… social workers and mental health professionals.

The city's STAR (Support Team Assistance Response) team was given the power to handle 911 calls that didn't appear to deal with criminal issues. Calls related to mental health or social issues were routed to STAR, allowing cops to handle actual crime and allowing people in crisis to avoid having to deal with people who tend to treat every problem like a crime problem.

In its first three months, STAR handled 350 calls -- only a very small percentage of 911 calls. But the immediate developments appeared positive. A supposed indecent exposure call handled by STAR turned out to be a homeless woman changing clothes in an alley. A trespassing call turned out to be another homeless person setting up a tent near some homes. Suicidal persons were helped and taken to care centers. Homeless residents were taken to shelters. No one was arrested. No one was beaten, tased, or shot.

The zero arrests streak continues. STAR has released its six-month report [PDF] and the calls it has handled have yet to result in an arrest, strongly suggesting police officers aren't the best personnel to handle crises like these -- unless the desired result is more people in holding cells.

Granted, this is a very limited data set. At this point, STAR only has enough funding to support one van to handle calls during normal business hours: Monday-Friday from 10 am to 6 pm. Despite these limitations, the team handled 748 calls (about six calls per shift). Roughly a third of the calls handled came from police officers themselves, who requested STAR respond to an incident/call.

Not only did none of the 748 calls result in an arrest, but STAR got things under control faster than law enforcement officers.
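That per-shift figure holds up as rough arithmetic (assuming roughly 26 weeks of weekday-only shifts over the six months, ignoring holidays): 748 calls ÷ (26 weeks × 5 shifts per week) = 748 ÷ 130 ≈ 5.8 calls per shift, which matches the report's "about six."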
Ajit Pai Tried To Strangle A Broadband Aid Program For Low Income Americans. Then A Pandemic Hit.
While recently departed FCC boss Ajit Pai was perhaps best known for ignoring the public and making shit up to dismantle FCC authority over telecom monopolies, many of his other policies have proven less sexy to talk about -- but just as terrible. One of the biggest targets throughout Pai's four-year tenure as boss was the FCC's Lifeline program, an effort started by Reagan and expanded by Bush Jr. that long enjoyed bipartisan support until Trumpism rolled into town. Lifeline doles out a measly $9.25 per month subsidy that low-income homes can use to help pay a tiny fraction of their wireless, phone, or broadband bills (enrolled participants have to choose one). The FCC, under former boss Tom Wheeler, had voted to expand the program to cover broadband connections, something Pai (ever a champion of the poor) voted against. Despite constant pledges that one of his top priorities was fixing the "digital divide," Pai's tenure included a notable number of efforts to scuttle the Lifeline program that weren't paid much attention -- until a pandemic came to town. COVID-19 has shone a bright spotlight on the fact that 42 million Americans still can't access broadband (double official FCC estimates), and millions more can't afford service thanks to monopolization and limited competition. Under Pai's "leadership," the FCC voted 3-2 in late 2017 to eliminate an additional $25 Lifeline subsidy for low-income native populations on tribal land. As part of that effort, Pai also banned smaller mobile carriers from participating in the Lifeline program. Pai's attempt to neuter Lifeline in tribal areas certainly hurt overall enrollment, but didn't always fare well in the courts. One ruling (pdf), for example, noted that Pai and his staff not only pulled their justifications completely out of their asses, but failed to do any meaningful research whatsoever into how the cuts would impact poor and tribal communities:
Epic Games' Case Against Teenage Fortnite Cheater Finally Settles
As you may recall, back in 2017 Epic Games went on something of a crusade against cheating in its online hit game Fortnite. While much of Epic's attention was focused on websites that sold cheating software for the game, the company also set its sights on individuals who were actively promoting the use of cheating software in online videos. One of those Epic sued was a 14-year-old who, if I'm being frank, sounds like a bit of a jackass. While the teen, identified in court documents only as "C.R.", had his own mother defending him in letters to the judge in the case, he was also going around uploading still more videos advocating the use of cheating software and taunting Epic Games. Epic's lawyers defeated the teen's mother, which, real feather in their cap, I suppose. And so the case continued. Until recently, when, as Epic has done in other cases against underage targets of its litigation, the company and the defendant managed to come to a settlement.
Louisiana AG Sues Journalists To Keep Them From Obtaining Documents Detailing Sexual Harassment By Top Prosecutor
Another public official is attempting to make the public records request process even more aggravating and expensive than it already is. In many cases, the public does what it's allowed to do: request records. And, in many cases, governments refuse to do what they're obligated to do. So, people sue. They dig into their own pockets and force the government to do what it was always supposed to do. And when they do this, the general public also digs deep into its own pockets to pay government agencies to argue against the public's interests. This is diabolical enough. It's also, unfortunately, the standard M.O. for government agencies. Pay-to-play. Every FOIA request is a truth-or-dare game played on a field slanted toward the government, which has unlimited public funds to gamble with. But when just being dicks about it isn't diabolical enough, government agencies and officials go further. When it's simply not enough to engage in litigation as defendants and argue against accountability and transparency, these entities go on the offensive. That's right: government agencies and officials occasionally file proactive lawsuits, daring the defendants (i.e., citizens making public records requests) to prove they're entitled to the documents. This shifts the burden away from the government and onto the person with limited funds and almost nonexistent power. It's no different than demanding millions of dollars for the production of a few PDFs. It's an option deployed solely for the purpose of keeping everything under wraps. The latest participant in the "fuck the public and our obligations as public servants" game is Louisiana's Attorney General.
Annoyance Builds At Elon Musk Getting A Billion In Subsidies For Starlink Broadband
So we've noted a few times how Elon Musk's Starlink is going to be a great thing for folks stuck out of reach of traditional broadband options. Though with a $600 first-month price tag ($100 monthly bill plus a $500 hardware charge), it's not some magic bullet for curing the "digital divide." And without the capacity to service more densely populated areas, the service is only going to reach several million rural Americans. That's a good start, but it's only going to make a tiny dent for the 42 million Americans who lack access to any broadband, or the 83 million currently stuck under a broadband monopoly (usually Comcast). Starlink is going to be a good thing, but not transformative or truly disruptive to US telecom monopolies. There are a few other issues with the tech as well. One is the creation of light pollution that's harming scientific research (which US regulators have absolutely no plan to mitigate). Then there's the fact that Musk's Starlink recently gamed the broken FCC auction process to nab nearly a billion dollars it doesn't really deserve. Consumer group Free Press did a good job breaking down how we're throwing a billion dollars at the second richest man on the planet via an FCC RDOF auction that's very broken, and as a result easily exploitable by clever companies:
Twitter & India Still Arguing Over Whether Or Not Twitter Accounts Supporting Farmer Protests Need To Be Removed
Last week we wrote about the Indian government threatening to jail Twitter employees after the company reinstated a long list of accounts that the government had demanded be blocked (Twitter blocked them for a brief period of time before reinstating them). The accounts included some Indian celebrities and journalists who were talking about the headline news regarding farmer protests. The Modi government has proven to be incredibly thin-skinned about negative coverage, and despite Indian protections for free expression, was demanding out-and-out censorship of these accounts. The threats to lock up Twitter employees put the company in an impossible position -- and it has now agreed to geoblock (but not shut down) some accounts, though not those of journalists, activists, and politicians. The company implies, strongly, that the demands from the Indian government deliberately mixed actual incendiary/dangerous content with mere political criticism of the Modi administration -- and makes it clear that it's willing to take action on "harmful content" or accounts that legitimately violate Twitter's rules, but that it will not agree to do so for those whose speech it believes is protected under Indian freedom of expression principles:
Gun Trafficking Investigation Shows The FBI Is Still Capable Of Accessing Communications On Encrypted Devices
It's been clear for some time that the FBI and DOJ's overly dramatic calls for encryption backdoors are unwarranted. Law enforcement still has plenty of options for dealing with device encryption and end-to-end encrypted messaging services. Multiple reports have shown encryption is rarely an obstacle to investigations. And for all the noise the FBI has made about its supposedly huge stockpile of locked devices, it still has yet to hand over an accurate count of devices in its possession, more than two years after it discovered it had been using an inflated figure to back its "going dark" hysteria for months. An ongoing criminal case discussed by Thomas Brewster for Forbes provides more evidence that law enforcement is not only finding ways to bypass device encryption, but also to access the contents of end-to-end encrypted messages. This isn't the indictment of Signal (a popular encrypted messaging service) it first appears to be, though. The access point was the iPhone in law enforcement's possession which, despite still being locked, was subjected to a successful forensic extraction.
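The underlying point is worth spelling out: end-to-end encryption protects messages in transit, not the decrypted copies a messaging app keeps on the endpoint so its own user can read them. Here's a minimal, hypothetical Python sketch of that distinction -- it uses the `cryptography` package's Fernet as a stand-in for a real messaging protocol and a local SQLite file as a stand-in for an app's message store; none of this reflects Signal's actual implementation:

```python
# Toy illustration: E2E encryption covers the wire, not the endpoint.
import sqlite3
from cryptography.fernet import Fernet  # pip install cryptography

shared_key = Fernet.generate_key()  # stand-in for a negotiated session key
channel = Fernet(shared_key)

# Sender side: the message is ciphertext while crossing the network,
# so interception in transit yields nothing readable.
ciphertext = channel.encrypt(b"meet at the usual place")

# Recipient side: the app decrypts the message and caches it locally
# so its user can read their history.
db = sqlite3.connect("messages.db")
db.execute("CREATE TABLE IF NOT EXISTS messages (body TEXT)")
db.execute("INSERT INTO messages VALUES (?)",
           (channel.decrypt(ciphertext).decode(),))
db.commit()

# "Forensic extraction": whoever gains access to the device's storage
# just reads the local cache -- no encryption is broken at all.
print(db.execute("SELECT body FROM messages").fetchall())
```

Real apps do encrypt their local stores with device-held keys, but once a forensic tool can get the device into an unlocked (or partially unlocked) state, those keys are available too -- which is why endpoint access, not broken math, is the recurring theme in these cases.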
Daily Deal: The Complete 2021 Learn Linux Bundle
The Complete 2021 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $59. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Orrin Hatch, Who Once Wanted To Destroy The Computers Of Anyone Who Infringed On Copyrights, Now Lies About Section 230
Former Senator Orrin Hatch was such a lapdog of the anti-technology recording industry that former music tech startup entrepreneur and sci-fi author Rob Reid dubbed him "Senator Fido" in his comic novel about the music industry -- a senator willing to slip whatever anti-tech language the labels wanted into any new regulation. Even outside the world of fiction, Hatch was way out there in his anti-technology ideas. In 2003, when he was Chair of the powerful Judiciary Committee, he floated the idea that copyright holders should invest in malware that would literally destroy the computers of anyone who opened an unauthorized file. The suggestion was so crazy that when an exec from an anti-piracy company at the hearing where Hatch raised the idea pushed back, saying "no one is interested in destroying anyone's computer," Hatch immediately corrected him and said that, yes, indeed, he himself was very interested in that idea:
Dumb New GOP Talking Point: If You Restore Net Neutrality, You HAVE To Kill Section 230. Just Because!
As the FCC gets closer to restoring net neutrality, a new and bizarre GOP talking point has emerged. It goes something like this: if you're going to restore some modest rules holding telecom monopolies accountable, you just have to dismantle a law that protects free speech on the internet! This of course makes no coherent sense whatsoever, but that's not stopping those looking to demolish Section 230, a law that is integral to protecting speech online. Take FCC Commissioner Brendan Carr, for example. Despite holding a post at the nation's top communications regulator, Carr is seemingly incapable of even acknowledging that US telecom monopolies exist, or that said monopolization is directly responsible for the high broadband prices, spotty coverage, terrible customer service, and sluggish speeds everybody loathes. His tenure has been spent rubber stamping every whim of Comcast and AT&T, yet, for no coherent reason whatsoever, he's emerged as a major voice in the conversation about Section 230 and social media. This week, Carr had this to say at the INCOMPAS policy summit:
Steam Becomes Available In China, Offers 53 Whole Games To Customers
There is no shortage of critiques of Valve's online PC game store, Steam. That's to be expected, frankly, given how big the platform is. Still, on the ground with individual gamers, one of the most common complaints you'll hear is that the sheer volume of games on Steam is somewhat paralyzing for customers deciding where to spend their money. Steam tried to combat this for years with its Steam Curators program, in which gamers put their trust in curators to pare down game search results. It never really worked, though, as the program ran into the same issue as the store itself: the sheer volume of curators. And so nothing really got solved. Except in China, it seems, where Steam recently launched with a grand total of 53 whole games available to buyers.
Content Moderation Case Study: Twitter Attempts To Tackle COVID-related Vaccine Misinformation (2020)
Summary: Following on its efforts to tamp down election-related misinformation, Twitter's latest moderation efforts target misleading posts about COVID and the coronavirus, with a specific focus on vaccine-related information. Despite being months into a global pandemic, there has been a lack of clear, consistent communication from all levels of government in the United States, which has given conspiracy theorists and anti-vaccination activists plenty of room to ply their dubious trades. Twitter is hoping to reduce exposure to tweets containing misleading information as the nation continues to deal with multiple COVID outbreaks. Since early in the pandemic, Twitter has been aggressive in moderating misleading content regarding how the virus spreads, unproven remedies and treatments, and other health-related info. Its new policy expands on that, mainly to focus on false information and conspiracy theories regarding vaccines. Twitter won't be limiting itself to applying warnings to tweets with dubious content. The platform will force users to delete tweets that don't comply with its expanded code of conduct. Added to restrictions on misinformation about the spread of the disease and its morbidity rates are bans on false claims about immunization safety or COVID's dangers. Decisions to be made by Twitter:
How To Think About Online Ads And Section 230
There's been a lot of consternation about online ads, sometimes even for good reason. The problem is that not all of the criticism is sound or well-directed. Worse, the antipathy toward ad tech, whether well-founded or not, is coalescing into yet more unwise, and undeserved, attacks on Section 230 and the other expressive discretion the First Amendment protects. If these attacks are ultimately successful, none of the problems currently lamented will be solved, but lots of new ones will be created. As always, effectively addressing actual policy challenges first requires a better understanding of what those challenges are. The reality is that online ads raise at least three separate issues: those related to ad content itself, those related to audience targeting, and those related to audience tracking. They all require their own policy responses—and, as it happens, none of those policy responses calls for doing anything to change Section 230. In fact, to the extent that Section 230 is even relevant, the best policy response will always require keeping it intact. With regard to ad content, Section 230 applies, and should apply, to the platforms that run advertiser-supplied ads for the same reasons it applies, and should apply, to the platforms hosting the other sorts of content created by users. After all, ad content is, in essence, just another form of user-generated content (in fact, sometimes it's exactly like other forms of user content). And, as such, the principles behind having Section 230 apply to platforms hosting user-generated content in general also apply – and need to apply – here. For one thing, as with ordinary user-generated content, platforms are not going to be able to police all the ad content that may run on their sites. One important benefit of online advertising versus offline is that it enables far more entities to advertise to far larger audiences than they could afford to reach in the offline space. Online ads may therefore sometimes be cheesy, low-budget affairs, but it's ultimately good for the consumer if it's not just large, well-resourced, corporate entities who get to compete for public attention. We should be wary of implementing any policy that might choke off this commercial diversity. Of course, the flip side of making it possible for many more actors to supply many more ads is that the supply of online ads is nearly infinite, and thus the volume is simply too great for platforms to be able to scrutinize all of them (or even most of them). Furthermore, even in cases where platforms might be able to examine an ad, they are still unlikely to have the expertise to review it for every possible legal issue that might arise in every jurisdiction where the ad may appear. Section 230 exists in large part to alleviate these impossible content policing burdens, to make it possible for platforms to facilitate the appearance of any content at all. Nevertheless, Section 230 also exists to make it possible for platforms to try to police content anyway, to the extent that they can, by making it clear that they can't be held liable for any of those moderation efforts. And that's important if we want to encourage them to help eliminate ads of poor quality.
We want platforms to be able to do the best they can to get rid of dubious ads, and that means we need to make it legally safe for them to try. The more we think they should take these steps, the more we need policy to ensure that it's possible for platforms to respond to this market expectation. And that means we need to hold onto Section 230, because it is what affords them this practical ability. What's more, Section 230 affords platforms all this critical protection regardless of whether they profit from carrying content or not. The statute does not condition its protection on whether a platform facilitates content in exchange for money, nor is there any sort of constitutional obligation for a platform to provide its services on a charitable basis in order to benefit from the editorial discretion the First Amendment grants it. Sure, some platforms do pointedly host user content for free, but every platform needs some way of keeping the lights on and the servers running. And if the most effective way to keep services free for some users to post their content is to charge others for theirs, that is an absolutely constitutionally permissible decision for a platform to make. In fact, it may even be good policy to encourage, as it keeps services available for users who can't afford to pay for access. Charging some users to facilitate their content doesn't inherently make the platform complicit in the ad content's creation, or otherwise responsible for imbuing it with whatever quality is objectionable. Even if an advertiser has paid for algorithmic display priority, Section 230 should still apply, just as it applies to any other algorithmically driven display decision the platform employs. But on the off chance that the platform did take an active role in creating that objectionable content, Section 230 has never stood in the way of holding the platform responsible. Section 230 simply says that making it possible to post unlawful content is not the same as creating that content; for the platform to be liable as an "information content provider," aka a content creator, it has to have done something significantly more to birth the content's wrongful essence than simply be a vehicle for someone else to express it. That's true even if the platform allows the advertiser to choose its audience. After all, by that point the content has already been created. Audience targeting is something else entirely, but it's also something we should be wary of impinging upon. There may, of course, be situations where advertisers try to target certain types of ads (e.g., jobs or housing offers) in harmful ways. And when they do, it may be appropriate to sanction the advertiser for what may amount to illegally discriminatory behavior. But not every such targeting choice is wrongful; sometimes choosing narrow audiences based on protected status may even be beneficial. But if we change the law to allow platforms to be held equally liable with the advertiser for the advertiser's wrongful targeting choices, we will take away platforms' ability to offer audience targeting for any reason, even good ones, by making it legally unsafe in case an advertiser uses it for bad ones. Furthermore, doing so will upend all advertising as we've known it, and in a way that's offensive to the First Amendment.
There's a reason that certain things are advertised during prime time, or during sports broadcasts, or on late-night TV, just as there's a reason that the ads appearing in the New York Times are not necessarily the same ones running in Field & Stream or Ebony. The Internet didn't suddenly make those choices possible; advertisers have always wanted the most bang for their buck, to reach the people most likely to become their customers as cost-effectively as possible. And as a result they have always made choices about where to place their ads based on the demographics those placements likely reach. To now say that it should be illegal to allow advertisers to ever make such choices, simply because they may sometimes make these decisions wrongfully, would disrupt decades upon decades of past practice and likely run afoul of the First Amendment, which generally protects the choice of whom to speak to. In fact, it protects that choice regardless of the medium in question, and there is no principled reason why an online platform should be any less protected than a broadcaster or a printed periodical (especially not the former). Even if it would be better if advertisers weren't so selective—and it's a fair argument to make, and a fair policy to pursue—it's not an outcome we should use the weight of legal liability to try to force. It won't work, and it impinges on important constitutional freedoms we've come to count on. Rather, if there is any affirmative policy response to ad tech that is warranted, it likely concerns the third constituent part: audience tracking. But even so, any policy response will still need to be a careful one. There is nothing new about marketers wanting to fully understand their audiences; they have always tried to track them as well as the technology of the day would allow. What's new is how much better they now can. And the reality is that some of this tracking ability is intrusive and creepy, especially to the degree it happens without the audience being aware of how much of their behavior is being silently learned by strangers. There is room for policy to at minimum encourage, and potentially even require, such systems to be more transparent about how they learn about their audiences and what they tell others they've learned, and to give those audiences a chance to say no to much of it. But in considering the right regulatory response there are some important caveats. First, take Section 230 off the table. It has nothing to do with this regulatory problem, apart from enabling the platforms that may use ad tech to exist at all. You don't fix ad tech by killing the entire Internet; any regulatory solution is only a solution when it targets the actual problem. Which leads to the next caution: the regulatory schemes we've seen attempted so far (GDPR, CCPA, Prop. 24) are, even if well-intentioned, clunky, conflicting, and laden with overhead that compromises their effectiveness and imposes its own unintended and chilling costs, including on expression itself (and on more expression than just that of advertisers). Still, when people complain about online ads this is frequently the area they are complaining about, and it is worth focused attention to solve.
But it is tricky; given how easy it is for all online activity to leave digital footprints, as well as the many reasons we might want to allow those footprints to be measured and then those measurements to be used (even potentially for advertising), care is required to make sure we don't foreclose the good uses while aiming to suppress the bad. But for the right law, one that recognizes and reasonably reacts to the complexity of this policy challenge, there is an opportunity for a constructive regulatory response to this piece of the online ad tech puzzle. There is no quick fix – and ripping apart the Internet by doing anything to Section 230 is certainly not any kind of fix at all – but if something must be done about online advertising, this is the something that's worth the thoughtful policy attention to try to get right.
Trump And Oracle's Dumb TikTok Cronyism Falls Apart
Remember when America spent a year and a half hyperventilating about a Chinese teen dancing app instead of securing American infrastructure from Russian hackers and other threats? Remember when a bunch of GOP officials with a long track record of not caring whatsoever about consumer privacy or internet security exploited xenophobic fears about the app to land political allies Oracle and Walmart a major windfall? Remember when 90% of the press couldn't be bothered to inform readers this was all performative cronyism by an unqualified nitwit? Good times. This morning the Wall Street Journal reported that the much-hyped deal to sell ByteDance-owned TikTok to Oracle and Walmart is, unsurprisingly, dead in the wake of earlier legal challenges and Trump's election loss. Instead, the government appears poised to do what made sense from the start: focus on the broader problem of lax privacy and dodgy security standards across telecom, adtech, and tech in general, instead of singling out a teen dancing app:
Snippet Taxes Not Only Violate The Berne Convention, But Also Betray The Deepest Roots Of Newspaper Culture
Last week Techdirt wrote about Australia's proposed News Media Bargaining Code. This is much worse than the already awful Article 15 of the EU Copyright Directive (formerly Article 11), which similarly proposes to force Internet companies to pay for the privilege of sending traffic to traditional news sites. A post on Infojustice has a good summary of the ways in which the Australians aim to do more harm to the online world than the Europeans:
Daily Deal: AI And Python Development eBook Bundle
The AI and Python Development eBook Bundle has 15 eBooks to help you master artificial intelligence. You'll learn the history of AI and its early applications, then move on to AI in the modern world, where it's used in everything from neural nets to playing complex board games. You'll also learn about Python and TensorFlow. The bundle is on sale for $20. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Latest Anti-Accountability Move By Cops Involves Playing Music While Being Recorded In Hopes Of Triggering Copyright Takedowns
Cops tend to dislike being recorded. They don't care much for their own recording devices: they routinely disable equipment or conveniently "forget" to activate body cameras. And they dislike the recording devices everyone carries with them at all times: cellphones. Cellphone ubiquity means it's almost impossible for cops to prevent an incident or interaction from being recorded. Add these devices to the steadily increasing deployment of internet-connected security cameras and there's really nowhere to hide anymore. Simply shutting down recordings or arresting citizens for pointing cameras at them is a very risky option. There's tons of case law on the books saying recording public officials is protected First Amendment activity. So, cops are getting creative. Some of the less creative efforts include shining bright flashlights at people holding cameras in hopes of ruining any footage collected. Sometimes officers just stand directly in front of people who are recording to block their view of searches or arrests taking place. Often the excuse is "crowd control," when it's actually just an attempt at narrative control. Now, here's the latest twist: cops have figured out a way to prevent recordings from being streamed or uploaded to social media services or video platforms like YouTube. Believe it or not, it involves a particularly pernicious abuse of intellectual property protections.
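The mechanism being gamed here is the automated audio matching that platforms run against uploads. As a purely illustrative sketch of the general idea (no platform's actual matcher works this way, and real systems use acoustic features robust to noise and time shifts rather than raw byte hashes), matching boils down to fingerprinting small chunks of audio and measuring overlap against a library of reference tracks:

```python
# Toy audio matcher -- purely illustrative, not how Content ID or any
# real system works. Idea: fingerprint overlapping chunks of audio and
# flag uploads that share too many chunks with a reference track.
import hashlib

def fingerprint(samples: bytes, window: int = 64) -> set:
    """Hash every overlapping window of the signal into a fingerprint set."""
    return {
        hashlib.sha1(samples[i:i + window]).hexdigest()
        for i in range(max(1, len(samples) - window))
    }

def match_score(upload: bytes, reference: bytes) -> float:
    """Fraction of the reference's fingerprint that appears in the upload."""
    ref = fingerprint(reference)
    return len(ref & fingerprint(upload)) / len(ref) if ref else 0.0

# A bystander video of officers talking, with a pop song playing in the
# background, shares enough chunks with the label's reference recording
# to trip the filter -- which is exactly what the officers are counting on.
song = bytes(range(256)) * 8                   # stand-in for a licensed track
video = b"officer audio ..." + song + b"... more officer audio"

if match_score(video, song) > 0.5:
    print("upload flagged / muted for copyrighted audio")
```

The point of the trick: the matcher has no concept of context or fair use, so a song blaring from a cruiser's speakers can get a protest video muted or blocked just as reliably as an actual pirated upload.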
16 States Ask The FCC What The Hell Is The Point Of The Verizon Tracfone Merger
Late last year, Verizon announced it would be acquiring TracFone for around $6.2 billion. As we noted when the deal was first announced, it was yet another example of the "growth for growth's sake" mindset that has long infected US industry, particularly the telecom sector. There are no real benefits to be gleaned from further consolidation in the space (especially in the wake of a T-Mobile/Sprint merger that immediately resulted in layoffs and reduced US wireless competition by around 25%). Yet we adore pretending otherwise as the government rubber stamps deal after deal. In a letter (pdf) to the FCC, attorneys general from 16 states and the District of Columbia urged the agency to actually, you know, do its job and ask more questions about the deal. TracFone is among the biggest providers of Lifeline, the FCC program that provides services for about 1.7 million low-income subscribers in 43 states. Verizon is a lumbering media and telecom monopoly that views such programs (and the regulators that oversee them) largely as an irritant. Putting TracFone's Lifeline contributions at risk during a historic economic and health crisis isn't particularly bright. As such, the states are wondering if the FCC might take a few moments to make sure the deal doesn't harm those relying on the program:
Chastity Penis Lock Company That Was Hacked Says It's Now Totally Safe To Put Your Penis Back In That Chastity Lock
While we've covered the Internet of Broken Things for some time -- the genre in which companies fail to secure the internet-connected devices they sell -- the entire genre sort of jumped the shark in October of last year. That's when Qiui, a Chinese company, was found to have sold a penis chastity lock that communicated with an API that was wide open, without any password protection. The end result: users of a device that locks up their private parts could enjoy those private parts entirely at the pleasure of nefarious third parties. Qiui pushed out a fix to the API... but only for new devices, not for existing users. Why? Well, the company stated that pushing the fix out to existing devices would cause them all to lock up, with no override available. Understandably, there wasn't a whole lot of interest in the company's devices at that point. But fear not, target market for penis chastity locks! Qiui says it's now totally safe to use the product again!
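For anyone wondering what "wide open, without any password protection" means in practice: a state-changing API endpoint that never checks who's calling can be driven by anyone who figures out the URL scheme and a device ID. Here's a hypothetical sketch of that class of bug (the endpoint, fields, and IDs are invented for illustration and are not Qiui's actual API):

```python
# Hypothetical sketch of the flaw class -- NOT Qiui's actual API.
# The bug: a state-changing endpoint that trusts a guessable device ID
# and performs no authentication or ownership check whatsoever.
from flask import Flask, jsonify, request  # pip install flask

app = Flask(__name__)
devices = {"1001": {"locked": False}}  # device_id -> state

@app.route("/api/lock", methods=["POST"])
def lock_device():
    device_id = request.json["device_id"]
    # MISSING: any verification that the caller owns this device
    # (API token, session, request signature...). As written, anyone
    # who can count upward from 1000 controls every lock.
    devices[device_id]["locked"] = True
    return jsonify(devices[device_id])

# The boring, standard fix: authenticate the caller and verify ownership
# before touching state, e.g. (token_owns_device is a hypothetical helper):
#
#   token = request.headers.get("Authorization")
#   if not token_owns_device(token, device_id):
#       return jsonify({"error": "forbidden"}), 403

if __name__ == "__main__":
    app.run()
```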
Why Is Congress Pushing For Locking Up More Culture?
In a weird bit of performative nonsense, Senators Thom Tillis and Pat Leahy, along with Representatives Hakeem Jeffries and Nancy Mace, have come together to... try to help kids lock up culture under copyright. Specifically, they want a bill that would allow participants in the Congressional Art Competition and the Congressional App Competition to register a copyright for free. It is not at all clear why this is necessary, other than to perpetuate the myth that you need a copyright to be creative. First, to be clear, any such unique and original artwork is already covered by copyright. For better or for worse (by which I mean, for worse), the US now says that copyright is automatic from the time the work is "fixed" in a tangible medium (and if you try to point out that computer code is not a tangible medium, it gets them very, very angry, so don't bother...). So no one needs to register their copyright to be protected. Not registering does limit the copyright holder's ability to sue or to get statutory damages. But if anyone creating works for a Congressional Art Competition is seeking to sue others, well, that seems like a bigger problem right there. But here's the key point: copyright is supposed to be there solely as an incentive for creation. The entire setup and basis for copyright in the Constitution is so that Congress can create incentives to promote the progress of science and the useful arts (and copyright was meant for the "science" part; patents are for the "useful arts"). I can pretty much assure you that no one creating artwork or apps for a Congressional competition is doing so because they're incentivized by the copyright. They're doing so because of the competition itself and the desire to express themselves (and maybe get some attention for what they've done). So encouraging kids to lock these things up is bizarre and counterproductive. More to the point, why aren't these elected officials suggesting that the artists and developers entering these competitions explore the many Creative Commons options to help get their works more widely known? The answer, tragically, is as obvious as it is cynical. This is all driven by the legacy copyright industries, which keep pushing the myth that copyright = creation. And these are their favorite elected officials. Hollywood backed Tillis strongly in the last election, in which he was expected to lose, so he clearly owes them. Leahy has always been extremely close to Hollywood. Beyond being the Senate supporter of SOPA (his version was PIPA), Hollywood always rewards Leahy by giving him small roles in every Batman film. His daughter is also a Vice President and top lobbyist for the Motion Picture Association, Hollywood's top lobbying body. On the House side, the legacy copyright industry has been cultivating a close relationship with Jeffries for a while now, including setting up a neat fundraiser in which, if you just pay him (and Jerry Nadler) $5k each, you get to hang out with Jeffries at the Grammys. Nice work if you can get it. Nancy Mace is new to Congress, so she may just be along for the ride here. The problem with all of this is just how cynically corrupt it seems.
Even if it's only "soft corruption," the spectacle of a few Senators and Representatives pushing a misguided line of thinking -- one that completely undermines the very basis for copyright law -- in service of a myth pushed by Hollywood and the legacy recording industry just makes everyone respect copyright even less. This isn't what copyright is for, and it's shameful that these elected officials are pushing the myth forward.
Techdirt Podcast Episode 269: The Oversight Board Starts Overseeing Facebook
The first batch of decisions about Facebook's content moderation from the recently established Oversight Board has garnered lots of reactions, including many kneejerk ones — but there's plenty to discuss, so for this week's episode Mike is joined by Harvard Law's Evelyn Douek to talk about the decisions themselves and what they signal about the board as a whole. Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Section 230 Lets Tech Fix Content Moderation Issues. Congress Should Respect That
Congress is on the brink of destroying the internet as we know it. Bipartisanship in Congress is usually rare, but odd alliances have formed in the Capitol against Section 230, the law that governs content moderation online and is in large part responsible for the incredible growth and diversity of the internet. Republicans accuse Facebook and Twitter of censoring conservative users on their platforms. Democrats accuse these companies of not doing enough to remove extremist or false content. While both sides agree that S230 has got to go, they're at war with each other over who will drive regulatory efforts on content moderation. In the end, it won't really matter who wins. Either way, the spoils of this war will be a gutted S230 or its repeal. That's bad news for everyone. Before they ruin the internet entirely, Democrats and Republicans should take a step back and let industry standards catch up with the times. Removing Section 230 because of actors like Facebook and Twitter would mean harming other websites that haven't done anything wrong and putting innocent companies in the crossfire. On the other hand, too many new restrictions would cripple the competitive edge our tech sector has over the rest of the world. In both cases only larger companies like Facebook and Twitter would survive, while small businesses — like a family restaurant in Steubenville, Ohio, whose social media presence is driven entirely by customer reviews — would suffer and likely close. This doesn't mean that nothing should be done. Something should be done, and soft law is the way. Soft law is not "law" in the normal sense. It refers to the diverse tools used by private or government bodies to guide how industries should develop. Common examples of soft law include industry standards created by public-private partnerships, the LEED rating system of the U.S. Green Building Council, and the COVID treatment guidance issued by the Centers for Disease Control and Prevention. The uniqueness of soft law is that, instead of coming primarily from government regulators, it can come from anywhere. And instead of setting strict rules, it focuses on methods for attaining ideal outcomes. This makes it "soft": interpretation of the 'law' will differ between participants, who will not be fined for going their own way. Soft law provides guidance while encouraging innovation in reaching industry goals. In this way, it beats the rigidity of hard law. Soft law is already heavily utilized in artificial intelligence and automated vehicles, so legislators, regulators, and private companies advocating this approach would have strong precedent to point to as Section 230 talks continue. Moreover, this wouldn't be the first time we tried to regulate the internet with soft law. The early internet was 'regulated' by the Clinton administration through The Framework for Global Electronic Commerce, which established principles for how the federal government would regulate internet activities and how it expected the private sector to act. Most importantly, it stated that "…governments should recognize the unique qualities of the Internet. The genius and explosive success of the Internet can be attributed in part to its decentralized nature and to its tradition of bottom-up governance." As legislators look to revise regulations on the internet, it's essential they preserve the bottom-up governance that made the internet such an explosive success.
To that end, rather than prescribing a one-size-fits-all approach to content moderation, the government should encourage companies to develop their own standards and make those standards publicly accessible. Instead of prescribing a single set of rules for the internet, the government should hold up companies developing their own unique standards as models for the industry at large. A great example of one such model is Facebook's Oversight Board, which recently announced its first set of decisions in cases brought against the company. The board, composed of former prime ministers, think tank leaders, and legal scholars, deliberated and overturned Facebook's removal decisions in four of the five cases. Facebook released a statement saying it would abide by the decisions and work to create clearer content moderation policies. Facebook's approach is innovative for tech giants like itself, but smaller companies require different standards for their audiences. Nonprofits like Wikipedia handle this with their own community-driven system of volunteer administrators who collaborate on content issues. Smaller companies like AllTrails bring moderation to their entire user base, which can suggest new trail maps and edit current ones based on user feedback. Government needs to understand that what works for Facebook won't work for everyone else, and targeting Section 230 to fix all content moderation problems is the wrong approach. The key idea of Facebook's Oversight Board, Wikipedia's volunteer administrators, and AllTrails' public moderation is that they all accomplish the same goal in very different ways. And that's the essence of soft law. Protected by Section 230, and without an overarching government agency or document requiring them to reach a prescribed standard, companies are able to create innovative methods of content moderation all on their own. Some argue that self-regulation is a big nothing burger — that it's little more than a facade shielding companies from having to take any real responsibility for content posted on their sites. But that's not true. Leaving content moderation to the companies makes them accountable to the public. By now we should all know just how compelling the public can be. For instance, last June public perception of Facebook's ability to make good decisions on content moderation was overwhelmingly negative, with about 80% of people not trusting 'Big Tech,' but trusting the government even less. It's no coincidence that Facebook launched its Oversight Board that summer. Other examples of companies voluntarily imposing standards to meet the public's demand for accountability include Reddit's annual "Transparency Report," which allows the public to see what content is being removed and the reasons for removing it. This report is part of Reddit's interpretation of the Santa Clara Principles, a soft law effort led by the Electronic Frontier Foundation, the ACLU, and several other nonprofits. Following these principles allows the public to hold companies accountable to their own promises, addressing a major issue in customer trust while maintaining the integrity of Section 230. Section 230 gave entrepreneurs the protection and flexibility to explore new directions in tech, which led to some of the greatest economic and technological advancements in US history.
Instead of gutting a law that made the internet what it is today, regulators should respect soft law alternatives brought by the private sector and encourage companies to find what works, helping the users and businesses that rely on platforms currently protected by Section 230. Innovation is what will win the war over the web. We'll only have a free internet as long as we can keep it. Luke is an economics graduate student at George Mason University focusing on entrepreneurship, health, and innovative technology. You can follow him on Twitter @LiberLuke.
If We're Going To Talk About Discrimination In Online Ads, We Need To Talk About Roommates.com
It has been strange to see people talk about Section 230 and illegal discrimination as if it were somehow a new issue. In fact, one of the seminal court cases that articulated the parameters of Section 230, the Roommates.com case, did so in the context of housing discrimination. It's worth taking a look at what happened in that litigation and how it bears on the current debate. Roommates.com was (and apparently remains) a specialized platform that does what it says on the tin: it allows people to advertise for roommates. Back when the lawsuit began, it allowed people posting for roommates to include racial preferences in their ads, and it did so in two ways: (1) through a text box, where people could write anything about the roommate situation they were looking for, and (2) through answers to mandatory questions about roommate preferences. Roommates.com got sued by the Fair Housing Councils of the San Fernando Valley and San Diego for violating federal (FHA) and state (FEHA) fair housing law by allowing advertisers to express these discriminatory preferences. It pled a Section 230 defense, because the allegedly offending ads were user ads. But, in a notable Ninth Circuit decision, it both won and lost. In sum, the court found that Section 230 indeed applied to the user expression supplied through the text box. That expression, for better or worse, was entirely created by the user. If something was wrong with it, it was the user who had made it wrongful, and the user, as the information content provider, who could be held responsible—but not, per Section 230, the Roommates.com platform, which was the interactive computer service provider for purposes of the statute and therefore immune from liability for it. But the mandatory questions were another story. The court was concerned that, if these ads were illegally discriminatory, the platform had been a party to the creation of that illegality by prompting the user to express discriminatory preferences. And so the court found that Section 230 did not provide the platform a defense to any claim predicated on the content elicited by those questions. Even though it was a split and somewhat messy decision, the Roommates.com case has held up over the years and provided subsequent courts with guidance for figuring out when Section 230 should apply. There are still fights around the edges, but the question has basically boiled down to determining who imbued the content with its allegedly wrongful quality. If the platform did, then it's on the hook as much as the user may be. But its contribution to the wrongful content's creation still has to be more substantive than merely offering the user the opportunity to express something illegal.
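The distinction the Ninth Circuit drew maps neatly onto form design. A hypothetical sketch of the two kinds of input at issue (the field names and options here are invented for illustration, not Roommates.com's actual form):

```python
# Hypothetical sketch of the two form designs at issue in Roommates.com
# (field names and options invented for illustration).

# (1) Free-text box: the user alone composes whatever appears here, so
#     the Ninth Circuit held Section 230 shielded the platform from
#     liability for it -- the user is the "information content provider."
free_text_field = {
    "name": "about_your_situation",
    "type": "textarea",
    "required": False,
}

# (2) Mandatory structured question: the platform wrote both the prompt
#     and the answer options, and required a selection. Because the
#     platform itself supplied the discriminatory choices, the court
#     treated it as a co-creator of the content, outside Section 230.
preference_question = {
    "name": "roommate_preference",
    "type": "select",
    "required": True,
    "options": ["no preference", "same ethnicity only"],  # platform-authored
}
```

The takeaway generalizes: the more a platform's own design supplies the allegedly unlawful substance, the weaker its Section 230 position.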
Daily Deal: CaptionSaver Pro
CaptionSaver Pro takes care of your meeting notes. It's a Chrome extension that automatically saves Google Meet live captions to Google Drive. Pro adds features such as highlighting and timestamps to the automated note-taking, so you can focus your attention on your meetings. It's on sale for $25. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.