On Monday Parler announced to the world that it was back with a new host (and a new interim CEO after the board fired founder and CEO John Matze a few weeks ago). The "board" is controlled by the (until recently, secret) other founder: Rebekah Mercer, who famously also funded Cambridge Analytica, a company built upon sucking up social media data and using it to influence elections. When Matze was fired, he told Reuters that the company had been run by two people since he'd been removed: Matthew Richardson and Mark Meckler.

Richardson has many ties to the Mercers, and was associated with Cambridge Analytica and the Brexit effort. Meckler was, for a few years, considered one of the founding players and leading spokespeople for the "Tea Party" movement in the US, before splitting with that group and pushing for a new Constitutional Convention (at times in a "strange bedfellows" way with Larry Lessig). With the news on Monday that Parler was back up (sort of), it was also announced that Meckler had taken over as interim CEO.

Given the roles of Meckler, Richardson, and Mercer, you can bet that the site is still pushing to be the Trumpiest of social media sites. As for who the new hosting firm actually is, there's been some confusion in the press. The Twitter account @donk_enby, who famously scraped and archived most of the older Parler before it was shut down by Amazon last month, originally said Parler's new hosting firm was CloudRoute, which appears to be some kind of Microsoft Azure reseller. In a later tweet, @donk_enby noted that another firm, SkySilk, seems to share IP space with CloudRoute, perhaps renting IP addresses from it.

A few hours later, SkySilk admitted to being the new hosting company and put out a weird statement suggesting a somewhat naive team that had no idea what it was getting into:
We've talked for years about how telecom monopolies like Comcast and AT&T have ghostwritten laws in more than twenty states banning or hamstringing towns and cities looking to build their own broadband networks. We've also noted that, with COVID clearly illustrating how essential broadband is for education, opportunity, employment, and healthcare, such restrictions look dumber than ever. Voters should have every right to make local infrastructure decisions for themselves, and if big ISPs and armchair free market policy wonks don't want that to happen, incumbent ISPs should provide faster, cheaper, better service.

As the pandemic continues, some cities have found ways around such restrictions -- by focusing more specifically on serving struggling, low-income Americans. Texas is one state that long ago passed such municipal restrictions, courtesy of Dallas-based AT&T. AT&T doesn't want to upgrade or repair many of its DSL lines, but it also doesn't want communities upgrading or building networks either, lest it become a larger trend (too late). As a result, in San Antonio, an amazing 38% of homes still don't have residential broadband.

The city's existing network can't really expand commercial service thanks to a law written by AT&T. But that law doesn't prohibit the city from serving the poor by offering free service, something made possible by the recent CARES Act:
This week, our first place winner on the insightful side is Stephen T. Stone weighing in on one of the many comment-section incarnations of the neverending debate about conservative censorship:
This week, we announced the winners of Gaming Like It's 1925, our third annual game jam celebrating works that entered the public domain in the US this year. Over the next few weeks, we'll be taking a closer look at each of the winning games from the six categories (in no particular order), starting today with the winner of Best Visuals: ~THE GREAT GATSBY~ by Floatingtable Games.

The first thing that strikes you about ~THE GREAT GATSBY~ is just how robust the graphics are for a game jam entry. It's a platformer presented in a retro pixel-art style — the designer explains that it has the same screen resolution as a Nintendo Game Boy, but one more color in its palette. The player is immediately presented with a beautiful title screen depicting one of the most iconic pieces of imagery from the novel.

From there, the game reveals itself to be more than just the mechanical prototype one might expect from a platformer in a game jam — rather, it's a fully-formed (albeit very short) experience that includes an opening "cinematic", some RPG-style interactions with NPCs including simple dialogue choices, two main platforming levels (the first of which requires you to retrace your steps, finding the path more challenging in reverse — a classic level design technique — and the second of which feels distinctly different and introduces a new kind of obstacle), and a clear conclusion. In other words, there's some genuine thought put into the game design here, and an effort to make the game "complete" that really paid off. But it's still the graphics that stand out the most, from the detailed cityscapes with parallax-animated skylines in the background and pixelated haze drifting through the air, to the interior scene with its own set of unique sprites, the stylish character portraits, and the simple, easily-understood interface elements.

Note the attention to detail — it would have been easy and perfectly acceptable to slap the same simple window graphic from the outdoor scenes onto the interior wall, but instead we get a brand new custom sprite that includes the skyline visible outside in the distance. That kind of extra effort is apparent all throughout the graphics of the game, and that's why it was an easy pick for the Best Visuals award.

Play ~THE GREAT GATSBY~ in your browser on Itch, and check out the other jam entries too. Congratulations to Floatingtable Games for the win! We'll be back next week with another game jam winner spotlight.
By now, you have likely heard about the recent hack of a Florida water treatment plant, in which the attacker remotely raised the level of sodium hydroxide in the city's water supply to 100 times the normal amount. While those changes were remediated manually by onsite staff, it should be noted that this represents an outside attacker attempting to literally poison an entire city's water supply. Once the dangerous part of all of this was over, attention rightfully turned to figuring out how in the world this happened.

The answer, as is far too often the case, is poor security practices at the treatment plant.
Summary: Different platforms have different rules regarding “adult” content, but those rules often prove difficult to enforce. Even the US judicial system has declared that there is no easy way to define pornography, leading to Justice Potter Stewart’s famous line, “I know it when I see it.”

Many, if not most, internet websites have rules regarding such adult content, and in 2017 Valve’s online game platform, Steam, started trying to get more serious about enforcing its rules, leading to some smaller independent games being banned from the platform. Over the next few months more and more games were removed, though some started pointing out that this policy and the removals were doing the most harm to independent game developers.

In June of 2018, Valve announced that it had listened to the various discussions on this and decided it was going to take a very hands-off approach to moderating content, including adult content. After admitting that there are widespread debates over this, the company said that it would basically allow absolutely anything on the platform, with very, very few exceptions:
It's easy to dislike and distrust Julian Assange. He's done many things to inspire both reactions. Still, it's important to separate personal feelings about the guy from the question of whether or not he broke US law by publishing the things he did via Wikileaks. For years, the Obama DOJ refused to indict him, in part due to the recognition that nearly all of Assange's activities were similar to the kinds of things that journalists do all the time. The Trump DOJ had no such restraint (even as some prosecutors warned of problems with the idea), and as we and others have pointed out, the indictment is a huge threat to investigative journalism and things like source protection.

Now that Biden is President, a whole bunch of civil rights groups have sent a letter to Acting Attorney General Monty Wilkinson, asking him to drop the case against Assange. The letter notes that many of the signatories do not agree with Assange or Wikileaks, but that doesn't mean the case is a good one:
The following is the Copia Institute's submission to the Oversight Board as it evaluates Facebook's decision to remove some of Trump's posts and his ability to post. While addressed to the Board, it's written for everyone thinking about how platforms moderate content.

The Copia Institute has advocated for social media platforms to permit the greatest amount of speech possible, even when that speech is unpopular. At the same time, we have also defended the right of social media platforms to exercise editorial and associative discretion over the user expression they permit on their services. This case illustrates why we have done both. We therefore take no position on whether Facebook's decision to remove former-President Trump's posts and disable his ability to make further posts was the right decision for Facebook to make, because either choosing to do so or choosing not to is defensible. Instead our goal is to explain why.

Reasons to be wary of taking content down. We have long held the view that the reflex to remove online content, even odious content, is generally not a healthy one. Not only can it backfire and lead to the removal of content undeserving of deletion, but it can have the effect of preserving a false monoculture in online expression. Social media is richer and more valuable when it can reflect the full fabric of humanity, even when that means enabling speech that is provocative or threatening to hegemony. Perhaps especially then, because so much important, valid, and necessary speech can so easily be labeled that way. Preserving different ideas, even when controversial, ensures that there will be space for new and even better ones, whereas policing content for compliance with current norms only distorts those norms' development.

Being too willing to remove content also has the effect of teaching the public that when it encounters speech that provokes, the way to respond is to demand its suppression. Instead of a marketplace of ideas, this burgeoning tendency means that discourse becomes a battlefield, where the view that prevails is the one that can amass enough censorial pressure to remove its opponent—even if the silenced view is the one with the most merit. The more Facebook feeds this unfortunate instinct by removing user speech, the more vulnerable it will be to further pressure demanding still more removals, even of speech society would benefit from. The reality is that there will always be disagreements over the worth of certain speech. As long as Facebook assumes the role of an arbitrator, it will always find itself in the middle of an unwinnable tug-of-war between conflicting views. To break this cycle, removals should be made with reluctance, and only with limited, specific, identifiable, and objective criteria to justify the exception. It may be hard to employ them consistently at scale, but more restraint will in the long run mean less error.

Reasons to be wary of leaving content up. The unique challenge presented in this case is that the Facebook user at the time of the posts in question was the President of the United States. This fact cuts in multiple ways: as the holder of the highest political office in the country, Trump's speech was of particular relevance to the public, and thus particularly worth facilitating.
After all, even if Trump's posts were debauched, these were the views of the President, and it would not have served the public for him to be of this character and the public not to know.

On the other hand, as the then-President of the United States, his words had greater impact than any other user's. They could do, and did, more harm, thanks to the weight of authority they acquired from the imprimatur of his office. And those real-world effects provided a perfectly legitimate basis for Facebook to take steps to (a) mitigate that damage by removing posts and (b) end the association that had allowed him to leverage Facebook for those destructive ends.

If Facebook concludes that anyone's use of its services is not in its interests, the interests of its user community, or the interests of the wider world Facebook and its users inhabit, it can absolutely decide to refuse that user continued access. And it can reach that conclusion based on wider context, beyond platform use. Facebook could, for instance, deny a confessed serial killer who only uses Facebook to publish poetry access to its service if it felt that the association ultimately served to enable the bad actor's bad acts. As with speech removals, such decisions should be made with reluctance and based on limited, specific, identifiable, and objective criteria, given the impact of such terminations. Just as continued access to Facebook may be unduly empowering for users, denying it can be equally disempowering. But in the case of Trump, as President he did not need Facebook to communicate with the public. He had access to other channels, and Facebook had no obligation to be conscripted to enable his mischief. Facebook has no obligation to enable anyone's mischief, whether they are a political leader or otherwise.

Potential middle grounds. When it comes to deciding whether to continue to provide Facebook's services to users and their expression, there is a certain amount of baby-splitting that can be done in response to the sorts of challenges raised by this case. For instance, Facebook does more than simply host speech that can be read by others; it provides tools for engagement such as comments, sharing, and amplification through privileged display, and in some instances allows monetization. Withdrawing any or all of these additional user benefits is a viable option that may go a long way toward minimizing the problems of continuing to host problematic speech or a problematic user, without the platform needing to resort to removing either entirely.

Conclusion. Whether removing Trump's posts and further posting ability was the right decision or not depends on what sort of service Facebook wants to be and which choice it believes best serves that purpose. Facebook can make these decisions any way it wants, but to minimize public criticism and maximize public cooperation, how it makes them is what matters. These decisions should be transparent to the user community, scalable to apply to future situations, and predictable in how they would apply, to the extent they can be, since circumstances and judgment will inevitably evolve. Every choice will have consequences, some good and some bad. The choice for Facebook is really to affirmatively choose which ones it wants to favor. There may not be any one right answer, or even any truly right answer. In fact, in the end the best decision may have less to do with the actual choice that results than with the process used to get there.
It's that time again — the judges' scores and comments are in, and we've selected the winners of our third annual public domain game jam, Gaming Like It's 1925! As you know, we asked game designers of all stripes to submit new creations based on works published in 1925 that entered the public domain in the US this year — and just as in the past two jams, people got very creative in terms of choosing source material and deciding what to do with it. Of course, there were also a lot of submissions based on what is probably the most famous newly-public-domain work this year, The Great Gatsby — but while everyone expected that, nobody expected just how unique some of those entries would be! So without further delay, here are the winners in all six categories of Gaming Like It's 1925:

Best Analog Game — Fish Magic by David Harris

David Harris is our one and only returning winner this year: he won the same category in Gaming Like It's 1924 with his previous game, The 24th Kandinsky, which as the name suggests was based on the artwork of Wassily Kandinsky. This year's entry, Fish Magic, continues in a similar tradition, but now draws inspiration from Paul Klee's 1925 painting of the same name. The game itself is very different, but just as captivating: it turns Klee's painting into a game board which players navigate to collect words, then tasks them with inventing new kinds of "fish magic" or "magic fish" with the words in their collection. Where The 24th Kandinsky was tailored to Kandinsky's abstract art, with players focused on manipulating the shapes and forms of his compositions, Fish Magic's gameplay is more suited to Klee's surreal and expressionist style, shifting the focus to the magical ideas and mysterious underwater world evoked by the titular painting. Our judges were immediately drawn to this clever and original premise, and impressed by how complete and well-thought-out the final product is, making Fish Magic a shoo-in for Best Analog Game.

Best Digital Game — Rhythm Action Gatsby by Robert Tyler

Anyone working on a game based on The Great Gatsby for this year's jam knew they'd be facing competition, and would have to do something unexpected to truly stand out — and that's just what Robert Tyler did with Rhythm Action Gatsby. Rhythm action games are built on a simple premise, and it would have been easy to just slap one together, but this entry was lovingly crafted with an original music composition, recorded narration of a famous passage from the book, and carefully choreographed animations, all presented via a representation of the iconic cover art we all recognize, in a pretty, polished package — plus, bonus points for taking the time to include a basic accessibility option to turn off screen flashes. Our judges immediately found it cute, delightful, and genuinely fun, even taking multiple runs at the roughly-two-minute game to improve their scores, putting it straight to the top of the charts for Best Digital Game.

Best Adaptation — The Great Gatsby: The Tabletop Roleplaying Game by Segoli

One of the things we loved most about last year's entries was that, beyond just using newly-public-domain materials, several of them brought themes of copyright and culture into the games themselves. While there was less of that this year, The Great Gatsby: The Tabletop Roleplaying Game by Segoli puts these concepts at the core of its game mechanics in a fun and amusing way that won some of our judges over before the end of the first page of rules.
The game is a robust, well-thought-out framework for improvising and roleplaying a new version of the story of The Great Gatsby, with the traditional setup of a Game Master and a group of players — with the twist that those players are encouraged to play as other public domain characters. Indeed, the comical character creation rules aren't about rolling dice to assign skill points, but about figuring out what's in the public domain where you're playing, and the core mechanic for player actions can be more or less challenging depending on whether the action invokes a still-copyrighted work. And yet despite all this playful copyright fun, the game also encourages a genuine exploration of the book and aims to produce great alternative versions of its story — all of which makes it the winner of Best Adaptation.

Best Remix (Tie!) — Art Apart by Ryan Sullivan, and There Are No Eyes Here by jukel

The Best Remix category, for a game that draws on multiple 1925 works, is one of the most interesting and most challenging categories in the jam. This year, there wasn't a single stand-out winner, but rather two games that are at once very similar and very different, and both deserving of the prize.

Art Apart by Ryan Sullivan is a game that, at first glance, nobody expected very much from — it's just a series of digital jigsaw puzzles of 1925 paintings. But once they dove in, our judges were pleasantly surprised by just how charming it was, thanks to a great array of paintings and a selection of gentle background music (also from 1925, of course!). This attention to detail carries through in other features, like a timer with a "best time" memory and a full-featured interface that lets the user switch between puzzles and background tracks at will. Mostly, it's a showcase of how the act of mixing multiple creative works can be valuable in and of itself when someone takes the time to choose those works well.

There Are No Eyes Here by jukel is its own kind of painting-based puzzle, taking an approach that is more focused on the elements of the artwork. Indeed, one wonders if the game was at least partly inspired by last year's The 24th Kandinsky, as it is also based on paintings by the famed Russian abstract artist, but this time ones from 1925. The game makes the elements of the paintings themselves into the levers of the puzzle, essentially becoming a spot-the-hidden-object game in which players locate the elements of the paintings that they can manipulate to complete each stage. It carefully mixes and matches elements of multiple Kandinsky paintings, forcing the player to carefully study them in a way most people haven't taken the time to do, and rewarding them with hand-crafted animations. It's a simple game that is as abstract and intriguing as the works it draws from.

Best Deep Cut — Remembering Grußau by Max Fefer (HydroForge Games)

Building on public domain works doesn't have to be all about chopping up and changing them, and games don't always have to achieve their goals in an oblique way. Sometimes, there are games like Remembering Grußau by Max Fefer/HydroForge Games that tell you exactly what they are: in this case, a guided reflection on the death of Jewish artist Felix Nussbaum and a work he painted in 1925, nearly twenty years before he was killed at Auschwitz. The game is calm, meditative, and deeply moving, remaining entirely focused on the painting and prompting the player to study it and consider its meaning with the knowledge of Nussbaum's life and death.
It's the only Twine game among this year's winners, but it also goes beyond the browser-based interactive story, tasking players with writing a letter on paper and returning to the game after spending time contemplating it. Our judges found it impactful and highly effective in its goals, and by drawing on one specific lesser-known work and truly exploring it to the fullest, it became the clear choice for Best Deep Cut.

Best Visuals — ~THE GREAT GATSBY~ by Floatingtable Games

In terms of its visual presentation, ~THE GREAT GATSBY~ by Floatingtable Games is one of the most polished submissions we've ever had in these jams. It's a simple, classic platformer — complete with double-jumps and deadly spike hazards, plus some story cutscenes — and while the gameplay won't blow any minds, the striking monochrome pixel graphics will catch plenty of eyes. The brief level loosely tells the story of the second chapter of The Great Gatsby, and from the warm brown color palette to the parallax cityscape backdrop to the expressive character portraits, everything on screen just looks great. Why turn The Great Gatsby into a retro-style platformer? Well, why not? If nothing else, it's a great way to win this year's prize for Best Visuals!

The winning designers will be contacted via their Itch pages to arrange their prizes, so if you see your game listed here, keep an eye on your incoming comments!

In the coming weeks, we'll be taking a closer look at each of these winners in a series of posts, but for now you can head on over to the game jam page to try out all these games as well as several other great entries that didn't quite make the cut. Congratulations to all our winners, and a huge thanks to everyone who submitted a game — and finally, another thanks to our amazing panel of judges:
The need for American Sign Language speakers continues to rise. This Complete 2021 American Sign Language Bundle includes Levels 1, 2, and 3, plus a free bonus course: baby sign language. Learn the basics, from the sign language alphabet to more advanced signs, such as those for medical emergencies. This bundle is exactly what you need to become confident in sign language for many situations. It's on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
In June of last year -- as protests over police brutality occurred all over the nation -- Denver, Colorado rolled out a program that combined common sense with a slight "defunding" of its police department. It decided calls that might be better handled by social workers and mental health professionals should be handled by… social workers and mental health professionals.

The city's STAR (Support Team Assistance Response) team was given the power to handle 911 calls that didn't appear to deal with criminal issues. Calls related to mental health or social issues were routed to STAR, allowing cops to handle actual crime and allowing people in crisis to avoid having to deal with people who tend to treat every problem like a crime problem.

In its first three months, STAR handled 350 calls -- only a very small percentage of 911 calls. But the immediate developments appeared positive. A supposed indecent exposure call handled by STAR turned out to be a homeless woman changing clothes in an alley. A trespassing call turned out to be another homeless person setting up a tent near some homes. Suicidal persons were helped and taken to care centers. Homeless residents were taken to shelters. No one was arrested. No one was beaten, tased, or shot.

The zero-arrest streak continues. STAR has released its six-month report [PDF] and the calls it has handled have yet to result in an arrest, strongly suggesting police officers aren't the best personnel to handle crises like these -- unless the desired result is more people in holding cells.

Granted, this is a very limited data set. At this point, STAR only has enough funding to support one van to handle calls during normal business hours: Monday-Friday from 10 am to 6 pm. Despite these limitations, the team handled 748 calls (about six calls per shift). Roughly a third of the calls handled came from police officers themselves, who requested STAR respond to an incident/call.

Not only did none of the 748 calls result in an arrest, but STAR got things under control faster than law enforcement officers.
While recently departed FCC boss Ajit Pai was perhaps best known for ignoring the public and making shit up to dismantle FCC authority over telecom monopolies, many of his other policies have proven less sexy to talk about -- but just as terrible.

One of the biggest targets throughout Pai's four-year tenure as boss was the FCC's Lifeline program, an effort started by Reagan and expanded by Bush Jr. that long enjoyed bipartisan support until Trumpism rolled into town. Lifeline doles out a measly $9.25 per month subsidy that low-income homes can use to help pay a tiny fraction of their wireless, phone, or broadband bills (enrolled participants have to choose one). The FCC, under former boss Tom Wheeler, had voted to expand the service to cover broadband connections, something Pai (ever a champion of the poor) voted down.

Despite constant pledges that one of his top priorities was fixing the "digital divide," Pai's tenure as boss included a notable number of efforts to scuttle the Lifeline program that weren't paid much attention -- until a pandemic came to town. COVID-19 has shone a bright spotlight on the fact that 42 million Americans still can't access broadband (double official FCC estimates), and millions more can't afford service thanks to monopolization and limited competition.

Under Pai's "leadership," the FCC voted 3-2 in late 2017 to eliminate a $25 additional Lifeline subsidy for low-income native populations on tribal land. As part of that effort, Pai also banned smaller mobile carriers from participating in the Lifeline program. His attempt to neuter Lifeline in tribal areas certainly hurt overall enrollment, but didn't always fare well in the courts. One ruling (pdf), for example, noted that Pai and his staff not only pulled their justifications completely out of their asses, but failed to do any meaningful research whatsoever into how the cuts would impact poor and tribal communities:
As you may recall, back in 2017 Epic Games went on something of a crusade against cheating in its online hit game Fortnite. While much of Epic's attention was focused on websites that sold cheating software for the game, the company also set its sights on individuals who were actively promoting the use of cheating software in online videos. One of those Epic sued was a 14-year-old who, if I'm being frank, sounds like a bit of a jackass. Even as the teen, identified in court documents only as "C.R.", had his own mother defending him in letters to the judge in the case, he was going around uploading still more videos advocating the use of cheating software and taunting Epic Games. Epic's lawyers defeated the teen's mother, which, real feather in their cap, I suppose. And so the trial continued.

Until recently, that is, when, as Epic has done in other cases against underage targets of its litigation, the company and the defendant managed to come to a settlement.
Another public official is attempting to make the public records request process even more aggravating and expensive than it already is.

In many cases, the public does what it's allowed to do: request records. And, in many cases, governments refuse to do what they're obligated to do. So, people sue. They dig into their own pockets and force the government to do what it was always supposed to do. And when they do this, the general public digs deep into its own pockets to pay government agencies to argue against the public's interests.

This is diabolical enough. It's also, unfortunately, the standard M.O. for government agencies. Pay-to-play. Every FOIA request is a truth-or-dare game played on a field slanted towards the government, which has unlimited public funds to gamble with.

But when just being dicks about it isn't diabolical enough, government agencies and officials go further. When it's simply not enough to engage in litigation as defendants and argue against accountability and transparency, these entities go on the offensive.

That's right. Government agencies and officials occasionally engage in proactive lawsuits, daring the defendants (i.e., citizens making public records requests) to prove they're entitled to the documents. This shifts the burden away from the government and onto the person with limited funds and almost nonexistent power. It's no different than demanding millions for the production of PDFs. It's an option deployed solely for the purpose of keeping everything under wraps.

The latest participant in the "fuck the public and our obligations as public servants" game is Louisiana's Attorney General.
So we've noted a few times how Elon Musk's Starlink is going to be a great thing for folks stuck out of reach of traditional broadband options. Though with a $600 first-month price tag ($100 monthly bill and $500 hardware charge), it's not some magic bullet for curing the "digital divide." And without the capacity to service more densely populated areas, the service is only going to reach several million rural Americans. That's a good start, but it's only going to make a tiny dent for the 42 million Americans who lack access to any broadband, or the 83 million currently stuck under a broadband monopoly (usually Comcast). Starlink is going to be a good thing, but not transformative or truly disruptive to US telecom monopolies.

There are a few other issues with the tech as well. One is the creation of light pollution that's harming scientific research (which US regulators have absolutely no plan to mitigate). Then there's the fact that Musk's Starlink recently gamed the broken FCC auction process to nab nearly a billion dollars it doesn't really deserve. Consumer group Free Press did a good job breaking down how we're throwing a billion dollars at the second-richest man on the planet via an FCC RDOF auction that's very broken, and by proxy easily exploitable by clever companies:
Last week we wrote about the Indian government threatening to jail Twitter employees after the company reinstated a long list of accounts that the government demanded be blocked (Twitter blocked them for a brief period of time before reinstating them). The accounts included some Indian celebrities and journalists, who were talking about the headline news regarding the farmer protests. The Modi government has proven to be incredibly thin-skinned about negative coverage, and despite Indian protections for free expression, was demanding out-and-out censorship of these accounts. The threats to lock up Twitter employees put the company in an impossible position -- and it has now agreed to geoblock (but not shut down) some accounts, though not those of journalists, activists, and politicians.

The company implies, strongly, that the demands from the Indian government deliberately mixed actual incendiary/dangerous content with mere political criticism of the Modi administration -- and makes it clear that it's willing to take action on "harmful content" or accounts that legitimately violate Twitter's rules. But it will not agree to do so for those whose speech it believes is protected under Indian freedom of expression principles:
It's been clear for some time that the FBI and DOJ's overly dramatic calls for encryption backdoors are unwarranted. Law enforcement still has plenty of options to deal with device encryption and end-to-end encrypted messaging services. Multiple reports have shown encryption is rarely an obstacle to investigations. And for all the noise the FBI has made about its supposedly huge stockpile of locked devices, it still has yet to hand over an accurate count of devices in its possession, more than two years after it discovered it had been using an inflated figure to back its "going dark" hysteria for months.

An ongoing criminal case discussed by Thomas Brewster for Forbes provides more evidence that law enforcement is not only finding ways to bypass device encryption, but also to access the contents of end-to-end encrypted messages. This isn't the indictment of Signal (a popular encrypted messaging service) it first appears to be, though. The access point was the iPhone in law enforcement's possession which, despite still being locked, was subjected to a successful forensic extraction.
The Complete 2020 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux OS using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $59.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Former Senator Orrin Hatch was so anti-technology, and so supportive of the anti-technology recording industry, that former music tech startup entrepreneur and sci-fi author Rob Reid referred to him as "Senator Fido" in his comic novel about the music industry, because Hatch was such a lapdog of the labels that he would willingly slip whatever anti-tech language they wanted into any new regulation. Even outside the world of fiction, Hatch was way out there in his anti-technology ideas. In 2003, when he was Chair of the powerful Judiciary Committee, he floated the idea that copyright holders should invest in malware that would literally destroy the computers of anyone who opened an unauthorized file. The suggestion was so crazy that when an anti-piracy company exec at the hearing pushed back, saying "no one is interested in destroying anyone's computer," Hatch immediately corrected him and said that, yes, indeed, Hatch himself was very interested in that idea:
As the FCC gets closer to restoring net neutrality, a new and bizarre GOP talking point has emerged. It goes something like this: if you're going to restore some modest rules holding telecom monopolies accountable, you just have to dismantle a law that protects free speech on the internet! This of course makes no coherent sense whatsoever, but that's not stopping those looking to demolish Section 230, a law that is integral to protecting speech online.

Take FCC Commissioner Brendan Carr, for example. Despite having a post at the nation's top communications regulator, Carr is literally incapable of even acknowledging that US telecom monopolies exist. Or that said monopolization is directly responsible for the high broadband prices, spotty coverage, terrible customer service, and/or sluggish speeds everybody loathes. His tenure has been spent rubber-stamping every whim of Comcast and AT&T, yet, for no coherent reason whatsoever, he's emerged as a major voice in the conversation about Section 230 and social media.

This week, Carr had this to say at the INCOMPAS policy summit:
There is no shortage of critiques of Valve's online PC game store, Steam. That's to be expected, frankly, given how big the platform is. Still, on the ground with individual gamers, one of the most common complaints you'll hear is that the sheer volume of games on Steam is somewhat paralyzing for customers deciding where to spend their money. Steam tried to combat this for years with its Steam Curators program, where gamers put their trust in curators to pare down game search results. It never really worked, though, as the program ran into the same problem as the store itself: sheer volume, this time of curators.

And so nothing really got solved. Except in China, it seems, where Steam recently launched with a grand total of 53 whole games available to buyers.
Summary: Following on its efforts to tamp down election-related misinformation, Twitter's latest moderation efforts target misleading posts about COVID and the coronavirus, with a specific focus on vaccine-related information.

Months into a global pandemic, there has still been a lack of clear, consistent communication from all levels of government in the United States, which has given conspiracy theorists and anti-vaccination activists plenty of room to ply their dubious trades. Twitter is hoping to reduce exposure to tweets containing misleading information as the nation continues to deal with multiple COVID outbreaks.

Since early in the pandemic, Twitter had been aggressive in moderating misleading content regarding how the virus spreads, unproven remedies and treatments, and other health-related info. Its new policy expands on that, mainly to focus on false information and conspiracy theories regarding vaccines.

Twitter won't be limiting itself to applying warnings to tweets with dubious content. The platform will force users to delete tweets that don't comply with its expanded code of conduct. Added to restrictions on misinformation about the spread of the disease and its morbidity rates are bans on false claims about immunization safety or COVID's dangers.

Decisions for Twitter:
There's been a lot of consternation about online ads, sometimes even for good reason. The problem is that not all of the criticism is sound or well-directed. Worse, the antipathy towards ad tech, well-founded or not, is coalescing into yet more unwise, and undeserved, attacks on Section 230 and other expressive discretion the First Amendment protects. If these attacks are ultimately successful, none of the problems currently lamented will be solved, but lots of new ones will be created.

As always, effectively addressing actual policy challenges first requires a better understanding of what those challenges are. The reality is that there are at least three separate issues raised by online ads: those related to ad content itself, those related to audience targeting, and those related to audience tracking. They all require their own policy responses—and, as it happens, none of those policy responses call for doing anything to change Section 230. In fact, to the extent that Section 230 is even relevant, the best policy response will always require keeping it intact.

With regard to ad content, Section 230 applies, and should apply, to the platforms that run advertiser-supplied ads for the same reasons it applies, and should apply, to platforms hosting the other sorts of content created by users. After all, ad content is, in essence, just another form of user-generated content (in fact, sometimes it's exactly like other forms of user content). And, as such, the principles behind having Section 230 apply to platforms hosting user-generated content in general also apply – and need to apply – here.

For one thing, as with ordinary user-generated content, platforms are not going to be able to police all the ad content that may run on their sites. One important benefit of online advertising versus offline is that it enables far more entities to advertise to far larger audiences than they could afford to reach in the offline space. Online ads may therefore sometimes be cheesy, low-budget affairs, but it's ultimately good for the consumer if it's not just large, well-resourced, corporate entities who get to compete for public attention. We should be wary of implementing any policy that might choke off this commercial diversity.

Of course, the flip side of making it possible for many more actors to supply many more ads is that the supply of online ads is nearly infinite, and thus the volume is simply too great for platforms to scrutinize all of them (or even most of them). Furthermore, even where a platform might be able to examine an ad, it is still unlikely to have the expertise to review it for all possible legal issues that might arise in every jurisdiction where the ad may appear. Section 230 exists in large part to alleviate these impossible content policing burdens and make it possible for platforms to facilitate the appearance of any content at all.

Nevertheless, Section 230 also exists to make it possible for platforms to try to police content anyway, to the extent that they can, by making it clear that they can't be held liable for any of those moderation efforts. And that's important if we want to encourage them to help eliminate ads of poor quality.
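To put the scale problem described above in rough numbers, here's a back-of-envelope sketch. The ad volume and review time are invented purely for illustration; no platform's real figures are implied.

```python
# Back-of-envelope arithmetic for reviewing every ad by hand.
# All numbers here are hypothetical, chosen only to show the scale.
ads_per_day = 5_000_000        # assumed daily ad submissions
review_seconds_per_ad = 30     # assumed time for one careful human review
workday_seconds = 8 * 60 * 60  # one reviewer's eight-hour shift

reviewers_needed = ads_per_day * review_seconds_per_ad / workday_seconds
print(f"Full-time reviewers needed: {reviewers_needed:,.0f}")
# -> Full-time reviewers needed: 5,208
# And that's a single 30-second pass per ad, before checking each one
# against the differing advertising laws of every jurisdiction it may reach.
```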
We want platforms to be able to do the best they can to get rid of dubious ads, and that means we need to make it legally safe for them to try. The more we think they should take these steps, the more we need policy to ensure that it's possible for platforms to respond to this market expectation. And that means we need to hold onto Section 230, because it is what affords them this practical ability.

What's more, Section 230 affords platforms all this critical protection regardless of whether they profit from carrying content or not. The statute does not condition its protection on whether a platform facilitates content in exchange for money, nor is there any sort of constitutional obligation for a platform to provide its services on a charitable basis in order to benefit from the editorial discretion the First Amendment grants it. Sure, some platforms do pointedly host user content for free, but every platform needs some way of keeping the lights on and the servers running. And if the most effective way to keep their services free for some users to post their content is to charge others for theirs, that is an absolutely constitutionally permissible decision for a platform to make.

In fact, it may even be good policy to encourage, as it keeps services available for users who can't afford to pay for access. Charging some users to facilitate their content doesn't inherently make the platform complicit in the ad content's creation, or otherwise responsible for imbuing it with whatever quality is objectionable. Even if an advertiser has paid for algorithmic display priority, Section 230 should still apply, just as it applies to any other algorithmically driven display decision the platform employs.

But on the off chance that the platform did take an active role in creating that objectionable content, Section 230 has never stood in the way of holding the platform responsible. What Section 230 simply says is that making it possible to post unlawful content is not the same as creating content; for the platform to be liable as an "information content provider," aka a content creator, it must have done something significantly more to birth the content's wrongful essence than simply be a vehicle for someone else to express it.

That's true even if the platform allows the advertiser to choose its audience. After all, the content has already been created. Audience targeting is something else entirely, but it's also something we should be wary of impinging upon.

There may, of course, be situations where advertisers try to target certain types of ads (e.g., job or housing offers) in harmful ways. And when they do, it may be appropriate to sanction the advertiser for what may amount to illegally discriminatory behavior. But not every such targeting choice is wrongful; sometimes choosing narrow audiences based on protected status may even be beneficial. But if we change the law to allow platforms to be held equally liable with the advertiser for the advertiser's wrongful targeting choices, we will take away platforms' ability to offer audience targeting for any reason, even good ones, by making it legally unsafe in case an advertiser uses it for bad ones.

Furthermore, doing so will upend all advertising as we've known it, and in a way that's offensive to the First Amendment.
There's a reason that certain things are advertised during prime time, or during sports broadcasts, or on late-night TV, just as there's a reason that ads appearing in the New York Times are not necessarily the same ones running in Field & Stream or Ebony. The Internet didn't suddenly make those choices possible; advertisers have always wanted the most bang for their buck, to reach the people most likely to be their ultimate customers as cost-effectively as possible. And as a result they have always made choices about where to place their ads based on the demographics those ads will likely reach. To now say it should be illegal to allow advertisers ever to make such choices, simply because they may sometimes make these decisions wrongfully, would disrupt decades upon decades of past practice and likely run afoul of the First Amendment, which generally protects the choice of whom to speak to. In fact, it protects that choice regardless of the medium in question, and there is no principled reason why an online platform should be any less protected than a broadcaster or some sort of printed periodical (especially not the former).

Even if it would be better if advertisers weren't so selective—and it's a fair argument to make, and a fair policy to pursue—it's not an outcome we should use the weight of legal liability to try to force. It won't work, and it impinges on important constitutional freedoms we've come to count on. Rather, if there is any affirmative policy response to ad tech that is warranted, it is likely with the third constituent part: audience tracking. But even so, any policy response will still need to be a careful one.

There is nothing new about marketers wanting to fully understand their audiences; they have always tried to track them as well as the technology of the day would allow. What's new is how much better they now can. And the reality is that some of this tracking ability is intrusive and creepy, especially to the degree it happens without the audience being aware of how much of their behavior is being silently learned by strangers. There is room for policy to at minimum encourage, and potentially even require, such systems to be more transparent about how they learn about their audiences and what they tell others they've learned, and to give those audiences a chance to say no to much of it.

But in considering the right regulatory response there are some important caveats. First, take Section 230 off the table. It has nothing to do with this regulatory problem, apart from enabling platforms that may use ad tech to exist at all. You don't fix ad tech by killing the entire Internet; any regulatory solution is only a solution when it targets the actual problem.

Which leads to the next caution, because the regulatory schemes we've seen attempted so far (GDPR, CCPA, Prop. 24) are, even if well-intentioned, clunky, conflicting, and freighted with overhead that compromises their effectiveness and imposes its own unintended and chilling costs, including on expression itself (and on more expression than just that of advertisers).

Still, when people complain about online ads this is frequently the area they are complaining about, and it is worth focused attention to solve.
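To make the tracking mechanics concrete, here is a minimal sketch of the oldest trick in the book: the tracking pixel. It is a toy illustration, not any ad network's actual code; the endpoint, cookie name, and logged fields are all hypothetical.

```python
# A toy "tracking pixel" server: the browser asks for a 1x1 image, and the
# request itself leaks everything a tracker needs. Illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer

# A valid 1x1 transparent GIF: the classic tracking pixel payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"
         b"\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # An ordinary image request carries the page that embedded the pixel,
        # a browser fingerprint, and any cookie set on an earlier visit.
        print("page visited:", self.headers.get("Referer"))
        print("browser:     ", self.headers.get("User-Agent"))
        print("tracking id: ", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        # A long-lived cookie is what turns one sighting into a cross-site
        # behavioral profile, with no visible page element at all.
        self.send_header("Set-Cookie", "uid=abc123; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```

Any page that embeds the image quietly reports each visit to whoever runs the server. That invisibility is precisely what transparency-focused rules would aim at.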
But it is tricky; given how easy it is for all online activity to leave digital footprints, as well as the many reasons we might want to allow those footprints to be measured and then those measurements to be used (even potentially for advertising), care is required to make sure we don't foreclose the good uses while aiming to suppress the bad. But for the right law, one that recognizes and reasonably reacts to the complexity of this policy challenge, there is an opportunity for a constructive regulatory response to this piece of the online ad tech puzzle. There is no quick fix – and ripping apart the Internet by doing anything to Section 230 is certainly not any kind of fix at all – but if something must be done about online advertising, this is the something that's worth the thoughtful policy attention to try to get right.
Remember when America spent a year and a half hyperventilating about a Chinese teen dancing app instead of securing American infrastructure from Russian hackers or other threats? Remember when a bunch of GOP officials with a long track record of not caring whatsoever about consumer privacy or internet security exploited xenophobic fears about the app to land political allies Oracle and Walmart a major windfall? Remember when 90% of the press couldn't be bothered to inform readers this was all performative cronyism by an unqualified nitwit? Good times.

This morning the Wall Street Journal reported that the much-hyped deal to sell ByteDance-owned TikTok to Oracle and Walmart is looking unsurprisingly dead in the wake of previous legal challenges and Trump's election loss. Instead, the government appears poised to do what made sense from the start: focus on the broader problem of lax privacy and dodgy security standards across telecom/adtech/tech, instead of singling out a teen dancing app:
Last week Techdirt wrote about Australia's proposed News Media Bargaining Code. This is much worse than the already awful Article 15 of the EU Copyright Directive (formerly Article 11), which similarly proposes to force Internet companies to pay for the privilege of sending traffic to traditional news sites. A post on Infojustice has a good summary of the ways in which the Australians aim to do more harm to the online world than the Europeans:
The AI and Python Development eBook Bundle has 15 eBooks to help you master artificial intelligence. You'll learn the history of AI and its early applications and move on to learn about AI in the modern world where it's used in everything from neural nets to playing complex board games and more. You'll also learn about Python and TensorFlow. The bundle is on sale for $20.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Cops tend to dislike being recorded. They don't care much for their own recording devices. They routinely disable equipment or conveniently "forget" to activate body cameras.

And they dislike the recording devices everyone carries with them at all times: cellphones. Cellphone ubiquity means it's almost impossible for cops to prevent an incident or interaction from being recorded. Add these devices to the steadily-increasing deployment of internet-connected security cameras and there's really nowhere to hide anymore.

Simply shutting down recordings or arresting citizens for pointing cameras at them is a very risky option. There's tons of case law on the books that says recording public officials is protected First Amendment activity. So, cops are getting creative. Some of the less creative efforts include shining bright flashlights at people holding cameras in hopes of ruining any footage collected. Sometimes officers just stand directly in front of people who are recording to block their view of searches or arrests taking place. Often the excuse is "crowd control," when it's actually just an attempt at narrative control.

Now, here's the latest twist: cops have figured out a way to prevent recordings from being streamed or uploaded to social media services or video platforms like YouTube. Believe it or not, it involves a particularly pernicious abuse of intellectual property protections.
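The reported trick: play copyrighted music during the encounter, so that platforms' automated copyright-matching systems mute, block, or take down the bystander's stream. Here's a toy sketch of that matching logic. It's a hash-overlap stand-in, not YouTube's or Instagram's actual system, which rely on far more robust acoustic fingerprints.

```python
# Toy model of automated audio matching of the kind that gets livestreams
# flagged. Integer "samples" and a hash-overlap score stand in for real
# acoustic fingerprinting, purely for illustration.
from hashlib import sha256

def fingerprint(samples, window=4):
    """Hash short overlapping windows of audio into a set of fingerprints."""
    return {
        sha256(bytes(samples[i:i + window])).hexdigest()
        for i in range(len(samples) - window + 1)
    }

def match_score(stream, reference):
    """Fraction of the reference track's fingerprints heard in the stream."""
    ref = fingerprint(reference)
    return len(fingerprint(stream) & ref) / len(ref)

# A cop playing a pop song near the camera makes the *bystander's* stream
# match the label's reference track, triggering an automatic takedown.
song = [7, 1, 8, 2, 8, 1, 8, 2, 8, 4, 5, 9]   # stand-in "copyrighted" audio
stream = [3, 3] + song + [5, 0]               # protest footage + that song
if match_score(stream, song) > 0.8:           # arbitrary blocking threshold
    print("stream flagged: matched copyrighted audio")
```

The point of the sketch is the asymmetry: the filter only asks whether copyrighted audio is present, not who introduced it or whether the recording itself is protected First Amendment activity.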
Late last year, Verizon announced it would be acquiring Tracfone for around $6.2 billion. As we noted when the deal was first announced, it was yet another example of the "growth for growth's sake" mindset that has long infected US industry, particularly the telecom sector. There are no real benefits to be gleaned from further consolidation in the space (especially in the wake of a T-Mobile/Sprint merger that immediately resulted in layoffs and reduced US wireless competition by around 25%). Yet we adore pretending otherwise as the government rubber-stamps deal after deal.

In a letter (pdf) to the FCC, attorneys general from 16 states and the District of Columbia urged the agency to actually, you know, do its job and ask more questions about the deal. TracFone is among the biggest providers of Lifeline, the FCC program that provides services for about 1.7 million low-income subscribers in 43 states. Verizon is a lumbering media and telecom monopoly that views such programs (and the regulators that oversee them) as largely an irritant. Putting TracFone's contributions at risk during an historic economic and health crisis isn't particularly bright.

As such, the states are wondering if the FCC might take a few moments to make sure the deal doesn't harm those relying on the program:
While we've covered the Internet of Broken Things for some time -- companies failing to secure the internet-connected devices they sell -- the entire genre sort of jumped the shark in October of last year. That's when Qiui, a Chinese company, was found to have sold a penis chastity lock that communicates with an API that was wide open and sans any password protection. The end result: users of a device that locks up their private parts could enjoy those private parts entirely at the pleasure of nefarious third parties. Qiui pushed out a fix to the API... but only for new devices, not existing ones. Why? Well, the company stated that pushing it out to existing devices would again cause them all to lock up, with no override available. Understandably, there wasn't a whole lot of interest in the company's devices at that point.

But fear not, target market for penis chastity locks! Qiui says it's now totally safe to use the product again!
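For anyone wondering what "an API that was wide open" means in practice, here's a minimal sketch of the class of flaw and the minimum fix. The endpoint names, fields, and token scheme are hypothetical; this is not Qiui's actual API.

```python
# Illustration of an unauthenticated device API vs. a token-checked one.
# Everything here is hypothetical -- a sketch of the class of flaw, not
# Qiui's real endpoints.
import secrets

DEVICES = {"device-42": {"locked": True, "owner_token": secrets.token_hex(16)}}

def toggle_open_api(device_id):
    """The broken design: anyone who can guess or enumerate an ID wins."""
    if device_id in DEVICES:
        DEVICES[device_id]["locked"] = not DEVICES[device_id]["locked"]
        return "ok"
    return "unknown device"

def toggle_authenticated(device_id, token):
    """The minimum fix: tie every command to a per-owner secret."""
    device = DEVICES.get(device_id)
    if device is None:
        return "unknown device"
    if not secrets.compare_digest(token, device["owner_token"]):
        return "forbidden"  # a stranger's request goes nowhere
    device["locked"] = not device["locked"]
    return "ok"

# With no authentication, a third party iterating over IDs controls every
# device on the service:
print(toggle_open_api("device-42"))               # "ok" -- yikes
print(toggle_authenticated("device-42", "nope"))  # "forbidden"
```

Note that Qiui's reported problem went a step further: even after designing a fix, the company said it couldn't safely push it to devices already in the field without locking them all up again.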
In a weird bit of performative nonsense, Senators Thom Tillis and Pat Leahy, along with Representatives Hakeem Jeffries and Nancy Mace, have come together to... try to help kids lock up culture under copyright. Specifically, they want a bill that would let participants in the Congressional Art Competition and the Congressional App Competition register a copyright for free. It is not at all clear why this is necessary, other than to perpetuate the myth that you need a copyright to be creative.

First, to be clear, any such unique and original artwork is already covered by copyright. For better or for worse (by which I mean, for worse), the US now says that copyright is automatic from the time the work is "fixed" in a tangible medium (and if you try to point out that computer code is not a tangible medium, it gets them very, very angry, so don't bother...). So no one needs to register their copyright to be protected. Not registering does limit the copyright holder's ability to sue or to get statutory damages. But if anyone creating works for a Congressional Art Competition is seeking to sue others, well, that seems like a bigger problem right there.

But here's the key point: copyright is supposed to be there solely as an incentive for creation. The entire setup and basis for copyright in the Constitution is so that Congress can create incentives to promote the progress of science and the useful arts (and copyright was meant for the "science" part; patents are the "useful arts"). I can pretty much assure you that no one creating artwork or apps for a Congressional competition is doing so because they're incentivized by the copyright. They're doing so because of the competition itself and the desire to express themselves (and maybe get some attention for what they've done).

So encouraging kids to lock these things up is bizarre and counterproductive. More to the point, why aren't these elected officials suggesting that the artists and developers entering these competitions explore the many Creative Commons options to help get their works more widely known?

The answer, tragically, is as obvious as it is cynical. This is all driven by the legacy copyright industries, who keep pushing the myth that copyright = creation. And these are their favorite elected officials. Hollywood backed Tillis strongly in the last election, in which he was expected to lose, so he clearly owes them. Leahy has always been extremely close to Hollywood. Beyond being the Senate supporter of SOPA (his version was PIPA), Leahy is always rewarded by Hollywood with small roles in every Batman film. His daughter is also a Vice President and top lobbyist for the Motion Picture Association, Hollywood's top lobbying body.

On the House side, the legacy copyright industry has been cultivating a close relationship with Jeffries for a while now, including setting up a neat fundraiser in which, if you just pay him (and Jerry Nadler) $5k each, you get to hang out with Jeffries at the Grammys. Nice work if you can get it. Nancy Mace is new to Congress, so she may just be along for the ride here.

The problem with all of this is just how cynically corrupt it seems.
Even if it's only in the form of "soft corruption," watching a few Senators and Representatives push a misguided line of thinking -- one that completely undermines the very basis for copyright law -- in service of a myth peddled by Hollywood and the legacy recording industry just makes everyone respect copyright even less.

This isn't what copyright is for, and it's shameful that these elected officials are pushing the myth forward.
The first batch of decisions about Facebook's content moderation from the recently-established Oversight Board has garnered lots of reactions, including many kneejerk ones — but there's plenty to discuss, so for this week's episode Mike is joined by Harvard Law's Evelyn Douek to talk about the decisions themselves and what they signal about the board as a whole.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Congress is on the brink of destroying the internet as we know it.

Bipartisanship in Congress is rare these days, but odd alliances have formed in the Capitol against Section 230, a law that regulates content moderation online and is in large part responsible for the incredible growth and diversity of the internet. Republicans accuse Facebook and Twitter of censoring conservative users on their platforms. Democrats accuse these companies of not doing enough to remove extremist or false content. While both sides agree that S230 has got to go, they're at war with each other over who will drive regulatory efforts on content moderation. In the end, it won't really matter who wins. Either way, the spoils of this war will be a gutted S230 or its outright repeal. That's bad news for everyone.

Before they ruin the internet entirely, Democrats and Republicans should take a step back and let industry standards catch up with the times.

Removing Section 230 because of actors like Facebook and Twitter would mean harming other websites that haven't done anything wrong, putting innocent companies in the crossfire. On the other hand, too many new restrictions would cripple the competitive edge our tech sector has over the rest of the world. In either case only larger companies like Facebook and Twitter would survive, while small businesses — like a family restaurant in Steubenville, Ohio, whose social media presence is driven entirely by customer reviews — would suffer and likely close.

This doesn't mean that nothing should be done. Something should be done, and soft law is the way.

Soft law is not "law" in the normal sense. It refers to the diverse tools used by private or government bodies to guide how industries should develop. Common examples include industry standards created by public-private partnerships, the LEED rating system of the U.S. Green Building Council, and the COVID treatment guidance from the Centers for Disease Control and Prevention. The uniqueness of soft law is that, instead of coming primarily from government regulators, it can come from anywhere. And instead of focusing on setting strict rules, it focuses on methods of attaining ideal outcomes. This makes it "soft": interpretations of the 'law' will differ between participants, who will not be fined for going their own way. Soft law provides guidance while encouraging innovation in how industry goals are reached. In this way, it beats the rigidity of hard law.

Soft law is already heavily utilized in artificial intelligence and automated vehicles, so legislators, regulators, and private companies advocating for this approach would have a strong precedent to point to as Section 230 talks continue. Moreover, this wouldn't be the first time we tried to regulate the internet with soft law. The early internet was 'regulated' by the Clinton administration through The Framework for Global Electronic Commerce, which established principles for how the federal government would regulate internet activities and how it expected the private sector to act. Most importantly, it stated that "...governments should recognize the unique qualities of the Internet. The genius and explosive success of the Internet can be attributed in part to its decentralized nature and to its tradition of bottom-up governance."

As legislators look to revise regulations on the internet, it's essential they preserve the bottom-up governance that made the internet such an explosive success.
To that end, rather than prescribing a one-size-fits-all set of rules for content moderation, the government should encourage companies to develop their own standards, make those standards publicly accessible, and hold up the companies doing so as models for the industry at large.

A great example of one such model is Facebook's Oversight Board, which recently announced its decisions in the first batch of cases brought against the company. The board, composed of former Prime Ministers, think tank leaders, and legal scholars, deliberated and overturned four of the five moderation decisions it reviewed. Facebook released a statement saying it would abide by the decisions and work to create clearer content moderation policies. Facebook's approach is innovative for tech giants like itself, but smaller companies require different standards for their audiences. Nonprofits like Wikipedia handle this with their own open system that relies on volunteer administrators collaborating on content issues. Smaller companies like AllTrails open moderation to their entire user base, letting users suggest new trail maps and edit current ones based on feedback.

Government needs to understand that what works for Facebook won't work for everyone else, and that targeting Section 230 to fix all content moderation problems is the wrong approach. The key point about Facebook's Oversight Board, Wikipedia's volunteer administrators, and AllTrails' public moderation is that they all accomplish the same goal in very different ways. And that's the essence of soft law. Protected by Section 230, and without an overarching government agency or document requiring them to reach a prescribed standard, companies are free to create innovative methods of content moderation all on their own.

Some argue that self-regulation is a big nothing burger — that it's little more than a facade shielding companies from having to take any real responsibility for content posted on their sites. But that's not true. Leaving content moderation to the companies makes them accountable to the public, and by now we should all know just how powerful public pressure can be. For instance, last June public perception of Facebook's ability to make good content moderation decisions was overwhelmingly negative, with about 80% of people not trusting 'Big Tech,' but trusting the government even less. It's no coincidence that Facebook launched its Oversight Board that summer. Other examples of companies voluntarily imposing standards to meet the public's demand for accountability include Reddit's annual 'Transparency Report,' which lets the public see what content is being removed and why. That report is part of Reddit's interpretation of the Santa Clara Principles, a soft law effort led by the Electronic Frontier Foundation, the ACLU, and several other nonprofits. Following these principles allows the public to hold companies accountable to their own promises, addressing a major issue in customer trust while maintaining the integrity of Section 230.

Section 230 gave entrepreneurs the protection and flexibility to explore new directions in tech, leading to some of the greatest economic and technological advancements in US history.
Instead of gutting a law that made the internet what it is today, regulators should respect soft law alternatives brought by the private sector and encourage companies to find what works, helping the users and businesses that rely on platforms currently protected by Section 230. Innovation is what will win the war of the web. We'll only have a free internet as long as we can keep it.

Luke is an economics graduate student at George Mason University focusing on entrepreneurship, health, and innovative technology. You can follow him on Twitter @LiberLuke.
It has been strange to see people speak about Section 230 and illegal discrimination as if it were somehow a new issue to arise. In fact, one of the seminal court cases that articulated the parameters of Section 230, the Roommates.com case, did so in the context of housing discrimination. It's worth taking a look at what happened in that litigation and how it bears on the current debate.

Roommates.com was (and apparently remains) a specialized platform that does what it says on the tin: allow people to advertise for roommates. Back when the lawsuit began, it allowed people who were posting for roommates to include racial preferences in their ads, and it did so in two ways: (1) through a text box, where people could write anything about the roommate situation they were looking for, and (2) through answers to mandatory questions about roommate preferences.

Roommates.com got sued by the Fair Housing Councils of the San Fernando Valley and San Diego for violating federal (FHA) and state (FEHA) fair housing law by allowing advertisers to express these discriminatory preferences. It pled a Section 230 defense, because the allegedly offending ads were user ads. But, in a notable Ninth Circuit decision, it both won and lost.

In sum, the court found that Section 230 indeed applied to the user expression supplied through the text box. That expression, for better or worse, was entirely created by the user. If something was wrong with it, it was the user who had made it wrongful and the user, as the information content provider, who could be held responsible -- but not, per Section 230, the Roommates.com platform, which was the interactive computer service provider for purposes of the statute and therefore immune from liability for it.

But the mandatory questions were another story. The court was concerned that, if these ads were illegally discriminatory, the platform had been a party to the creation of that illegality by prompting the user to express discriminatory preferences. And so the court found that Section 230 did not provide the platform a defense to any claim predicated on the content elicited by those questions.

Even though it was a split and somewhat messy decision, the Roommates.com case has held up over the years and has given subsequent courts some guidance for figuring out when Section 230 should apply. There are still fights around the edges, but the question has basically boiled down to determining who imbued the content with its allegedly wrongful quality. If the platform did, then it's on the hook as much as the user may be -- but its contribution to the wrongful content's creation has to be more substantive than merely offering the user the opportunity to express something illegal.
CaptionSaver Pro will take care of your notes. It's a Chrome extension that automatically saves Google Meet live captions to Google Drive. Pro comes with features such as highlighting, timestamps, and auto-save to Google Drive to enhance the automated note-taking capabilities, so you can focus your attention on your meetings. It's on sale for $25.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Last week, Senator Amy Klobuchar introduced a major antitrust reform bill, entitled the Competition and Antitrust Law Enforcement Reform Act. This isn't much of a surprise, as Democrats have made it quite clear that they seek to use antitrust much more aggressively than it's been used over the past few decades. I'm a big believer in the need for more competition, in general, but often worry that antitrust is not the best way to get there.

The bill would put more budget and power in the hands of the DOJ and the FTC, change the legal standards for anticompetitive mergers, and shift the burden to merging companies to prove that they are not violating antitrust law -- rather than, as it stands now, having the burden on the DOJ to show that the merger violates the law. Better funding the DOJ and the FTC on competition issues strikes me as a sensible move here (more the FTC than the DOJ, but no need to get that picky). However, a lot of the rest of the bill seems like it could have the opposite of the intended effect.

I get the thinking behind this, but as structured, it looks like it could have significant unintended consequences that actually decrease competition rather than increase it. In a lot of ways, the key thing this bill would do is significantly reduce merger and acquisition activity. It has two main mechanisms that would basically kill a significant number of deals:
After the Trump FCC effectively neutered itself at telecom lobbyists' behest, numerous states jumped in to fill the consumer protection void. California, for example, passed net neutrality rules in 2018 that largely mirrored the FCC's discarded consumer protections. There's a strange contingent of folks who try to claim that because the internet didn't immediately explode in a rainbow of fireworks, the net neutrality repeal must not have been a big deal. But a major reason ISPs didn't behave worse (than they already do) is that they didn't want to violate the new state laws.

That said, they did yeoman's work trying to thwart these state efforts too, including convincing Billy Barr's DOJ to file suit against California to prevent the popular bill from ever becoming law. You know, "states' rights!" and all that.

The DOJ's central argument was that California's attempt to protect consumers was somehow "anti-consumer" and "extreme" (it was neither). The suit leaned on language the FCC included in its repeal (at industry behest) claiming that states couldn't step in and protect consumers in the wake of federal apathy. The courts so far haven't looked too kindly upon that logic, finding that the FCC can't abdicate its authority over telecom consumer protection and then lean on that now-nonexistent authority to tell states what to do.

This week the DOJ's ham-fisted effort to curry favor with US telecom monopolies fell apart completely when the Biden DOJ quietly pulled out of the lawsuit. It was a move quickly applauded by the FCC's new acting boss, Jessica Rosenworcel:
Clearview has screwed with the wrong people. The reprehensible facial recognition AI company that sells access to its database of scraped photos and personal info managed to raise the ire of some of the most restrained and polite people in the world, as Kashmir Hill reports for the New York Times.
For some reason, we, the people, keep having to shell out cash to employ a lot of unreasonable law enforcement officers.

We've already seen some federal courts respond to violent law enforcement responses to the mere presence of journalists and legal observers during protests. The targeting of non-participants by law enforcement has been met with injunctions and harsh words for the officers participating in these attacks.

Much of what's been covered here deals with months of ongoing protests in Portland, Oregon, and violent responses by federal officers. But this appeals court ruling (via Mike Scarcella) shows the problem isn't confined to the Northwest or to federal law enforcement. Cops are attacking journalists in other cities as those journalists try to do nothing more than cover highly newsworthy events.

And the problem isn't new either. This case [PDF], handled by the Eighth Circuit Court of Appeals, deals with an attack on three Al Jazeera reporters covering protests in Ferguson, Missouri following the killing of Michael Brown.

Local law enforcement officers may not have been wearing cameras, but the journalists brought their own, and the events that transpired were captured in the course of their attempted coverage of the Ferguson protests. Fortunately, this footage exists, because the version of events offered by the sued deputy is a lie. Here's what was captured by Al Jazeera's cameras:
As we've been noting in posts throughout the day, today is the day that, 25 years ago, then-President Bill Clinton signed into law the Telecommunications Act of 1996. That large telco bill included, among many other things, the Communications Decency Act, a dangerous censorial bill written by Senator James Exon. However, buried in the CDA was a separate bill, written by now-Senator Ron Wyden and then-Representative Chris Cox, the Internet Freedom and Family Empowerment Act, which today is generally known as Section 230 of the CDA. A legal challenge later tossed out all of Exon's bill as blatantly unconstitutional.

However, on the day of the signing, most of the internet activist space wasn't even thinking about Section 230. They were greatly concerned by Exon's parts of the CDA and some other provisions in the Telecommunications Act that they feared could cause more harm than good. This inspired John Perry Barlow to write his now-famous Declaration of the Independence of Cyberspace, which was also released 25 years ago today. It's worth reading and reflecting on 25 years later:
Never forget: the IoT device you invite into your home may become the state's witness. That's one of the unfortunate conclusions that can be drawn from Amazon's latest transparency report.

Amazon has its own digital assistant, Alexa. On top of that, it has its acquisitions. One of its more notable gets is Ring. Ring is most famous for its doorbells -- something that seems innocuous until you examine the attached camera and the company's 2,000 partnerships with law enforcement agencies.

Ring is in the business of selling cameras. That the doorbell may alert you to people on your doorstep is incidental. Cameras on the inside. Cameras on the outside. All in the name of "security." And it's only as secure as the people pitching them to consumers. Ring's lax security efforts have led to harassment and swatting, the latter of which tends to end up with people dead.

Malicious dipshits have been using credentials harvested from multitudinous breaches to harass people with Ring cameras. The worst of these involve false reports to law enforcement about activity requiring armed response. That no one has ended up dead is a miracle, rather than an indicator of law enforcement restraint.

Ring wants you to hand over footage to law enforcement agencies. That's why it partners with agencies to hand out cameras for free and instructs officers how to obtain footage without a warrant. That's also why it stays ahead in the PR game, handling press releases and public statements it feels law enforcement officials are too clumsy to handle on their own.

And gather footage law enforcement does, as Zack Whittaker reports for TechCrunch. Omnipresent IoT devices give law enforcement plenty of recordings and other information -- with or without the consent of device owners and with or without the warrants they would normally need.
I know that Section 230 is very much under attack these days, and I've seen so many people cheer when we point out that dumping 230 could take away (or at least drastically alter) the sites we love and appreciate every day. I think this is because of a natural tendency many people have to focus on the negative side of anything in existence, and to ignore all of the good that has resulted from it. In some ways, I think it's a variation on the famous Douglas Adams quote:
With both Twitter and Facebook banning Donald Trump's account last month, after he inspired a mob of goons to ransack the Capitol, there has been something of an eerie quiet in the world. After years of him making sure that every one of his often disconnected-from-reality tweets made headlines or ruined many people's days, the sudden quiet has been kind of odd.

Many people have wondered why he hasn't gone elsewhere. While Parler is still down (but expected to return soon), many were surprised that Trump never used it, since his base basically adopted it as their own. However, late last week it was revealed that the Trump Organization had been negotiating with Parler (while Trump was still President) for him to take a huge equity ownership stake in the company before joining the platform. For whatever reason, the agreement did not materialize.

However, a Daily Beast article, mostly about Trump's views on Liz Cheney, drops a little hint about how Trump has been dealing with his inability to tweet: he's writing out what he would have tweeted on paper and hoping someone else will tweet it for him:
The All-In-One 2021 Super-Sized Ethical Hacking Bundle has 18 courses to help you learn all you need to know about cybersecurity. You'll learn about ethical hacking, pen testing, securing wireless networks, bug bounties, and more. Courses cover Python, Metasploit, Burp, BitNinja, and others. It's on sale for $43.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Get your tickets for Section 230 Matters before February 23rd »

Twenty-five years ago today, then-President Bill Clinton signed the 1996 Telecommunications Act into law. There was a lot in it, including the Communications Decency Act. And, buried within the Communications Decency Act was a part that was originally the Internet Freedom and Family Empowerment Act, written by then-Representatives Chris Cox and Ron Wyden, but which is now generally known as Section 230. The rest of the CDA was tossed out as unconstitutional in an important early judicial review of internet regulations, but Section 230 survived. That means we've now made it 25 years with Section 230 and its key "26 words" helping to protect and enable an open internet. For reasons that don't fully make sense, Section 230 is now under assault from both major political parties (though often for diametrically opposed reasons!).

However, while we still have it, we thought it would be nice to throw Section 230 (and the open internet) a 25th birthday party -- and to have both Senator Ron Wyden and Chris Cox come talk about Section 230, its past and (hopefully) its future. So on Tuesday, Feb. 23rd at 12:30pm PT / 3:30pm ET we're hosting Section 230 Matters, which is both a celebration of Section 230 and a fundraiser for Techdirt, so that we can continue to report on Section 230, free speech, the open internet, and more. While the event is, of course, virtual, we're using a wonderful platform called Remo that simulates the experience of actually attending an event: you get to sit at a "table" and talk with the other people at your table, and can move around to talk and network with other attendees.

The event will consist of some open networking and conversation, some table discussions about 230, and the main presentation, in which I'll moderate a conversation with both Cox and Wyden. It should be a really fun time, a chance to celebrate the open internet, and a chance to help support Techdirt and allow us to keep doing what we do.

Get your tickets for Section 230 Matters before February 23rd »
First there was the Securus and LocationSmart scandal, which showcased how cellular carriers and data brokers buy and sell your daily movement data with only a fleeting effort to ensure all of the subsequent buyers and sellers of that data adhere to basic privacy and security standards. Then there was the blockbuster report by Motherboard showing how this data routinely ends up in the hands of everyone from bail bondsmen to stalkers, again, with only a fleeting effort made to ensure the data itself is used ethically and responsibly.

Throughout it all, government has refused to lift a finger to address the problem, presumably because lobbyists don't want government upsetting the profitable apple cart, government is too busy freely buying access to this data itself, or too many folks still labor under the illusion that this sort of widespread dysfunction will be fixed by utterly unaccountable telecom or adtech markets.

Enter the New York Times, which in late 2019 got hold of a massive location data set from a broker, highlighting the scope of our lax location data standards (and the fact that "anonymized" data is usually anything but). This week, the paper did another deep dive into the location data collected from rioting MAGA insurrectionists at the Capitol. It's a worthwhile read, and it illustrates all the same lessons, including, once again, that "anonymized" data isn't a real thing:
This week, our first place winner on the insightful side is an anonymous commenter pointing out how, in criticisms of online speech and demands for regulation, people often forget that some of the things they complain about, like "influencing elections", are exactly what all speech is for:
View all of this year's entries on itch.io »

Our public domain game jam, Gaming Like It's 1925, has come to a close, and the entries are now being reviewed by our amazing panel of judges. They need a bit of time to work through all the games, but while you wait, you can check out the entries for yourself.

We got more entries than last year, though there are a couple that don't quite qualify for the jam because they aren't clearly based on 1925 works. The designers had lots of clever and creative ideas this year, and some of the games are nicely polished. As expected, we got lots of entries based on The Great Gatsby, but plenty of designers also explored other corners of the public domain and built games based on 1925 art, poetry, film, and music. All the games are either playable in the browser or downloadable as PDFs and other game materials, and you can dig through them all over on the game jam page.

Once again, a big thanks to all the designers who submitted games this year, and to all our judges who are reviewing the entries and selecting winners in six categories, which we'll announce later this month. And if you didn't manage to get an entry in this year, it's never too early to start looking into works that will enter the public domain in 2022, when we'll be back with Gaming Like It's 1926!
There's a rule in IT: don't test on live systems in production. There's debate over this, of course, but the general idea is that testing on a live system is a great way to screw up the live system itself, rather than some test environment. The more important the system, the more true that mantra becomes.

Which brings us to the Texas Amber Alert system. See, Texans subscribed to get Amber Alerts via email got one last week that seemed a little... off.

First... terrifying. As someone who absolutely hates horror movies because I'm a big scared wimp, getting this alert is pure nightmare fuel. But it's also sort of funny, except that this kind of testing on the live Amber Alert system is pretty dumb. The whole thing apparently happened because a test run on the system accidentally got sent out to email subscribers. Give the folks responsible high marks for going into detail on the joke, though.
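The standard guardrail against this class of mistake is to make the sending path itself refuse to touch real recipients unless it's explicitly told it's running in production. A minimal sketch of that idea, in Python -- purely illustrative, with hypothetical names, since we have no visibility into how the Texas system actually works:

```python
import os

def send_alert(message, subscribers):
    # Unless we're explicitly in production, reroute everything to a
    # sandbox inbox so tests can never reach the real subscriber list.
    if os.environ.get("ALERT_ENV") != "production":
        subscribers = ["amber-alert-sandbox@example.com"]  # hypothetical test inbox
        message = "[TEST] " + message
    for address in subscribers:
        # Stand-in for the real delivery mechanism (SMTP, alert gateway, etc.)
        print(f"sending to {address}: {message}")

# A test run is harmless by default; someone has to set ALERT_ENV=production
# before a message can go anywhere real.
send_alert("Creepy test alert", ["everyone@real-subscriber-list.example"])
```

The design point is that safety becomes the default: forgetting to configure something results in a test email to a sandbox, not nightmare fuel blasted to every subscriber in Texas.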
Summary: Dealing with content moderation during real-time chats always presents an interesting challenge. Whether it's being able to police language in real time, or dealing with trolling and harassment, chat has always been one of the most difficult content moderation challenges, going back to its earliest days.

In 2016, Twitch decided to enable a new feature for its users: an "emote-only" mode for the chat. Emotes, on Twitch, are basically a custom set of what are more traditionally called emoji on most other websites/platforms. With Twitch, though, they are almost entirely custom, and users at certain levels are able to add their own.

Emote-only is one of a bunch of different modes and features that Twitch streamers can use to try to tame their chat. Twitch itself suggests using this as a way to stop harassment in the chats.

Turning the feature on and off is a choice for the streamer themselves, rather than Twitch. It's just one of a few tools that Twitch users can enable to deal with potentially harassing behavior in the chat alongside their streams.

Decisions for Twitch:
Just because Senators Warner, Hirono, and Klobuchar are apparently oblivious to how their SAFE TECH bill would destroy the Internet doesn't mean everyone else should ignore how it does. These are Senators drafting legislation, and they should understand the effect the words they employ will have.

Mike has already summarized much of the awfulness they propose, and why it is so awful, but it's worth taking a closer look at some of the individually odious provisions. This post focuses in particular on how their bill obliterates the entire Internet economy.

In sum, and without exaggeration: this bill would require that every Internet service be a self-funded, charitable venture, always offered for free. The offending language is here: