On Monday morning, Protocol hosted an interesting discussion on Reimagining Section 230 with two of its reporters, Emily Birnbaum and Issie Lapowsky. It started with those two reporters interviewing Senator Mark Warner about his SAFE TECH Act, which I've explained is one of the worst 230 bills I've seen and would effectively end the open internet. For what it's worth, since posting that I've heard from a few people that Senator Warner's staffers are now completely making up lies about me to discredit my analysis, while refusing to engage on the substance, so that's nice. Either way, I was curious to see what Warner had to say.

The Warner section begins at 12 minutes into the video if you want to just watch that part, and it's... weird. It's hard to watch this and not come to the conclusion that Senator Warner doesn't understand what he's talking about. At all. It's clear that some people have told him about two cases in which he disagrees with the outcome (Grindr and Armslist), but that no one has bothered to explain to him the specifics of either of those cases, or what his law would actually do. He also doesn't seem to understand how 230 works now, or how various internet websites actually handle content moderation. It starts out with him (clearly reading off a talking point list put in front of him) claiming that Section 230 has "turned into a get out of jail free card for large online providers to do nothing for foreseeable, obvious and repeated misuse of their platform."

Um. Who is he talking about? There are, certainly, a few smaller platforms -- notably Gab and Parler -- that have chosen to do little. But the "large online platforms" -- namely Facebook, Twitter, and YouTube -- all have huge trust & safety efforts to deal with very difficult questions. Not a single one of them is doing "nothing." Each of them has struggled, obviously, in figuring out what to do, but it's not because of Section 230 giving them a "get out of jail free card."
It's because they -- unlike Senator Warner, apparently -- recognize that every decision has tradeoffs and consequences and error bars. And if you're too aggressive in one area, it comes back to bite you somewhere else.

One of the key points that many of us have tried to raise over the years is that any regulation in this area should be humble in recognizing that we're asking private companies to solve big societal problems that governments have spent centuries trying, and failing, to solve. Yet Warner just goes on the attack -- as if Facebook is magically why bad stuff happens online.

Warner claims -- falsely -- that his bill would not restrict anyone's free speech rights. Warner argues that Section 230 protects scammers, but that's... not true? Scammers still remain liable for any scam. Also, I'm not even sure what he's talking about, because he says he wants to stop scamming by advertisers. Again, scamming by advertisers is already illegal. He says he doesn't want the violation of civil rights laws -- but, again, that's already illegal for those doing the discriminating. The whole point of 230 is to put the liability on the actual responsible party. Then he says that we need to change Section 230 to correct the flaws of the Grindr ruling -- but it sounds like Warner doesn't even understand what happened in that case.

His entire explanation is a mess, which also explains why his bill is a mess. Birnbaum asks Warner who from the internet companies he consulted with in crafting the bill. This is actually a really important question -- because when Warner released the bill, he said that it was developed with the help of civil rights groups, but never mentioned anyone with any actual expertise or knowledge about content moderation, and that shows in the clueless way the bill is crafted. Warner's answer is... not encouraging. He says he talked with Facebook and Google's policy people.
And that's a problem, because as we recently described, the internet is way more than Facebook and Google. Indeed, this bill would help Facebook and Google by basically making it close to impossible for new competitors to exist, while leaving the market to those two. Perhaps the worst way to get an idea of what any 230 proposal would do is to only talk to Facebook and Google.

Thankfully, Birnbaum immediately pushed back on that point, saying that many critics have noted that smaller platforms would inevitably be harmed by Warner's bill, and asking if Warner had spoken to any of these smaller platforms. His answer is revealing. And not in a good way. First, he ignores Birnbaum's question, and then claims that when Section 230 was written it was designed to protect startups, and that now it's being "abused" by big companies. This is false. And Section 230's authors have said this is false (and one of them is a colleague of Warner's in the Senate, so it's ridiculous that he's flat out misrepresenting things here). Section 230 was passed to protect Prodigy -- which was a service owned by IBM and Sears. Neither of those were startups.
What do you get when you cross a whiteboard and a notebook? Wipebook’s technology transforms conventional paper into reusable and erasable surfaces. It has 10 double-sided pages, or 20 surfaces: 10 graph and 10 ruled. It's the perfect tool for thinkers, doers, and problem solvers. Use the Mini Wipebook to work things out, save to the cloud, and wipe old sketches completely clean. The Wipebook Scan App saves your work and uploads it to your favorite cloud services like Google Drive, Evernote, Dropbox, and OneDrive. This 2-pack is on sale for $52.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The CEOs of Facebook, Google, and Twitter will once again testify before Congress this Thursday, this time on disinformation. Here’s what I hope they will say:

Thank you Mister Chairman and Madam Ranking Member.

While no honest CEO would ever say that he or she enjoys testifying before Congress, I recognize that hearings like this play an important role -- in holding us accountable, illuminating our blind spots, and increasing public understanding of our work.

Some policymakers accuse us of asserting too much editorial control and removing too much content. Others say that we don’t remove enough incendiary content. Our platforms see millions of user-generated posts every day -- on a global scale -- but questions at these hearings often focus on how one of our thousands of employees handled a single individual post.

As a company we could surely do a better job of explaining -- privately and publicly -- our calls in controversial cases. Because it’s sometimes difficult to explain in time-limited hearing answers the reasons behind individual content decisions, we will soon launch a new public website that will explain in detail our decisions on cases in which there is considerable public interest. Today, I’ll focus my remarks on how we view content moderation generally.

Not “neutral”

In past hearings, I and my CEO counterparts have adopted an approach of highlighting our companies’ economic and social impact, answering questions deferentially, and promising to answer detailed follow up questions in writing. While this approach maximizes comity, I’ve come to believe that it can sometimes leave a false impression of how we operate.

So today I’d like to take a new approach: leveling with you.

In particular, in the past I have told you that our service is “neutral.” My intent was to convey that we don’t pick political sides, or allow commercial influence over our editorial content.

But I’ve come to believe that characterizing our service as “neutral” was a mistake.
We are not a purely neutral speech platform, and virtually no user-generated-content service is.

Our philosophy

In general, we start with a Western, small-d democratic approach of allowing a broad range of human expression and views. From there, our products reflect our subjective -- but scientifically informed -- judgments about what information and speech our users will find most relevant, most delightful, most topical, or of the highest quality.

We aspire for our services to be utilized by billions of people around the globe, and we don’t ever relish limiting anyone’s speech. And while we generally reflect an American free speech norm, we recognize that norm is not shared by much of the world -- so we must abide by more restrictive speech laws in many countries where we operate.

Even within the United States, however, we choose to forbid certain types of speech which are legal, but which we have chosen to keep off our service: incitements to violence, hate speech, Holocaust denial, and adult pornography, just to name a few.

We make these decisions based not on the law, but on what kind of service we want to be for our users.

While some people claim to want “neutral” online speech platforms, we have seen that services with little or no content moderation whatsoever -- such as Gab and Parler -- become dominated by trolling, obscenities, and conspiracy theories. Most consumers reject this chaotic, noisy mess.

In contrast, we believe that millions of people use our service because they value our approach of airing a variety of views, but avoiding an “anything goes” cesspool.

We realize that some people won’t like our rules, and go elsewhere. I’m glad that consumers have choices like Gab and Parler, and that the open Internet makes them possible.
But we want our service to be something different: a pleasant experience for the widest possible audience.

Complicated info landscape means tough calls

When we first started our service decades ago, content moderation was a much less fractious topic. Today, we face a more complicated speech and information environment including foreign propaganda, bots, disinformation, misinformation, conspiracy theories, deepfakes, distrust of institutions, and a fractured media landscape. It challenges all of us who are in the information business.

All user-generated content services are grappling with new challenges to our default of allowing most speech. For example, we have recently chosen to take a more aggressive posture toward election- and vaccine-related disinformation because those of us who run our company ultimately don’t feel comfortable with our platform being an instrument to undermine democracy or public health.

As much as we aim to create consistent rules and policies, many of the most difficult content questions we face are ones we’ve never seen before, or involve elected officials -- so the questions often end up on my desk as CEO.

Despite the popularity of our services, I recognize that I’m not a democratically elected policymaker. I’m a leader of a private enterprise. None of us company leaders takes pleasure in making speech decisions that inevitably upset some portion of our user base -- or world leaders. We may make the wrong call.

But our desire to make our platform a positive experience for millions of people sometimes demands that we make difficult decisions to limit or block certain types of controversial (but legal) content. The First Amendment prevents the government from making those extra-legal speech decisions for us.
So it’s appropriate that I make these tough calls, because each decision reflects and shapes what kind of service we want to be for our users.

Long-term experience over short-term traffic

Some of our critics assert that we are driven solely by “engagement metrics” or “monetizing outrage” like heated political speech.

While we use our editorial judgment to deliver what we hope are joyful experiences to our users, it would be foolish for us to be ruled by weekly engagement metrics. If platforms like ours prioritized quick-hit, sugar-high content that polarizes our users, it might drive short term usage but it would destroy people’s long-term trust and desire to return to our service. People would give up on our service if it’s not making them happy.

We believe that most consumers want user-generated-content services like ours to maintain some degree of editorial control. But we also believe that as you move further down the Internet “stack” -- from applications like ours toward app stores, then cloud hosting, then DNS providers, and finally ISPs -- most people support a norm of progressively less content moderation at each layer.

In other words, our users may not want to see controversial speech on our service -- but they don’t necessarily support disappearing it from the Internet altogether.

I fully understand that not everyone will agree with our content policies, and that some people feel disrespected by our decisions. I empathize with those that feel overlooked or discriminated against, and I am glad that the open Internet allows people to seek out alternatives to our service. But that doesn’t mean that the US government can or should deny our company’s freedom to moderate our own services.

First Amendment and CDA 230

Some have suggested that social media sites are the “new public square” and that services should be forbidden by the government from blocking anyone’s speech.
But such a rule would violate our company’s own First Amendment rights of editorial judgment within our services. Our legal freedom to prioritize certain content is no different than that of the New York Times or Breitbart.

Some critics attack Section 230 of the Communications Decency Act as a “giveaway” to tech companies, but their real beef is with the First Amendment.

Others allege that Section 230’s liability protections are conditioned on our service following a false standard of political “neutrality.” But Section 230 doesn’t require this, and in fact it incentivizes platforms like ours to moderate inappropriate content.

Section 230 is primarily a legal routing mechanism for defamation claims -- making the speaker responsible, not the platform. Holding speakers directly accountable for their own defamatory speech ultimately helps encourage their own personal responsibility for a healthier Internet.

For example, if car rental companies always paid for their renters’ red light tickets instead of making the renter pay, all renters would keep running red lights. Direct consequences improve behavior.

If Section 230 were revoked, our defamation liability exposure would likely require us to be much more conservative about who and what types of content we allowed to post on our services. This would likely inhibit a much broader range of potentially “controversial” speech, but more importantly would impose disproportionate legal and compliance burdens on much smaller platforms.

Operating responsibly -- and humbly

We’re aware of the privileged position our service occupies. We aim to use our influence for good, and to act responsibly in the best interests of society and our users. But we screw up sometimes, we have blind spots, and our services, like all tools, get misused by a very small slice of our users.
Our service is run by human beings, and we ask for grace as we remedy our mistakes.

We value the public’s feedback on our content policies, especially from those whose life experiences differ from those of our employees. We listen. Some people call this “working the refs,” but if done respectfully I think it can be healthy, constructive, and enlightening.

By the same token, we have a responsibility to our millions of users to make our service the kind of positive experience they want to return to again and again. That means utilizing our own constitutional freedom to make editorial judgments. I respect that some will disagree with our judgments, just as I hope you will respect our goal of creating a service that millions of people enjoy.

Thank you for the opportunity to appear here today.

Adam Kovacevich is a former public policy executive for Google and Lime, former Democratic congressional and campaign aide, and a longtime tech policy strategist based in Washington, DC.
On the one hand, you have a wireless industry falsely claiming that 5G is a near mystical revolution in communications, something that's never been true (especially in the US). Then on the other hand you have oodles of internet crackpots who think 5G is causing COVID or killing people on the daily, something that has also never been true. In reality, most claims of 5G health harms are based on a false 20-year-old graph, and an overwhelming majority of scientists have made it clear that 5G is not killing you (in fact several incarnations are less powerful than 4G).

Last week, more evidence emerged indicating that no, 5G isn't killing you. Researchers from the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) and the Swinburne University of Technology in Australia released studies last week in the Journal of Exposure Science and Environmental Epidemiology. The studies are among the first to look exclusively at 5G, and the only people who'll be surprised by their findings get all of their news from email forwards and YouTube. From an ARPANSA press statement on its first study's findings:
Remember Sharyl Attkisson? If not, she is a former CNN and CBS journalist who made something of a name for herself both for her often-critical reporting on the Obama administration and for accusing that same administration of hacking into her computer and home network. Whatever you think of her reporting, her lawsuit against Eric Holder and the Justice Department over the hacking claims was crazy-pants. Essentially, she took a bunch of the same technological glitches all of us deal with on a daily basis -- flickering television screens, a stuck backspace key on her computer -- and wove that into a giant conspiracy against her and her reporting. She made a big deal in the suit, and her subsequent book on the matter, over some "computer experts" she relied on to confirm that she was a victim of government hacking, except those experts remained largely anonymous and were even, in some cases, third party people she'd never met. For that and other reasons related to how quickly she managed to do initial discovery, the case was tossed by the courts in 2019.

That didn't stop Attkisson's crusade against the government, however. In 2020, she filed suit against Rod Rosenstein, again accusing the government of spying on her and her family. To back this up, she again relied on an anonymous source, but that source has since been revealed. And, well...
Legal scholarship sucks. It’s interminably long. It’s relentlessly boring. And it’s confusingly esoteric. But the worst thing about legal scholarship is the footnotes. Every sentence gets one. Banal statement of historical fact? Footnote. Recitation of hornbook law? Footnote. General observation about scholarly consensus? Footnote. Original observation? Footnote as well, I guess.

It’s a mess. In theory, legal scholarship should be free as a bird. After all, it’s one of the only academic disciplines to have avoided peer review. But in practice, it’s every bit as formalistic as any other academic discipline, just in a slightly different way. You can check out of Hotel Academia, but you can’t leave.

Most academic disciplines use peer review to evaluate the quality of articles submitted for publication. In a nutshell, anonymous scholars working in the same area read the article and decide whether it’s good enough to publish. Sounds great, except for the fact that the people reviewing an article have a slew of perverse incentives. After all, what if the article makes arguments you dislike? Even worse, what if it criticizes you? And if you are going to recommend publication, why not insist on citations to your own work? After all, it’s obviously relevant and important.

But the problems with peer review run even deeper. For better or worse, it does a pretty good job of ensuring that articles don’t jump the shark and that they conform to the conventional wisdom of the discipline. Of course, conformity can be a virtue. But it can also help camouflage flaws. Peer review is good at catching outliers, but not so good at catching liars. As documented by websites like Retraction Watch, plenty of scholars have sailed through the peer review process by just fabricating data to support appealing conclusions. Diederik Stapel, eat your heart out!

Anyway, legal scholarship is an outlier, because there’s no peer review. Of course, it still has gatekeepers.
But unusually, the people deciding which articles to publish are students, not professors. Why? Historical accident. Law was a profession long before it became an academic discipline, and law schools are a relatively recent invention. Law students invented the law review in the late 19th century, and legal scholars just ran with it.

Asking law students to evaluate the quality of legal scholarship and decide what to publish isn’t ideal. They don’t know anything about legal scholarship. They don’t even know all that much about the law yet. But they aren’t stupid! After all, they’re in law school. So they rely on heuristics to help them decide what to publish. One important heuristic is prestige. The more impressive the author’s credentials, the more promising the article. Or at least, chasing prestige is always a safe choice, a lesson well-observed by many practicing lawyers as well.

Another key heuristic is footnotes. Indeed, footnotes are almost the raison d’être of legal scholarship. An article with no footnotes is a non-starter. An article with only a few footnotes is suspect. But an article with a whole slew of footnotes is enticing, especially if they’re already properly Bluebooked. After all, much of the labor of the law review editor is checking footnotes, correcting footnotes, adding footnotes, and adding to footnotes. So many footnotes!

Most law review articles have hundreds of footnotes. Indeed, the footnotes often overwhelm the text. It’s not uncommon for law review articles to have entire pages that consist of nothing but a footnote.

It’s a struggle. Footnotes can be immensely helpful. They bolster the author’s credibility by signaling expertise and point readers to useful sources of additional information. What’s more, they implicitly endorse the scholarship they cite and elevate the profile of its author. Every citation matters; every citation is good. But how to know what to cite? And even more vexing, how to know when a citation is missing?
So much scholarship gets published, it’s impossible to read it all, let alone remember what you’ve read. It’s easy to miss or forget something relevant and important. Legal scholars tend to cite anything that comes to mind and hope for the best.

There’s gotta be a better way. Thankfully, in 2020, Rob Anderson and Trent Wenzel created ScholarSift, a computer program that uses machine learning to analyze legal scholarship and identify the most relevant articles. Anderson is a law professor at Pepperdine University Caruso School of Law and Wenzel is a software developer. They teamed up to produce a platform intended to make legal scholarship more efficient. Essentially, ScholarSift tells authors which articles they should be citing, and tells editors whether an article is novel.

It works really well. As far as I can tell, ScholarSift is kind of like Turnitin in reverse. It compares the text of a law review article to a huge database of law review articles and tells you which ones are similar. Unsurprisingly, it turns out that machine learning is really good at identifying relevant scholarship. And ScholarSift seems to do a better job at identifying relevant scholarship than pricey legacy platforms like Westlaw and Lexis.

One of the many cool things about ScholarSift is its potential to make legal scholarship more equitable. In legal scholarship, as everywhere, fame begets fame. All too often, fame means the usual suspects get all the attention, and it’s a struggle for marginalized scholars to get the attention they deserve. Unlike other kinds of machine learning programs, which seem almost designed to reinforce unfortunate prejudices, ScholarSift seems to do the opposite, highlighting authors who might otherwise be overlooked. That’s important and valuable.
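ScholarSift's actual internals aren't public, so this is only a sketch of the general technique that "compare one article to a corpus and rank by similarity" tools typically build on: TF-IDF weighting plus cosine similarity. All function names and the toy "articles" below are invented for illustration; a real system would work over full texts and a far larger corpus.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of tokenized documents."""
    n = len(docs)
    df = Counter()  # document frequency: how many docs contain each term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # Weight each term by its frequency in the doc and its rarity in the corpus.
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(query, corpus):
    """Rank corpus document indices by TF-IDF cosine similarity to the query."""
    vecs = tfidf_vectors([query] + corpus)
    q, rest = vecs[0], vecs[1:]
    return sorted(range(len(corpus)), key=lambda i: cosine(q, rest[i]), reverse=True)

# Hypothetical toy "articles" as token lists, purely for demonstration.
articles = [
    "copyright fair use transformative work".split(),
    "patent claim construction infringement".split(),
    "fair use parody copyright defense".split(),
]
draft = "copyright fair use defense".split()
print(most_similar(draft, articles))  # → [2, 0, 1]: article 2 shares the most weighted terms
```

The ranking favors article 2 because it shares the rarer term "defense" with the draft, which TF-IDF weights more heavily than terms common across the corpus, which is roughly the intuition behind surfacing the most relevant (not merely the most famous) prior work.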
I think Anderson and Wenzel are on to something, and I agree that ScholarSift could improve citation practices in legal scholarship. But I also wonder whether the implications of ScholarSift are even more radical than they imagine.

The primary point of footnotes is to identify relevant sources that readers will find helpful. That’s great. And yet, it can also be too much of a good thing. Often, people would rather just read the article and ignore the sources, which can become distracting, even overwhelming. Anderson and Wenzel argue that ScholarSift can tell authors which articles to cite. I wonder whether it couldn’t also make citations pointless. After all, readers can use ScholarSift just as well as authors can.

Maybe ScholarSift could free legal scholarship from the burden of oppressive footnotes. Why bother including a litany of relevant sources when a computer program can generate it automatically? Maybe legal scholarship could adopt a new norm in which authors only cite works a computer wouldn’t flag as relevant. Apparently, it’s still possible. I recently published an essay titled “Deodand.” I’m told that ScholarSift generated no suggestions about what it should cite. But I still thought of some. The citation is dead; long live the citation.

Brian L. Frye is Spears-Gilbert Professor of Law at the University of Kentucky College of Law.
The CBP loves its drones. It can't say why. I mean, it may lend them out to whoever comes asking for one, but there's very little data linking hundreds of drone flights to better border security. Even the DHS called the CBP's drone program an insecure mess -- one made worse by the CBP's lenient lending policies, which allowed its drones to stray far from the borders to provide dubious assistance to local law enforcement agencies.

The CBP's thirst for drones -- with or without border security gains -- is unslakeable. Thomas Brewster reports for Forbes that the agency is very much still in the drone business. It may no longer be using Defense Department surplus to fail at doing its job, but it's still willing to spend taxpayer money to achieve negligible gains in border security. And if the new capabilities present new constitutional issues, oh well.
Cops lie.

This is undeniable. But why do cops lie? There seems to be little reason for it. Qualified immunity protects them against all but their most egregious rights violations. Internal investigations routinely clear them for all but their most egregious acts of misconduct. And police union contracts make it almost impossible to fire bad cops, no matter what they've done.

So, why do they lie? If I had to guess, it's because they've been granted so much deference by those adjudicating their behavior that "my word against theirs" has pretty much become the standard for legal proceedings. If a cop can push a narrative without more pushback than the opposing party's sworn statements, the cop is probably going to win.

This reliance on unreliable narrators has been threatened by the ubiquity of recording devices. Some devices -- body cameras, dashcams -- are owned by cops. And, no surprise, they often "fail" to activate these devices when some shady shit is going down.

But there are tons of cameras cops don't control. Every smartphone has a camera. And nearly every person encountering cops has a smartphone. Then there's the plethora of home security cameras whose price point has dropped so precipitously they're now considered as accessible as tap water.

The cops can control their own footage. And they do. But they can't control everyone else's. And that's where they slip up. A narrative is only as good as its supporting evidence. Cops refuse to bring their own, especially when it contradicts their narrative. But they can't stop citizens from recording their actions. This is a fact that has yet to achieve critical mass in the law enforcement community. A cop's word is only as good as its supporting facts. Going to court with alternative facts -- especially ones contradicted by nearby recording devices -- is a bad idea. (h/t TheUrbanDragon)

But that still doesn't stop cops from lying to courts.
Cops in Lake Wales, Florida tried to claim a driver attacked them during a traffic stop -- something that could have resulted in a conviction on multiple felony charges. But footage obtained from a home security camera across the street from the traffic stop undermined the officers' sworn statements:
As you'll likely recall, at the very end of last year, Senator Thom Tillis, the head of the intellectual property subcommittee in the Senate, slipped a felony streaming bill into the grand funding omnibus. As we noted at the time, this bill -- which was a pure gift to Hollywood -- was never actually introduced, debated, or voted on separately. It was just introduced and immediately slipped into the omnibus. This came almost a decade after Senators had tried to pass a similar bill connected to SOPA/PIPA. You may even recall that when Senator Amy Klobuchar introduced such a bill in 2011, Justin Bieber actually suggested that maybe Senator Klobuchar should be locked up for trying to turn streaming into a felony.

Of course, this whole thing was a gift to the entertainment industry, which has been a big supporter of Senator Tillis. With the flipping of the Senate, Senator Leahy has now become the chair of the IP subcommittee. As you'll also likely recall, he was the driving force behind the PIPA half of SOPA/PIPA, and has also been a close ally of Hollywood. So close, in fact, that they give him a cameo in every Batman film. Oh, and his daughter is literally one of Hollywood's top lobbyists in DC.

So I guess it's no surprise that Tillis and Leahy have now teamed up to ask new Attorney General Merrick Garland to start locking up those streamers. In a letter sent to Garland, they claim the following:
In the CompTIA Security Infrastructure Expert Bundle, you'll get comprehensive preparation to sit four crucial CompTIA exams: Security+, CySA+, CASP, and PenTest+. You'll learn how to implement cryptographic techniques, how to analyze vulnerabilities, how to respond to cyber incidents with a forensics toolkit, and much more. It's on sale for $30.
Two years ago, Supreme Court Justice Clarence Thomas shocked a lot of people by arguing -- somewhat out of nowhere -- that the Supreme Court should revisit the NY Times v. Sullivan ruling. If you're unaware, that 1964 ruling is perhaps the most important and fundamental Supreme Court ruling regarding the 1st Amendment. It's the case that established a few key principles and tests that are incredibly important in stopping vexatious, censorial SLAPP suits -- often filed by those in power against those who criticize them.

Now, a DC Circuit appeals court judge -- and close friend of Thomas's -- is suggesting that the court toss that standard. And his reasons are... um... something quite incredible. Apparently, he's mad that the media and big tech are mean to Republicans, and he's worried that Fox News and Rupert Murdoch aren't doing enough to fight back against those evil libs, who are "abusing" the 1st Amendment to spew lies about Republicans. As you'll see, the case in question isn't even about the media, the internet, or Democrats/Republicans at all. It's about a permit in Liberia to drill for oil. Really. But there's some background to go through first.

The key part of the Sullivan case is that, if the plaintiff is considered a "public figure," then they need to show "actual malice" to prove defamation. The actual malice standard is widely misunderstood. As I've heard it said, "actual malice" requires no actual malice. It doesn't mean that the person making the statements really dislikes who they're talking about. It means that the person making the statements knew that the statements were false, or made the statements "with reckless disregard for the truth." Once again, "reckless disregard for the truth" has a specific meaning that is not what you might think.
In various cases, the Supreme Court has made it clear that this means that the person either had a "high degree of awareness" that the statements are probably false or "entertained serious doubts as to the truth" of the statements. In other words, it's not just that they didn't do due diligence. It's that they did, found evidence suggesting the content was false, and then still published anyway.

This is, obviously, a high bar to get over. But that's on purpose. That's how defamation law fits under the 1st Amendment (some might argue that defamation law itself violates the 1st Amendment as it is, blatantly, law regarding speech -- but by limiting it to the most egregious situations, the courts have carved out how the two can fit together). Five years ago, 1st Amendment lawyer Ken White noted that there was no real concerted effort to change this standard, and it seemed unlikely that many judges would consider it.
In just the last five years or so AT&T has been: fined $18.6 million for helping rip off programs for the hearing impaired; fined $10.4 million for ripping off a program for low-income families; fined $105 million for helping "crammers" by intentionally making such bogus charges more difficult to see on customer bills; and fined $60 million for lying to customers about the definition of "unlimited" data. These are just a few of AT&T's adventures in regulatory oversight, and in most instances AT&T lawyers are able to lower the fines, or eliminate them entirely, after years of litigation.

AT&T's latest scandal, like the rest of them, won't make many sexy headlines, but it's every bit as bad. Theodore Marcus, a lawyer at AT&T, emerged this week to accuse the telecom giant of systemically ripping off US schools via the FCC's E-Rate program. According to Marcus, this occurred for years, and tended to harm schools in the nation's most marginalized communities. And when he informed AT&T executives of this they... did nothing:
This week, our first place winner on the insightful side is Blake C. Stacey with a response to the return of the PACT Act, and especially its traffic thresholds for regulations:
Today, we finish our journey through the winners of the third annual public domain game jam, Gaming Like It's 1925. We've covered ~THE GREAT GATSBY~, The Great Gatsby Tabletop Roleplaying Game, Art Apart and There Are No Eyes Here, Remembering Grußau, and Rhythm Action Gatsby, and now it's time for the final winner: Best Analog Game recipient Fish Magic by David Harris.

David Harris is our one returning winner this year, having topped the same category in Gaming Like It's 1924 with the game The 24th Kandinsky. This year's entry is at once similar and very different: like that previous game, Fish Magic is about exploring the work of a famous painter, but it takes an entirely new approach to doing so. And that change of approach underlines what makes both games so compelling: their mechanics are carefully crafted to perfectly suit the artworks at their core. Where The 24th Kandinsky was about manipulating the shapes and colors of Kandinsky's abstract art, Fish Magic is about letting the evocative surrealism of the titular painting by Paul Klee spark your imagination. To that end, the painting becomes the game board, and is populated by words randomly selected from a list, poetically divided into the "domains" of Celestial, Earthly, and Aquatic:

The players take turns moving between nodes on the board, taking a word from each one to build a collection, which they can then use to build phrases when they are ready. The goal is to convince the other players that your constructed phrase represents either a type of "magic fish", or a type of "fish magic".
Points are gained by winning the support of other players for your fish magic or your magic fish, and reduced according to how many extra words you have sitting in your collection, thus encouraging players to be extra creative and find ways to make convincing phrases with the words they have, rather than just chasing the ones they want.

If you're wondering what exactly makes for a good type of fish magic or magic fish, or what that even means — well, that's kind of the point, and exactly why this approach to the game is so perfect for the source material! Paul Klee's painting is appreciated for its magical depiction of a mysterious and intriguing underwater world, and the way its techniques — a layer of black paint scratched off to reveal vibrant colours underneath, and a square of muslin glued to the center of the canvas — suggest wondrous depths obscured by a hazy curtain. Fish Magic the painting provokes imagination and flights of fancy, and Fish Magic the game adds just enough mechanical scaffolding to make this process explicit and collaborative.

Anyone could slap a board layout on a famous painting, add some rules, and call it a game — but it takes a real appreciation for the painting, and a real intent to do something meaningful with it, to craft such a simple premise that so perfectly aligns with the source material. Like The 24th Kandinsky last year, just a quick read of the rules was enough to make our judges eager to play, and it was an easy choice for the Best Analog Game.

You can get all the materials for Fish Magic on Itch, and check out the other jam entries too. Congratulations to David Harris for the win!

And that's a wrap on our series of winner spotlights for Gaming Like It's 1925. Another congratulations to all the winners, and a big thanks to every designer who submitted an entry. Keep on mining that public domain, and start perusing lists of works that will be entering the public domain next year when we'll be back with Gaming Like It's 1926!
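The scoring mechanic described above (support from other players earns points, hoarded words cost them) can be sketched in a few lines of Python. This is just an illustrative model; the specific point values and the `score_round` helper are assumptions for the sketch, not Fish Magic's published rules.

```python
# Hypothetical sketch of Fish Magic's scoring rule: points for winning
# other players' support, minus a penalty for leftover words. The exact
# values (1 point per supporter, -1 per unused word) are assumed here
# for illustration only.

def score_round(supporters: int, leftover_words: int,
                points_per_supporter: int = 1,
                penalty_per_word: int = 1) -> int:
    """Score one phrase: support earns points, hoarded words cost them."""
    return supporters * points_per_supporter - leftover_words * penalty_per_word

# A player who convinces two others but is sitting on three unused words
# comes out behind -- which is the incentive the rules describe:
print(score_round(supporters=2, leftover_words=3))  # prints -1
```

Under these assumed values, collecting only the words you can actually use is always the better strategy, which matches the design intent the rules describe.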
The Deus Ex franchise has found its way onto Techdirt's pages a couple of times in the past. If you're not familiar with the series, it's a cyberpunk-ish take on the near future with broad themes around human augmentation, and the weaving of broad and famous conspiracy theories. That perhaps makes it somewhat ironic that several of our posts dealing with the franchise have to do with mass media outlets getting confused into thinking its augmentation stories were real life, or that the conspiracy theories centered around leaks for the original game's sequel were true. The conspiracy theories woven into the original Deus Ex storyline were of the grand variety: takeover of government by biomedical companies pushing a vaccine for a sickness they created, the illuminati, FEMA takeovers, AI-driven surveillance of the public, etc.

And it's exactly that sort of conspiracy-driven thinking today that led Warren Spector, the creator of the series, to recently state that he probably wouldn't have created the game today if given the chance.
Summary: After Amazon refused to continue hosting Parler, the Twitter competitor favored by the American far-right, former Parler users looking to communicate with each other -- but dodge strict moderation -- adopted Telegram as their go-to service. Following the attack on the Capitol building in Washington, DC, chat app Telegram added 25 million users in a little over 72 hours.

Telegram has long been home to far-right groups, who often find their communications options limited by moderation policies that, unsurprisingly, remove violent or hateful content. Telegram's moderation is comparatively more lax than several of its social media competitors, making it the app of choice for far right personalities.

But Telegram appears to be attempting to handle the influx of users -- along with an influx of disturbing content. Some channels broadcasting extremist content have been removed by Telegram as the increasingly-popular chat service flexes its (until now rarely used) moderation muscle. According to the service, at least fifteen channels were removed by Telegram moderators, some of which were filled with white supremacist content.

Unfortunately, policing the service remains difficult. While Telegram claims to have blocked "dozens" of channels containing "calls to violence," journalists have had little trouble finding similarly violent content on the service, which either has eluded moderation or is being ignored by Telegram. While Telegram appears responsive to some notifications of potentially-illegal content, it also appears to be inconsistent in applying its own rule against inciting violence.

Decisions to be made by Telegram:
On one side, you've got wireless carriers implying that 5G is some type of cancer curing miracle (it's not). On the other, we have oodles of conspiracy theorists, celebrities, malicious governments, and various grifters trying to claim 5G is some kind of rampant health menace (it's not). In reality, 5G's not actually interesting enough to warrant either position, but that's clearly not stopping anybody in the post-truth era.

But it's all fun and games until somebody gets hurt.

Over the last year or two, conspiracy theory-driven attacks in both the UK and US have ramped up not just on telecom infrastructure, but on telecom workers themselves. From burning down cellular towers to putting razor blades and needles under utility pole posters to injure workers, it's getting exceptionally dumb and dangerous. To the point where gangs of people have threatened telecom workers who don't even work in wireless.

As the Intercept notes, the rise in attacks has finally gotten the attention of law enforcement. In New York, law enforcement has finally keyed into the fact that the conspiracy theories have fused white supremacists and Q Anon dipshittery into one problematic mess that's resulting in concrete harm. White supremacists (here and abroad) have apparently figured out they can amplify and contribute to the conspiracy theories to generate more chaos for the American institutions they're eager to demolish. All stuff that's being amplified in turn by governments like Iran and Russia eager for the same outcome.

While superficially a lot of these folks have the coherence of mud, in many cases the attacks are very elaborate, and specifically targeted:
There was a time when a key part of the Republicans' political platform was for "tort reform" and reducing the ability of civil lawsuits to be brought against companies. The argument they made (and to which they still give lip service) is that too much liability leads to a barrage of frivolous nuisance litigation, which only benefits greedy trial lawyers. Apparently, that concept has been tossed out the window -- as with so many Republican principles -- if you mention "big tech." The latest example of this is a new Section 230 reform bill introduced by Representative Jim Banks called the "Stop Shielding Culpable Platforms Act" which would massively increase liability on any company that hosts user content online.

Banks trumpeted his own confusion on this issue earlier in the week by tweeting -- falsely -- that "Section 230 knowingly lets Big Tech distribute child pornography without fear of legal repercussions." This is wrong. Child sexual abuse material (CSAM) is very, very, very much illegal and any website hosting it faces serious liability issues. Section 230 does not cover federal criminal law, and CSAM violates federal criminal law. Furthermore, federal law requires every website to report the discovery of CSAM to the CyberTipline run by NCMEC.

The law is pretty clear here and you'd think that a sitting member of Congress could, perhaps, have had someone look it up?
As a bunch of US lawmakers keep threatening new laws that would force websites to remove more content, we should note just how much such moves reflect what is happening in China. The NY Times reports that Microsoft is in hot water in China, because LinkedIn apparently has been too slow to block content that displeases the Chinese government. As the article notes, LinkedIn is the one major US social network that is allowed in China -- but only if it follows China's Great Firewall censorship rules.

If you're not familiar with how that works, it's not that the government tells you what to take down -- it's just that the government makes it clear that if you let something through that you shouldn't, you're going to hear about it, and risk punishment. And it appears that's exactly what's happened to Microsoft:
The Complete Become a UI/UX Designer Bundle has 9 courses on UI/UX design, sales funnels, and business development. You'll learn about the phases of web development, the UX design process, different UI design types, and more. You'll also learn about freelancing, starting your own business, and how to create highly profitable sales funnels. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Giant US ISPs have long (ab)used the lack of competition in the broadband market by imposing completely arbitrary and unnecessary monthly usage caps and overage fees. They've also taken to exempting their own content from these arbitrary limits while still penalizing competitors -- allowing them to tilt the playing field in their favor (or the favor of other deep pocketed giants). For example, an AT&T broadband customer who uses AT&T's own streaming services (like HBO Max) faces no bandwidth penalties or fees. If that same customer uses Netflix or a competitor they're socked with surcharges.

When the FCC passed net neutrality rules in 2015, it failed to recognize how this "zero rating" could be abused anticompetitively. They were just starting to figure this out and shift policy positions when Donald Trump was elected and net neutrality rules were killed. However, in the wake of net neutrality's federal repeal, states like California (much like the EU) passed their own net neutrality rules that genuinely prohibit zero rating.

More specifically, California's rules prohibited a company like AT&T from taking money from, say, ESPN, to exempt just ESPN content from caps. Why? One, again, caps are bogus artificial constructs that serve no technical function. And two, if a deep-pocketed giant like ESPN can afford to bypass your pointless cellular restrictions, and a smaller online sports website can't, ESPN just gained an unfair competitive advantage (once AT&T gets its slice of the pie, of course).

AT&T, as you might expect, doesn't like this loss of revenue and power (the only two things this has ever been about for them). As such, the company took to their policy blog this week to whine incessantly about how unfair this all is. The company says that as a result of the rules it's backing off its "sponsored data" zero rating plan not just in California, but in other states. It will also no longer let companies buy cap-exempt, "zero rated" status.
That's a good thing for internet competition, startups, innovation, and consumers. But this being AT&T, of course the company claims the exact opposite:
Asset forfeiture means taking everything that isn't nailed down. Why bother being selective? In most cases, it's pure profit for the law enforcement agency that performs the seizure. And since forfeitures are so rarely successfully challenged, it's pretty much a foolproof way to make a little extra cash. The citizens who happened to be in the wrong place at the wrong time (in their own houses with their own possessions) are acceptable collateral damage.

We're in the middle of a war against drugs. Collateral damage should be expected. That's the viewpoint of drug warriors, even when the "acceptable" collateral damage means nothing more than law enforcement officers taking stuff just because they can.

Here's a rare successful motion for a return of property -- one filed against the Bay County (FL) Sheriff's Office by a person who had his stuff taken even though it was his father being charged with criminal acts. The son -- whose father had all charges dropped after passing away -- took on the Office and secured a ruling that should finally give him back what was taken from him. (via FourthAmendment.com)

Unfortunately, there are still some hurdles standing between the plaintiff and the 75-inch TV and PlayStation 4 taken by the Sheriff's Office during a raid of his father's house. One set of hurdles has already been cleared. But it involved getting the Office to not only admit it was lying about taking the property, but also admitting it had likely liquidated the seized items before it had legal permission to do so.

Here's how the Florida Court of Appeals details the events [PDF] leading up to its findings in favor of the plaintiff.
The Backend Developer Bootcamp Bundle has 5 courses that take you behind the scenes of web development. You'll learn C#, SQL, .NET Core, and more. You'll learn how to write clean code, how to build APIs, how to use Object Oriented Programming, and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
A few weeks ago, Stanford's Daphne Keller -- one of the foremost experts on internet regulation -- highlighted how so much of the effort at internet reform seems to treat "the internet" as if it was entirely made up of Facebook, Google and Twitter. These may be the most visible sites to some, but they still make up only a small part of the overall internet (granted: sometimes it seems that Facebook and, to an only slightly lesser extent, Google, would like to change that, and become "the internet" for most people). Keller pointed out that the more that people -- especially journalists -- talk about the internet as if it were just those three companies, the more it becomes a self-fulfilling prophecy, in part because it drives regulation that is uniquely focused on the apparent "problems" associated with those sites (often mis- and disinformation).

I was reminded of this now, with the reintroduction of the PACT Act. As I noted in my writeup about the bill, one of the biggest problems is that it treats the internet as if every website is basically Google, Facebook, and Twitter. The demands that it puts on websites aren't a huge deal for those three companies -- as they mostly meet the criteria already. The only real change it would make for those sites is that they'd maybe have to beef up their customer support staff to have telephone support.

But for tons of other companies -- including Techdirt -- the bill is an utter disaster. It treats us the same as it treats Facebook, and acts like we need to put in place a massive, expensive customer service/content moderation operation that wouldn't make any sense, and would only serve to enable our resident trolls to demand that we provide a detailed explanation of why the community voted down their comments.

In that same thread, Keller suggested something that I think would be quite useful.
She suggested that there should be a sort of "test suite" of websites that anyone proposing internet regulation should use to explore how the regulations would affect those sites.
We've noted repeatedly how the United States has an unhealthy fascination with the growth for growth's sake mindset. That's best exemplified by our near-endless adoration of megamergers in sectors like telecom, which result in extremely harmful monopolization and consolidation problems that are extremely obvious, but we choose to ignore anyway. Time after time after time in telecom (and banking, and airlines, and...), companies promise a universe of investment, job creation, innovation, and synergies in exchange for merger regulatory approval. And time after time after time, reality shows that these pre-merger promises are meaningless and harmful... unless you're one of the few investors or executives who benefit.

Despite the steady drumbeat of market, employment, and consumer harms from such mergers, we insist on learning nothing from the experience here in the States; the job and competition killing Sprint T-Mobile deal being just the latest in a long line of examples of companies promising all manner of job growth, competition, and innovation, right before the exact opposite happens. With no meaningful penalty whatsoever, unless you're a consumer or employee.

While you might think things are different up in Canada, often they're even worse. This week, two of the nation's biggest telecom giants, Shaw and Rogers, announced they'd struck a new $26 billion deal that would dramatically reduce overall competition in the already not-particularly-competitive Canadian telecom market. The announcement is rife with all manner of dodgy claims of amazing "synergies" and job creation:
Late last year, we discussed a disappointing move by GOG to delist well-reviewed horror PC game Devotion from its platform. Making it all very odd were the facts that GOG had announced just that morning that the game would be available that day, and that Devotion had previously been delisted from Steam as well. The reason for the multiple delistings was never perfectly spelled out in either case, but the game includes a reference to China's President Xi and the never ending joke that he resembles Winnie the Pooh. GOG, instead of being open about that being the obvious reason to delist the game, said it made the move after receiving "messages from gamers." Groan.

Well, fortunately, this is 2021, which means instead of the game dying on the doorstep of well-entrenched gatekeepers, developer Red Candle Games can instead just release the game itself on its own website.
Summary: In the fall of 2019, Disney launched its Disney+ streaming service to instant acclaim. While it offered up access to the extensive Disney catalog (including all of its Marvel, Star Wars, and 21st Century Fox archives), the first big new hit for the service was a TV series set in the Star Wars universe called The Mandalorian, which featured a character regularly referred to as “Baby Yoda.”

Baby Yoda was a clear hit online, and people quickly made animated gif images of the character, helping spread more interest in The Mandalorian and the Disney+ service. However, soon after Vulture Magazine put up a story that was all just Baby Yoda GIFs, it was discovered that Giphy, a company that has built a repository of GIFs, had taken all of the Baby Yoda GIFs down. This caused many to complain, blaming Disney, highlighting that such GIFs were clearly fair use.

Many people assumed that Disney was behind the takedown of the Baby Yoda GIFs. This may be a natural assumption since Disney, above and beyond almost any other company, has a decades-long reputation for aggressively enforcing its copyright. The Washington Post even wrote up an entire article scolding Disney for “not understanding fans.”

That article noted that it was possible that Giphy pre-emptively decided to take down the images, but pointed out that this was, in some ways, even worse. This would mean that Disney’s own reputation as an aggressive enforcer of copyrights would lead another company to take action even without an official DMCA takedown notice.

Giphy itself has always lived in something of a gray area regarding copyright, since many of the GIFs are from popular culture, including TV and movies. While there is a strong argument that these are fair use, the company has claimed that most of its content is licensed, and said that it does not rely on fair use.

Decisions to be made by Giphy:
Never let it be said that cops are not open-minded.

Sure, everyone with a darker-than-white skin tone moving around in any part of the city deemed unsafe by the same people charged with keeping it safe is almost always considered a de facto criminal, but cops are still very willing to explore alternate avenues when it comes to arresting and criminally charging people.

Let's take a look at cops and their willingness to suspend their disbelief. Anyone accused of a crime is inherently untrustworthy: guilty until proven innocent. This includes people they've killed for doing nothing more than, say, threatening to kill themselves. The only good criminals are those who are willing to work with cops. These criminals have reputations that are unassailable, and cops are willing to fabricate the paperwork needed to keep the assailing of their reputations to a minimum.

Cops and prosecutors have, for years, relied on "experts" who were often no better than YouTube conspiracy theorists. For years, law enforcement has said things like bite marks, hair samples… even mass-produced clothing should be admitted as damning evidence of criminal acts. And everyone indulged them.

We've finally reached the critical mass needed to turn criticism of cop means and methods into mobilization. Years after it should have been apparent this was abject bullshit, the Texas Rangers are finally abandoning an investigative "technique" that has done little more than propel the storylines of horror movies since its inception.
Last summer we wrote about the PACT Act from Senators Brian Schatz and John Thune -- one of the rare bipartisan attempts to reform Section 230. As I noted then, unlike most other 230 reform bills, this one seemed to at least come with good intentions, though it was horribly confused about almost everything in actual execution. If you want to read a truly comprehensive takedown of the many, many problems with the PACT Act, Prof. Eric Goldman's analysis is pretty devastating and basically explains how the drafters of the bill tried to cram in a bunch of totally unrelated things, and did so in an incredibly sloppy fashion. As Goldman concludes:
Everyone loves buying location data. Sure, the Supreme Court may have said a thing or two about obtaining this data from cell service providers but it failed to say anything specific about buying it from third-party data brokers. Oh well! Any port in an unsettled Constitutional storm, I guess.

The DEA buys this data. So does ICE and the CBP. The Defense Department does it. So does the Secret Service and, at least once, so did the IRS. Data harvested from apps ends up in the hands of companies like Venntel and Babel Street. These companies sell access to this data to a variety of government agencies, allowing them to bypass warrant requirements and phone companies. Sure, the data may not be as accurate as that gathered from cell towers, but it's still obviously very useful, otherwise these brokers wouldn't have so many powerful customers.

The latest news on the purchasing of location data comes to us via Joseph Cox and Motherboard -- both of which have been instrumental in breaking news about the government's new source of third-party data capable of tracking people's movements.

So, who's using this data now? Well, it's a government agency overseeing a very captive audience.
The Complete 2021 Beginner to Expert Guitar Lessons Bundle has 14 courses to help you master playing guitar. By the end of these courses, you'll be playing chords in songs, soloing, strumming various patterns, reading guitar tab, and generally understanding your guitar. You'll be able to teach yourself any song you want to learn. Courses will also introduce you to electric guitar, blues and jazz guitar, and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
So last year, when everybody was freaking out over TikTok, we noted that TikTok was likely the least of the internet's security and privacy issues. In part because TikTok wasn't doing anything that wasn't being done by thousands of other companies in a country that can't be bothered to pass even a basic privacy law for the internet. Also, any real security and privacy solutions need to take a much broader view.

For example, while countless people freaked out about TikTok, none of those same folks seem bothered by the parade of nasty vulnerabilities in the nation's telecom networks, whether we're talking about the SS7 flaw that lets governments and bad actors spy on wireless users around the planet or the constant drumbeat of location data scandals that keep revealing how your granular location data is being sold to any nitwit with a nickel. Or the largely nonexistent privacy and security standards in the internet of broken things. Or the dodgy security in our satellite communications networks.

Point being, hysteria over the potential threat of a Chinese app packed with dancing tweens trumped any real concerns about widespread, long-standing security vulnerabilities and privacy issues, particularly in telecom. This week, this apathy was once again on display after reporters found that a gaping flaw in the SMS standard lets hackers take over phone numbers in minutes by simply paying a company to reroute text messages. All for around $16:
It's been a delayed reaction, but legislators are finally trying to do something about the horrific outcomes that result from advances in technology colliding with laws that have been on the books for decades. Smartphones are omnipresent and teens are using them just like adults use them. Sexting -- the sending of explicit images to willing recipients -- shouldn't be illegal. And yet it is because some of those participating in this consensual distribution of explicit images are minors.

Operating under the belief that no one engages in sexual acts until they reach the age of consent, law enforcement has managed to turn this form of communication into a lifetime of misery for participants. Perhaps the most disturbing aspect of using child porn laws to prosecute minors for sexting is the fact that actual sexual acts would be legal under the same set of laws.

Rather than allow parents to handle sexting by minors, prosecutors have stepped in to turn consenting teens into sexual predators, even if they've done nothing more than send images of themselves to another teen. There's a massive logical leap that needs to be made to turn a teen photographing their own body into their own child pornographer, but cops and prosecutors have been willing to bridge that gap over reality to prematurely end these teens' lives. Charges stemming from child porn laws -- even when the teen has done nothing but "exploit" themselves -- come with a lifetime of downsides, thanks to sex offender statutes.

Maryland's legislature is trying to mitigate the damage done by existing laws -- ones passed by legislators who could not have possibly foreseen teens willingly (and easily) distributing sexual images amongst themselves. The absence of any actual child pornographer isn't something addressed by child porn laws, so the Maryland legislature has decided to make it a bit more difficult for prosecutors to convert questionable judgment calls by teens to criminal charges.
The inability of someone to understand the idea/expression dichotomy in copyright law strikes again! For those of you not familiar with this nuance to copyright law, it essentially boils down to creative expression being a valid target for copyright protection, whereas broader ideas are not. In other words, the creator of Batman can absolutely have a copyright on Batman as a character, but cannot copyright a superhero who is basically a rich crazy guy who fights crime in a cape and cowl with a symbol of an animal on his chest. You get it.

Katrina Parrott, who came up with some original emojis of a more diverse nature than previously made, does not get it. She sued Apple late last year, claiming copyright infringement after Apple came out with its own diverse emojis.
The Pasco County (FL) Sheriff's Office is being sued over its targeted harassment program -- one it likes to call "predictive policing."

Predictive policing is pretty much garbage everywhere, since it relies on stats generated by biased policing to generate even more biased policing. In Pasco County, however, it's a plague willingly inflicted on residents by a sheriff (Chris Nocco) who has apparently described the ultimate goal of the program as "making [people] miserable until they move or sue."

Well, Pasco County's getting one of these outcomes, after years of hassling residents who happen to find themselves labelled as criminals or possible criminals by the Sheriff's faulty software. Under the guise of "fighting crime," Sheriff's deputies make multiple visits to residences deemed troublesome, ticketing them for unmowed lawns, missing mailbox numbers, or for "allowing" teens to smoke on their property.

This program has bled over into the area's schools, subjecting minors to the same scrutiny for failing to maintain high grades or steady attendance. In one case, a 15-year-old on probation was "visited" by deputies 21 times in six months. Since 2015, 12,500 "checks" have been performed as part of the Office's predictive policing program.

The Institute for Justice is representing four plaintiffs, including Robert Jones -- a target of the program who did both things the Office wanted: moved and sued.
Textiles have been around for such a long time that we barely think about them. The making of fabric is one of the oldest crafts, and has played a major role in human civilization for thousands of years — which might lead one to assume that there's nothing left to be learned from fabric's history. That assumption would be wrong. This week we're joined by Virginia Postrel, whose book The Fabric Of Civilization: How Textiles Made The World is a fascinating look at how textiles have pushed and shaped the history of innovation, and how the story of fabric can teach us important lessons about today's biggest challenges around innovation.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Back when Netflix was a pesky upstart trying to claw subscribers away from entrenched cable providers, the company had a pretty lax approach to users who shared streaming passwords. At one point CEO Reed Hastings went so far as to say he "loved" password sharing, seeing it as akin to free advertising. The idea was that as kids or friends got on more stable footing (left home to job hunt, whatever), they'd inevitably get hooked on the service and purchase their own subscriptions. Execs at HBO (at least before the AT&T acquisition) have stated that password sharing doesn't really hurt these companies' bottom lines, in part because, much like with traditional piracy, there's no guarantee these users would actually subscribe if they lost access.

In the last year or two, as Netflix's dominance grew, the company's position on the subject unsurprisingly started to toughen. And last week, the company began testing a system that would nudge password-sharing subscribers to get their own accounts:
We've talked a lot in the past about how almost no one seems to actually understand privacy, and that leads to a lot of bad policy-making, including policy-making that impacts the 1st Amendment and other concepts that we hold sacred. Sometimes, it creates truly bizarre scenarios, like the arguments being made by Texas's Attorney General in the latest amended antitrust complaint against Google.

As you'll likely recall, back in December, Texas's Attorney General Ken Paxton -- along with nine other states -- filed an antitrust lawsuit against Google. There were some bits in the lawsuit that suggested some potentially serious claims, but the key parts were heavily redacted. Among the non-redacted parts were some really embarrassing mistakes, including claiming that Facebook allowing WhatsApp users to back up their accounts to Google Drive was giving Google a "backdoor" into WhatsApp communications.

That makes the latest amended complaint even more bizarre. It attacks Google for doing more to protect its users' privacy. As you remember, a couple weeks ago, Google noted that as it got rid of 3rd party cookies in Chrome, it wasn't going to replace them with some other form of tracking. This is, clearly, good for privacy. It is, also, good for Google, since it's better positioned to weather a changing ad market that doesn't rely on 3rd party cookies tracking you everywhere you go.

So the new amended complaint takes a move that is clearly good for everyone's privacy and whines that it's an antitrust violation.
The 2021 Premium Python Bootcamp Bundle is your ultimate guide to learning Python. With 13 courses, you'll start with the very basics and work your way towards more advanced skills like game design, web automation, and more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Law enforcement loves loves LOVES third parties. Anyone one step removed from someone they're investigating generally isn't covered by the Fourth Amendment, which means no one needs a warrant or probable cause to go fishing for "third party" data.

But when it comes to the accused, what's easy for law enforcement is seldom simple for regular citizens. Third parties obtain tons of personal data when interacting with customers and users. But when a regular person asks for this information, third parties apparently feel free to blow them off. That's the case when someone's trying to do nothing more than dispute something on their credit record. And it's also the case when someone's life is literally on the line.

This cavalier approach to record keeping might finally cost a third party some money. A man falsely accused of murder is taking car rental agency Hertz to court for sitting for several years on a receipt that would have cleared him.
The British government is looking to literally silence dissent. Protests are a fact of life. There hasn't been a government yet that's been able to avoid them. But governments still do all they can to prevent them from reaching critical mass. In Hong Kong, the Chinese government has turned protesting into a national security crime with life sentences. In the United States, legislators are still trying to find ways to shut people up without violating their long-protected right to be verbally and demonstratively angry at their government.

Over in the UK, the government wants people to shut up. So, the Home Office has crafted a bill that would do exactly that: criminalize the "noise" protesters make. The bill would amend the Public Order Act 1986 to make it a crime to do the one thing demonstrations and protests are supposed to do: draw the public's attention. Here's Ian Dunt, writing for Politics.co.uk.
Late last year, we discussed Microsoft's acquisition of Zenimax, the parent company of Bethesda, and what it would mean for the studio's beloved franchises. At particular issue, given that this is Microsoft we're talking about, was whether new or existing franchises would be exclusive to Xbox consoles and/or PC. The communication out of Microsoft has been anything but helpful in this respect. First, Xbox chief Phil Spencer and Bethesda's Todd Howard made vague statements that mostly amounted to: man, we don't have to make Bethesda games exclusives and it's hard to imagine us doing so. Only a few weeks later, another Microsoft representative clarified that while the company may have plans to make Bethesda games "first or best" on Microsoft platforms, "that's not a point about being exclusive." This naturally led most to believe that Microsoft might have timed release windows on other platforms, but wouldn't be locking any specific titles down.

What a difference a few months can make, it seems. With the acquisition officially complete, Microsoft put out a "welcome" announcement to the Bethesda team, which included this fun bit to be consumed by the public.
The Kansas City Police Department has managed to turn a few heads -- and not in the good way -- with an internal PowerPoint that may as well have been titled "So, You've Killed Someone." The document was obtained during discovery in a wrongful death suit against the KCPD. Back in 2019, Officer Dylan Pifer shot and killed Terrance Bridges, claiming he thought Bridges was trying to pull a gun from his sweatshirt pocket. No gun was found on Bridges.

The presentation [PDF] obtained from Bridges' family's lawyer by the Kansas City Star advises cops of two things: police shootings should be handled like routine criminal investigations to eliminate claims of bias. And police shootings should be handled nothing like routine criminal investigations because they involve cops.

The opening slide makes it clear what the priority is in investigations of shootings by cops: preserving the narrative. It even has the number one next to it.
Despite bottomless pockets and all but owning state and federal regulators for the last four years, telecom continues to stumble in its adaptation to the streaming video era. Verizon's attempt to pivot from curmudgeonly old phone company to sexy new media brand fell flat on its face. AT&T's plan to spend $200 billion on the Time Warner and DirecTV mergers to dominate the television space has resulted in the company losing 8 million pay TV subscribers in just the last four years. In short, pampered telecom monopolies aren't finding it particularly easy to get ahead in more competitive markets.

Comcast too isn't having a great time of it, despite dumping the company's resources into its new Peacock streaming platform. A new filing this week indicates that Comcast lost $914 million on the venture last year alone. Some of these losses were expected as Comcast shuffles resources around NBC Universal, pours money into new projects, and streamlines the company's overall structure, but it's worth noting that Comcast remains somewhat cagey about how many paying customers are actually signed up:
The good news is that Iowa prosecutors' attempt to jail a journalist for being present at a protest has failed. Andrea Sahouri -- who was arrested while covering a George Floyd protest in Des Moines last summer -- has been acquitted of all charges by a jury. But the fact that she was prosecuted at all is still problematic.

Sahouri was arrested by Des Moines police officers while apparently walking away from the scene of a protest. Officers at the scene broadcast conflicting orders from their squad cars. While one loudspeaker told protesters to disperse, another told protesters to "protest peacefully." Officer Luke Wilson performed the arrest. Unfortunately, it took place out of view of nearby CCTV cameras. That shouldn't have been a problem, since Officer Wilson was wearing a body camera. But he "forgot" to ensure it was recording before he began arresting people.

The prosecution of Sahouri was handled in bad faith. Prosecutors sought to bar any mention of her employment as a Des Moines Register journalist during the court case. They claimed the case had nothing to do with press freedom -- that it only involved someone disobeying a lawful order to disperse. They claimed this despite recordings of the PD's arrival on scene showing officers issuing conflicting orders to protesters.
The DOJ has indicted another company for supposedly making it easier for criminals to elude law enforcement. The true target, though, isn't the company whose principals have been indicted, but encryption itself.

A couple of years ago the DOJ decided to bring RICO charges against Phantom Secure, a cellphone provider that catered to the criminal element with "uncrackable" phones/messaging services built on existing Blackberry hardware/software. The FBI approached Phantom Secure, asking for an encryption backdoor that would allow it to snoop on the company's customers. Phantom Secure declined the FBI's advances. Its phones -- originally marketed to professionals desirous of additional security -- were soon marketed to criminals, a market sector that truly valued the security options offered by Phantom.

But rejecting the FBI and selling to criminals causes problems. The DOJ went after Phantom Secure, arresting the owner and charging him with a bunch of RICO and RICO-adjacent crimes.

Now it's happening again. The DOJ has decided encryption is a crime when companies offering encrypted communications choose to sell to people the DOJ considers to be criminals. Here's the DOJ's portrayal of its crime-fighting efforts -- one supported by people who rarely find a sandwich they don't think can be criminally charged.
The Premium 2021 Project and Quality Management Bundle has 22 courses to help you learn how to handle any project and deliver efficient results. You'll be introduced to the fundamentals of project and product management, the keys to being a successful leader, and how to prepare for various certification exams. Courses cover Six Sigma, Agile, Jira, Lean, and more. It's on sale for $46.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Washington Post tech columnist Geoffrey Fowler recently had a very interesting article about how Amazon won't allow the ebooks it publishes to be lent out from libraries. As someone who regularly borrows ebooks from my local libraries, I find this disappointing -- especially since, as Fowler notes, Amazon really is the company that made ebooks popular. But when it comes to the ebooks it publishes, Amazon won't let libraries lend them out:
Last week the House unveiled (a previous version of this story incorrectly stated the bill had been passed) the Accessible, Affordable Internet for All Act. The bill, which died last year after Mitch McConnell's Senate refused to hold a vote on it, includes a lot of great things, including spending $94 billion on expanding broadband into underserved areas. There's a ton of other helpful things in the proposal, like boosting the definition of broadband to 100 Mbps downstream (and upstream), requiring "dig once" policies that deploy fiber conduit alongside any new highway builds, and even a provision requiring the FCC to create rules forcing ISPs to be transparent about how much they actually charge for monthly service.

A summary (pdf) of the bill offers some additional detail, such as the fact the bill includes a mandate that the government (specifically the Office of Internet Connectivity and Growth within the NTIA) more fully study the impact of affordability on broadband access. In the wake of allegations that the FCC's subsidy auction process is a corrupted and exploited mess, the bill also lays down a lot of groundwork to make the subsidization of broadband access more transparent, equitable, and accountable to genuine oversight with an eye on affordability (instead of exclusively focusing on access, which is the DC norm):