The Royal Canadian Mounted Police have eyes everywhere. That's according to documents obtained via public records requests by The Tyee, which published selections from the 3,000 pages it has spent more than a year suing to obtain.

The RCMP has made news previously for doing things like sidestepping warrant requirements for obtaining user data from ISPs and dropping criminal cases rather than discuss its not-so-secret Stingray devices in court. It's making headlines again, but not the sort it wants. A presentation contained in the document stash provides more details on "Project Wide Awake" -- an advanced social media monitoring program first uncovered by The Tyee more than a year ago.

The program takes its name from a project in an X-Men comic book. The fact that the RCMP chose this name for its social media monitoring program is more than a little chilling.
In AT&T executives' heads, the $67 billion acquisition of DirecTV in 2015 and the $86 billion acquisition of Time Warner in 2018 were supposed to be the cornerstones of the company's efforts to dominate video and online video advertising. Instead, the megadeals made AT&T possibly one of the most heavily indebted companies in the world. To recoup that debt, AT&T quickly ramped up its efforts to nickel-and-dime users at every opportunity, from bogus new wireless fees to price hikes on both its streaming and traditional video services.

This, in turn, wound up driving a customer exodus. In fact, AT&T has lost more than 8 million TV subscribers in just the last three years alone. Not exactly the kind of sector domination the company had in mind.

Last year, "activist" investors at Elliott Management began making a stink about AT&T's obsession with mindless merger mania. Not that it hurt consumers or misdirected funds away from network investment, mind you, just that the debt was dragging down the firm's $3.2 billion investment in AT&T stock. In response, AT&T forced its CEO to "retire," and the company, at Elliott's behest, greatly accelerated mass employee firings and customer service offshoring. AT&T has since fired more than 42,000 employees in just the last few years, despite a $42 billion Trump tax break AT&T promised would result in "thousands of new, high paying jobs," and billions more in regulatory favors ranging from the death of broadband privacy rules to the dismantling of net neutrality.

Now it appears the moves were enough to give Elliott what it wanted. After raising a massive stink throughout much of 2019, the company this week quietly offloaded its entire stake in AT&T:
I won't write up a big summary of the ongoing turmoil in the Twitch community for this post. If you need to be brought up to speed, go see Part 1 or our previous posts on the platform. The only summary you really need is that the past few months have seen Twitch piss nearly everyone off by doing two things. First, it bowed to the RIAA over DMCA notices and nuked a ton of creator content without warning. Second, Twitch began experimenting with very intrusive ads, along with other methods for monetizing creator content. The PR communication coming from Twitch over all of this has been wanting, to say the least.

But now it looks like Twitch is looking to tie a bow around both controversies and piss off its talent even more, having announced that the once-sought-after Twitch Affiliate status, earned through a streamer's ability to get consistent eyeballs, has now been reduced to a pay-to-play scheme involving at least one record label.

Here's the text from Twitch's Affiliate site detailing who qualifies.
I'm beginning to wonder if the folks that run Twitch are secretly attempting to commit corporate suicide. The past several weeks have seen the popular streaming platform embroiled in controversy. It began when, in response to the RIAA labels' DMCA attacks on streamers, Twitch took the unprecedented step of simply nuking a zillion hours of recorded content without warning its creators. In the wake of that, the platform kept essentially silent on its actions, simply advising its creators that they should "learn about copyright". In lieu of any real crisis communication, Twitch instead rolled out the release of a new emoji, pissing everyone off. Then came Twitch's apology, where the Amazon-owned platform acknowledged that it really should have had a method for letting streamers know which content was accused of infringement instead of nuking it all, while also continuing the DMCApocalypse, getting so granular as to allow streamers to be targeted by DMCA claims on game music and sound effects, including on videos that had already been taken down.

With its creators and patrons both in full revolt, it probably wasn't the best timing that Twitch's GlitchCon remote convention took place mid-November. Complaints about the convention were far-reaching, but much of the criticism centered on the coin spent promoting it instead of Amazon simply licensing music so streamers could stream, along with the terse commentary on the turmoil itself.

We'll start with the promotion of the event.
Devin Nunes is one of the most vocal supporters of Parler, regularly insisting that he supports Parler because Parler supports free speech (of course, as we've highlighted, Parler blocks users quite frequently, contrary to its marketing claims). Of course, Nunes is a free speech hypocrite. As we've highlighted over the last few years, he seems to have an itchy trigger finger when it comes to suing the media and various critics for their free speech, in a variety of SLAPP lawsuits -- with no clear answer yet on who is actually paying for these lawsuits designed to stifle and suppress free speech.

Earlier this year, Nunes sued the Washington Post and reporter Shane Harris in the Eastern District of Virginia. That case was transferred to the federal district court in DC, where it continues to move forward (slowly). Now Nunes, with his regular lawyer Steven Biss, has sued the Washington Post yet again, this time naming reporter Ellen Nakashima. Once again, it was filed in the Eastern District of Virginia, meaning that the Washington Post is likely to go through the same process again to try to transfer the case to the DC court.

Like so many Nunes/Biss SLAPP suits, this one is... bad. At issue is the news from right after the election that a Trump loyalist and former Nunes staffer had been made the NSA's General Counsel, apparently over the objections of the NSA's own director. This has raised a bunch of alarms for a variety of reasons -- and is seen as evidence that, for all of the bullshit talk of "the deep state" being out to get Trump, he's spending his last couple months in office trying to construct his own deep state.

It was Ellen Nakashima at the Washington Post who broke the story of the Ellis appointment, and that's the article that Nunes is now suing over. The lawsuit -- somewhat laughably -- argues that two sentences in the article are defamatory. Neither is defamatory. These are the two sentences:
This week, we've got another panel discussion for you, with Mike joining Georgetown Law fellow Gigi Sohn and panel moderator Zach Graves of the Lincoln Network (both also former podcast guests) at the Reboot 2020 conference to discuss the "techlash" — the public opinion backlash against big tech — and try to figure out what exactly it is, and where it's going in the future.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
I'm certainly not the first person (especially on Techdirt) to point out that if conservatives are really concerned about online censorship, they should be putting copyright law under the microscope, rather than, or at least in addition to, Section 230.

The New York Post debacle and the gating of President Trump's post-election tweets are the most recent arrows in the quiver for anti-tech conservatives. Neither has anything to do with copyright (though Hunter Biden's emails, if they're real, are eligible for copyright protection). But whenever Section 230 is used as a synecdoche for the more general laws that govern what private tech companies can and can't do on their sites, I cannot help but ask myself, "why aren't conservatives up in arms about copyright law?"

I haven't done a full accounting of all conservative run-ins with online content moderation policies. Still, at least for the President, the only instances in which something he has posted was taken down -- not had a warning label attached, but properly removed -- were for copyright infringement. In one case, Trump erroneously blamed Twitter and Section 230 for the removal of a video on copyright grounds. Trump's campaign has also gotten into legal trouble by playing music to which he doesn't have the rights at rallies, and conservative figures have been on the receiving end of clearly bogus claims of copyright infringement. Of course, this isn't to dismiss other cases where content has been removed, whatever you may think of them. My point is this: put yourself in the shoes of a right-winger online, and you'd think copyright would get at least as much airtime as Section 230, or any airtime at all. Yet such criticisms are nowhere to be found.

Why is this the case? I have a few theories, though none are particularly satisfying:

One: Copyright is Private Property

I am emphatically against this position, but many conservatives subscribe to the belief that copyright is property and deserves the same moral treatment as tilled land or gathered acorns appropriated by mixing one's labor with it. My disagreements with this position aside, it's an idea that must be taken seriously on the merits and, more relevant to this discussion, because it's a sincerely held belief.

From this vantage point, it's easy to see why the right isn't up in arms about DMCA takedown notices, automated copyright systems, or artists not allowing their songs to be used at political rallies. If someone owns their property, they have a claim against the world to exclude others from its use. You're under no obligation to host a political rally (especially one supporting positions with which you disagree) on your front yard. You can own content in the same way you own your land. Thus you can restrict the use of your work.

This is a straightforward position, but one which contradicts claims of unlawful or unjustified censorship by tech platforms. Twitter and Facebook own their websites in the same way I own my work or someone else owns their lawn. If preventing someone from speaking by using one of these is censorship, they must all be considered censorship.

Though the treatment of works protected by copyright as property seems like an easy way to separate copyright enforcement from content moderation, Twitter has just as strong a claim to ownership of its website as a photographer does to a photo or an artist to a song. Whether or not enforcing one's copyright constitutes censorship, both these views run into an all-or-nothing wall.

Two: China

The terms "thief" and "infringer" are often used interchangeably.
Still, if you're criticizing the unauthorized user of a copy who you don't like for other reasons, you're more likely to call them a thief due to the negative connotation associated with the word. A thief deprives someone of the fruits of their labor, while an infringer sounds like someone who forgot to check the right box on form E-7A.

And that's what the U.S. has done in the case of intellectual property violations by Chinese actors. Allegations of theft cover more than just copyright, extending to a wide range of behaviors ranging from outright espionage to strong-arming business partners into transferring technology. And, while there's no shortage of bootleggers operating out in the open in China, those complaining about Chinese IP theft are more concerned about patents and trade secrets than works protected by copyright.

All that being said, when grievances are aired about the Chinese government, complaints of intellectual property theft inevitably come up alongside far more serious charges against the regime. This tweet from Senator Pat Toomey (R-PA) best illustrates this dynamic:
Get your Fire In A Crowded Theatre gear in the Techdirt store on Threadless »

You've heard it said, usually in defense of some sort of restriction on free speech, and often by people who really should know better: "You can't yell fire in a crowded theatre!" There are a whole lot of reasons that it's a terrible phrase that should have died a long time ago (see Popehat's thorough explanation) but they won't all fit on a t-shirt, so our gear offers a simple rebuttal. It's an old favorite design that we're relaunching today in our Threadless store: You Can Yell Fire In A Crowded Theatre.

As always, the design is available on t-shirts, hoodies, sweaters and other apparel — plus various cool accessories and home items including buttons, phone cases (for many iPhone and Galaxy models), mugs, tote bags, notebooks, and of course face masks.

Check out this and our other gear in the Techdirt store on Threadless »
Whole lot of people complaining about Section 230 at the moment. And it's a whole lot of people who should know better. Do you want to become Europe? Because this is how you become Europe.

In 2019, the Court of Justice of the European Union picked up a libel lawsuit handed to it by an Austrian court. The case dealt with a politician's thin skin and supposedly defamatory content… you know, the sort of kneejerk reaction we've come to expect from authoritarians and bullies running countries with horrendous track records on human rights. But this is Austria, which is generally considered to be part of the "free world," rather than a despotic dictatorship whose top politicians are to be viewed as gods among men -- at gunpoint, if necessary.

Even in the "free world," politicians far too often seem unable to handle criticism responsibly. There's really not much in this case that lends itself to any honest definition of the term "libel." Political rhetoric is superheated stuff, so a lawsuit over being called a "lousy traitor" on Facebook -- as Green Party politician Eva Glawischnig was -- should be considered an unactionable overreaction to normal online discourse. She was also called a "corrupt tramp" and a member of a "fascist party," which is a little more specific but well within the realm of opinion, rather than false statements portrayed as facts. Presumably even the person who posted the comments doesn't truly believe the politician is a sex worker who engages in the illegal acquisition of goods and services and/or is an actual fascist.

None of this matters in Austria. And none of this matters in the rest of the world either, according to the Court of Justice of the European Union (CJEU). Last summer, the CJEU discussed the Austrian lawsuit and opined that maybe Europe should control what content anyone gets to see anywhere else in the world. A few months later, it solidified its shaky thinking, opining that the worldwide reach of the internet justified extraterritorial censorship.
This refurbished Apple MacBook Air gives you superb tech features for a more affordable price. It has an Intel Core i5 1.6GHz processor and Intel HD Graphics 6000 video adapter for fast performance. It comes with a 13.3" widescreen TFT LED-backlit active-matrix glossy display with 1440x900 native resolution, showing photos, videos, and text in clear, crisp detail. With a 54Wh Li-Poly battery, this laptop can last up to 12 hours. All of these features are packed in a razor-thin, lightweight aluminum casing, making it easier for you to take it anywhere. It's on sale at $630 for 128GB or at $660 for 256GB. Use the code SAVE15NOV to get an additional 15% off of this and other items throughout the store.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The timing on this is quite incredible. On Monday, Georgia's (Republican) Secretary of State, Brad Raffensperger, spoke out, saying that Senator Lindsey Graham had called him and implied that Raffensperger should look to throw out ballots that were legally cast in the state. On Tuesday morning, in trying to defend his efforts to undermine the election, Graham tried to shake off his calls with Raffensperger as no big deal, saying that he also spoke to Arizona and Nevada election officials. This does not make things better. Indeed, it actually seems to make things worse (and that's even after Arizona's Secretary of State, Katie Hobbs, claimed that Graham's claims were "false" and that she never spoke to him).

All of this certainly seems like cause for concern about election interference and tampering. Indeed, it's the kind of thing a good government would at least investigate. And, in a stroke of good timing, the Senate Judiciary Committee was all set up on Wednesday to host a hearing about the 2020 Election and "suppression." Except... this hearing was organized and chaired by the very same Senator Lindsey Graham, and was yet another dog and pony show of internet CEOs having to defend specific content moderation choices.

Now a sane person who loosely follows the news might be saying "wait, didn't we just do that last month?" And you'd be right. Just a few weeks ago, there was an almost identical hearing. Both hearings had Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey (the earlier hearing also had Google's Sundar Pichai). Both hearings featured a bunch of grandstanding and often clueless Senators demanding to know specific answers to why the websites did or did not moderate specific pieces of content.

But this time it was the Senate Judiciary Committee, as compared to the Senate Commerce Committee last time. There were a few overlapping guests -- including Senators Ted Cruz, Mike Lee, and Marsha Blackburn. This one also included Senator Josh Hawley, who grandstands with the best of them over this issue. Cruz and Lee basically did a warmed over, half-baked rehash of their performances from a few weeks ago. Hawley's performance was particularly stupid. He claimed to have heard from a "whistleblower" inside Facebook and posted two grainy screenshots of internal Facebook tools. One was its "Tasks" tool, which is a general company-wide task manager, which Hawley used to imply that Facebook, Twitter, and Google are somehow colluding to figure out which users, hashtags, and content they're going to suppress.

This is not how any of this works. Hawley demanded that Zuckerberg turn over every mention of Google or Twitter in their Tasks tool, and Zuck quite reasonably pointed out that he couldn't commit to that without knowing what sort of sensitive information might be involved. This is basically the equivalent of Hawley asking for every email that mentions Twitter or Google. It's an insane and intrusive request, though he threatened to subpoena the company if Zuckerberg wouldn't comply. Hawley then demanded to know if any Facebook employees ever communicate with Twitter or Google.

Zuckerberg, again quite reasonably, pointed out that he's sure that people who work in trust and safety at some point or another know of people in similar roles at other companies, and he's sure at some point or another some of them communicate with each other, but that's quite different from plotting over what content to block, as Hawley kept insisting.
Hawley then trotted out another screenshot of some other internal tool, one that Zuckerberg says he didn't recognize and thus couldn't answer any questions about -- which Hawley again pretended to be some damning evasiveness from the CEO. What it actually suggests is that this is not a very important tool, and Hawley is clearly overstating what it's used for.

Oh, and Hawley, ridiculously, insisted on calling the trust and safety teams at these companies "censorship teams," implying that they deliberately try to silence ideological content (they do not). Of course, what's truly crazy is that many of the half-dozen or so different Section 230 reform bills that Hawley has introduced in the Senate would actually require more content takedowns than we have today. But you can't be a demagoguing populist without demagoguing while the cameras are on, and Hawley played his part.

If you'd like to read my play-by-play response to the entire hearing as it happened, I have a very long Twitter thread:
We've noted for years how broadband providers have increasingly imposed arbitrary, confusing, and punitive usage caps and overage fees to cash in on the lack of competition in US broadband. Not only have industry executives admitted these limits aren't technically necessary, they've increasingly been abused to hamstring competitors. AT&T, for example, doesn't impose the limits on its broadband customers who use its streaming video service (DirecTV Now), but will impose the added charges if you use a competitor like Netflix.

For more than a decade, ISPs have slowly but surely imposed such limits hoping that consumers wouldn't notice (think of the frog in the pot of boiling water metaphor, with you as the frog). But with most folks stuck at home during an historic health and economic crisis, bandwidth usage (and thereby profits gleaned by usage caps) has grown significantly. In fact, data from OpenVault indicates that the number of broadband "power users," or users who consume more than a terabyte per month, has doubled over the past year:
As we noted last week, it was widely expected that sooner or later Donald Trump would turn his post-election temper tantrum towards Chris Krebs, the widely respected director of the Cybersecurity and Infrastructure Security Agency (CISA). Krebs had been standing firm in reporting that there was no evidence to support the widespread conspiracy theories about hacked voting machines. CISA had been proactively debunking these claims.

On Tuesday morning, Krebs tweeted about how election security experts all agreed that there was no evidence of manipulated elections -- directly contradicting the ongoing unsubstantiated claims of the President and his enablers:
Several weeks back, we discussed how Hugo Boss, the German upscale clothier, had opposed the trademark application of an artist who has taken to teaching online art classes during the pandemic. At issue was John Charles' decision to apply for a trademark on the phrase he used to sign off at the end of these classes: "Be Boss, Be Kind." That he had begun selling shirts and hats with the slogan on them, alongside the trademark application, was enough to get Hugo Boss' lawyers working on opposing the application and sending a legal threat letter to Charles, despite the fact that any claim of potential customer confusion between the two entities is laughable at best.

As we noted at the time, while any legal letter such as this is at least mildly scary for someone like Charles, it should be stated that Hugo Boss wasn't overly threatening in the letter. Instead, the letter stated that the company would be opposing the trademark application, but was willing to drop the matter entirely if that application was withdrawn. In public comments, too, Hugo Boss made it clear that it was looking for an amicable resolution to the situation.

And that, almost certainly due in large part to the swift public backlash that occurred, is precisely what happened.
The COVIDian dystopia continues. After a brief respite, infections and deaths have surged, strongly suggesting the "we're not doing anything about it" plan adopted by many states is fattening the curve. With infections spreading once again, the ushering of children back to school seems to have been short-sighted.

But not all the kids are in school. Some are still engaged in distance learning. For many, this means nothing more than logging in and completing posted assignments using suites of tools that slurp up plenty of user data. For others, it feels more like being forced to bring their schools home. In an effort to stop cheating and ensure "attendance," schools are deploying spyware that makes the most of built-in cameras, biometric scanning, and a host of other intrusions that make staying home at least as irritating as actually being in school.

The EFF covered some of these disturbing developments back in August, when some schools were kicking off their school years. Bad news abounded.
A rather interesting First Amendment opinion has been handed down by a federal court in Arizona. (h/t Volokh Conspiracy)

At the heart of it are new mandates for data sharing and data protection by car dealers. In 2019, the Arizona state legislature passed the Dealer Data Security Law, which mandates changes to dealer management systems (DMSs), including the institution of protective measures to limit breaches or leaks of sensitive data held by car dealers.

The law also requires DMS providers to integrate with third parties (like the dealerships themselves) and adopt standardized processes that will facilitate these integrations and improve compatibility between systems. The plaintiffs -- two DMS providers -- sued the state's Attorney General (along with the Arizona Automobile Dealers Association), claiming this new law violated the Constitution by compelling speech, namely the creation of new computer code and documentation.

And so this law, despite its good intentions (more compatibility, better protection of sensitive data), is possibly on its way to being declared unconstitutional. As the court sees it [PDF], compelling the production of code violates the First Amendment.
As school districts are facing the new school year under conditions drastically changed by COVID-19, the digital divide is deepening education inequality in the US.

Many families struggle to meet the requirements of remote schooling, as millions of students around the US lack access to a broadband internet connection. We've learned in pandemic times that our health depends on that of others and that we are only as strong as our weakest links. Still, inequalities arising from the lack of internet and technological access mean that online learning poses insurmountable challenges to many households worldwide, leaving many children behind.

Rachel Cooper, a teacher in the rural Sacramento Valley, told the Atlantic: "It's rough, some kids are using their phones to log into class, but the screens are too small to do work on. Some kids' internet cuts out in the middle of class, and others don't log on at all. I've had several students already say that they were really nervous they were going to fall farther behind in a specific subject because they think distance learning is going to be really difficult."

Many Households Left Behind

While the US is considered to be at the forefront of technological innovation, the Federal Communications Commission estimated that 21 million Americans lack a high-speed internet connection. In fact, researchers at Broadband Now found that the actual number is double the FCC's figure. The disparity in the FCC's numbers is a direct result of relying on internet service providers (ISPs) to self-report. This allows providers to claim they serve the population of an entire block even if they serve just one household on that block.

The right to internet access was historically never prioritized by the US government, and was mostly left to be managed by private ISPs. ISPs gained even more freedom under the Trump administration, when federal regulation got looser. In 2017, net neutrality regulations were abolished, allowing broadband companies to decide where to build out their infrastructure and how much to charge for their services. This was a reversal of the 2015 decision by the Obama administration to exercise stronger oversight over ISPs, and it generally reflected the Trump administration's view that regulation by the market will lead to better results and yield more innovation.

This decision, however, largely led to systemic issues like digital redlining, a practice of creating and perpetuating inequities between already marginalized groups specifically through the use of digital technologies. An example of digital redlining is when ISPs deliberately won't serve certain geographical areas and low-income neighborhoods because they are not considered profitable.

"Unlike rural areas, where providers receive a subsidy to serve a high-cost area, no subsidies exist to encourage providers to serve or upgrade urban neighborhoods despite the perceived lack of profit," Gene Kimmelman, senior advisor at the think tank Public Knowledge, testified. "Either we should build new programs explicitly designed to create competing providers in these underserved neighborhoods or legislation should require universal service standards or other anti-redlining measures enforced at either the state level or by the FCC."

Some of the unconnected families live in areas that are not serviced by providers, but others simply can't afford to pay for an internet connection.
The average cost of internet service in the United States is about $68 per month (compared to Europe's average of $44), which is simply a cost that not all households can bear.

Short-term Solutions for Bridging the Gap

At the advent of COVID-19 and remote schooling, many school districts organized Wi-Fi-equipped buses to drive around areas where disconnected students live. In Albuquerque, New Mexico, the Public Schools, with the help of the City of Albuquerque, were providing "drive-up mobile Wi-Fi units at a number of APS schools and other public locations." These drive-up mobile units were usable within a 100-foot radius, which meant that internet users could remain in their cars to aid social distancing.

Some school districts have also tried to subsidize internet access for disconnected students, often with funds from the government's $2.2 trillion coronavirus aid package under the CARES Act. Additionally, many school districts have purchased and distributed 4G wireless hotspots or paid for discounted wired internet services for low-income families, such as Comcast's $9.95 per month Internet Essentials package, which now connects approximately 200,000 students. However, these efforts are mostly initiated by school districts, and ISPs' "free" offers are usually limited and capped, falling short of the realistic broadband needs of students learning online.

In Need of Long-term Regulation

While the measures taken by school districts are good short-term fixes, they will not solve the digital divide for a future in which online schooling will be a common practice.

Long-term solutions need federal or municipal investments and have to come from non-commercial efforts. For example, Congress could encourage municipal broadband to intervene where private companies do not see worthwhile business opportunities. However, competition between ISPs and municipal broadband networks is limited by state law in more than half of US states, and municipal broadband cannot be set up in areas that are already served by one private ISP.

This means that many communities are left with ISPs that provide poor quality and expensive services because of the lack of competition, and are unable to pursue municipal broadband because their area is considered "served". Some states also require municipal broadband services to match prices of the local ISP, further limiting competition for private providers.

Global Outlook

According to a new UNICEF report, 463 million children globally were unable to access remote learning when schools closed due to COVID-19. The report highlights significant inequality across regions: sub-Saharan Africa is the most affected, where at least half of all students cannot be reached with remote learning.

The governments of South Korea and Sweden are building national broadband infrastructures and letting ISPs use them. In the US, essential services like broadband connections are left to be managed by private ISPs, leading to whole areas and neighborhoods with no or poor internet connection. What will be the long-term systemic consequences of children in rural and low-income households struggling to keep up with online curricula?

Andrea Kelemen is a Berlin-based writer and content strategist exploring topics related to technology ethics and the cultural effects of digital media for FairShake. IRL, she likes swimming, dancing and deconstructing objectifying dualisms with both human and non-human agents.
A few weeks ago, the RIAA hurled a DMCA takedown notice at an unlikely target: GitHub. The code site was ordered to take down its repositories of youtube-dl, software that allowed users to download local copies of video and audio hosted at YouTube and other sites.

The RIAA made some noise about copyright infringement (citing notes in the code pointing to Vevo videos uploaded by major labels) before getting down to business. This was a Section 1201 complaint -- one that claimed the software illegally circumvented copyright protection schemes applied to videos by YouTube.

The takedown notice demanded removal of the code, ignoring the fact that there are plenty of non-infringing uses for a tool like this. It ignored Supreme Court precedent stating that tools with significant non-infringing uses cannot be considered de facto tools of infringement. It also ignored the reality of the internet: targeting one code repository wouldn't erase anything from the dozens of other sites hosting the same code, and engaging in an overblown, unjustified takedown demand would only increase demand for (and use of) the software.

Youtube-dl is a tool used by plenty of non-infringers. It isn't just for downloading Taylor Swift videos (to use one of the RIAA's examples). As Parker Higgins pointed out, plenty of journalists and accountability activists use the software to create local copies of videos so they can be examined in far more detail than YouTube's rudimentary tools allow.
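To make the archival workflow those researchers describe concrete, here's a minimal sketch using youtube-dl's Python embedding interface. The options and the placeholder URL are illustrative assumptions for this example, not anything taken from the RIAA notice or the reporting:

```python
import youtube_dl  # the library at issue; installable as the "youtube_dl" package

# Illustrative options: grab the best available quality and file the copy
# away under an archive directory, organized by uploader and title.
options = {
    "format": "best",
    "outtmpl": "archive/%(uploader)s/%(title)s.%(ext)s",
}

# Hypothetical placeholder URL; any publicly posted video a journalist or
# researcher needs to preserve for offline review would work the same way.
video_url = "https://www.youtube.com/watch?v=EXAMPLE_ID"

with youtube_dl.YoutubeDL(options) as ydl:
    ydl.download([video_url])  # saves a local copy for later frame-by-frame analysis
```

Nothing in a snippet like that is specific to infringing content, which is precisely the point the takedown notice glossed over.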
Project Management Professional (PMP) is the most important industry-recognized certification for project managers. Whether you are new to Project Management or on your way to completing your 35 hours of study to sit for your PMP Certification Exam, the PMP 6 Certification Training Course will give you the skills you need. This class is delivered on-demand and allows you to spend as much time as you need on each of the subject areas. The course comes with games, over 1,000 exam prep questions and much more. It's on sale for $79. Don't forget to use the code SAVE15NOV to get an additional 15% off of this course and other items throughout the store.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
UK Parliament Member Damian Collins has been pushing dangerous nonsense about social media content moderation for a while now. A couple years ago he held a theatrical hearing on fake news that was marred by the fact that Collins himself was spreading fake news. Last year, he announced incredibly dangerous ideas about "stopping fake news" on websites.

And now he's doing something even stupider. According to the Financial Times, Collins is working with Boris Johnson on forcing a "duty of impartiality" on websites, saying that they cannot moderate political content:
Back in March, the Trump FCC put on a big show about a new "Keep America Connected Pledge" to help broadband users during COVID. In it, the FCC proudly proclaimed that it had gotten hundreds of ISPs to suspend usage caps and late fees, and agree to not disconnect users who couldn't pay for essential broadband service during a pandemic. The problem: the 60 day pledge was entirely voluntary, temporary, and, because the FCC just got done obliterating its authority over ISPs at lobbyist behest (as part of its net neutrality repeal), impossible to actually enforce. It was regulatory theater.

The rather meaningless pledge has since expired despite the pandemic only getting worse. And because this FCC doesn't actually care about consumer protection (it literally doesn't even collect data on who is getting kicked offline for nonpayment), many ISPs simply ignored the pledge and kicked users offline anyway -- even disabled Americans who were told repeatedly by their ISPs that they wouldn't be booted offline for nonpayment during the crisis. Meanwhile, most ISPs have also restored their bullshit, arbitrary usage caps, making them an additional pretty penny during a crisis.

Meanwhile, because the FCC's broadband availability and pricing data collection is a joke, it's proving harder than ever for local municipalities to help the public during this crisis. With broadband now essential for survival during COVID, many towns and cities are struggling to ensure Americans can get online, and working blind thanks to federal government incompetence and a lack of transparency in the broadband sector.

Government leaders in Philadelphia, for example, can't get accurate low-income broadband household penetration data from either the FCC or Comcast, so they're literally having to go around asking families if they've got service and how much they pay:
Techdirt has been writing about trade agreements for many years. The reason is simple: as digital technology permeates ever more aspects of modern life, so international trade deals reflect this by including sections that have an important impact on the online world. A new trade agreement between Japan and the UK (pdf) is a good example. It is essentially a copy of the earlier trade deal between the EU and Japan (pdf) -- because of Brexit, UK negotiators have not had the time or resources to draw up their own independent text, which typically requires years of drafting and negotiation. But significantly, the Japan-UK agreement adds several major sections purely about digital matters. All are terrible for the general public, as a briefing document from the UK-based Open Rights Group explains.

One issue concerns transfers of personal data between the UK and Japan. In the EU, this is governed by the well-known and relatively stringent GDPR. In fact, in order to achieve "adequacy" -- essentially, legal permission to receive EU personal data -- Japan has had to strengthen its data protection laws:
What a few weeks for Twitch. You will recall that the platform went about pissing off a ton of its talent and viewers by nuking a metric ton of video content on the site in response to a flood of DMCA takedown notices, most of them from the RIAA. And this truly was the nuclear option, far different from the notice/counternotice system most platforms use. In fact, it was so extraordinary that it arguably lost Twitch its DMCA safe harbor. Regardless, when the company then followed up with a message to all Twitch creators that they should go educate themselves on matters of copyright and proactively delete any recordings or clips that might run afoul of copyright law, it created a cluster-fuck with virtually nobody having any idea how or what they should be doing. In response to the turmoil, Twitch brilliantly rolled out an announcement for a new emoji.

And it just keeps getting worse. This week, Twitch has finally come out with an apology to its talent, noting that the company, bought by Amazon in 2014, probably should have been able to provide better tools and a system that wouldn't have required the mass deletion of millions of hours of recorded content.
The Baltimore PD can still use its flying spies, says the Fourth Circuit Court of Appeals. The aerial surveillance program -- first "introduced" by accident in 2016 -- allows the PD to track the movement of people across the entire city, thanks to high-powered cameras mounted on airplanes. The surveillance system (created by Persistent Surveillance Systems) can capture 32 square miles. People and vehicles are reduced to pixels despite the power of the 192-million-pixel cameras, but combining this footage with street-level surveillance allows the PD to deanonymize moving pixels observed near crime scenes.

The entire system was paid for by a private donor, allowing the PD to sidestep its transparency obligations to the public. After the initial run ended, the PD resurrected it -- this time following the proper processes for introducing new surveillance systems to the city.

Earlier this year, a federal court rejected requests for an injunction, stating that the observations of moving pixels didn't amount to a Constitutional violation. Even though these pixels could be identified using ground-based surveillance, the court didn't see anything in the system that amounted to persistent, intrusive surveillance with Fourth Amendment implications.

The case went to the Fourth Circuit Court of Appeals. During oral arguments, the judges appeared mostly sympathetic to the city's arguments, claiming it was almost impossible to violate the rights of unidentified pixels whose movements have been observed on public streets.

The Appeals Court has delivered its decision [PDF]. And, as expected, it has declared the program to be Constitutional. The opinion opens with something suggesting the judges feel the ends justify the means.
Back in March you may remember that we wrote about yet another ridiculous SLAPP suit filed by the Donald Trump campaign (using lawyer Charles Harder, who, you may also remember, was the lawyer in the lawsuit against us as well). Harder's track record in these performative cases continues to be... rather lacking. Last week, you may have missed that, amidst all the other legal disputes Trump's campaign was losing, this particular case was also dismissed -- though not quite as easily as I had expected. And it does leave it open for an amended complaint to be filed, though I still can't see how it passes muster.

If you don't recall, this particular lawsuit was about an opinion piece on CNN by Larry Noble, a former general counsel for the Federal Election Commission, who laid out a detailed analysis of the Mueller report about Russian interference in the 2016 election, and how it likely violated federal elections laws. The article expressed Noble's opinions, based on clearly disclosed facts. And that, by definition, should not be defamatory. District court judge Michael L. Brown -- who was appointed to the bench by Trump -- rejected the complaint, but not because it was opinion and therefore not defamatory.

The case focuses on a single statement in Noble's CNN article:
by Konstantinos Komaitis and Farzaneh Badiei
In August 2012, YouTube briefly took down a video that had been uploaded by NASA. The video, which depicted a landing on Mars, was caught by YouTube's Content ID system as a potential copyright infringement case but, like everything else NASA creates, it was in the public domain. Then, in 2016, YouTube's automated algorithms removed another video, this time a lecture by a Harvard Law professor, which included snippets of various songs ranging from 15 to roughly 40 seconds. Of course, use of copyrighted material for educational purposes is perfectly legal. Examples of unwarranted content takedowns are not limited to only these two. Automated algorithms have been responsible for taking down perfectly legitimate content that relates to marginalized groups, political speech or the mere existence of information that relates to war crimes.

But, the over-blocking of content through automated filters is only one part of the problem. A few years ago, automated filtering was somewhat limited in popularity, being used by a handful of companies; but, over the years, they have become increasingly the go-to technical tool for policy makers wanting to address any content issue -- whether it is copyrighted or any other form of objectionable content. In particular, in the last few years, Europe has been championing upload filters as a solution for the management of content. Although never explicitly mentioned, upload filters started appearing as early as 2018 in various Commission documents but became a tangible policy tool in 2019 with the promulgation of the Copyright Directive.

Broadly speaking, upload filters are technology tools that platforms, such as Facebook and YouTube, use to check whether content published by their users falls within any of the categories for objectionable content. They are not new -- YouTube's Content ID system dates back to 2007; they are also not cheap -- YouTube's Content ID has cost a reported $100 million to make. Finally, they are ineffective, as machine learning tools will always over-block or under-block content.

But, even with these limitations, upload filters continue to be the preferred option for content policy making. Partly, this is due to the fact that policy makers depend on online platforms to offer technology solutions that can scale and can moderate content en masse. Another reason is that elimination of content and takedowns is perceived to be easier and has an instant effect. In a world where more than 500 hours of content are uploaded hourly on YouTube or 350 million photos are posted daily on Facebook, technology solutions such as upload filters appear more desirable than the alternative of leaving the content up. A third reason is the computer-engineering bias of the industry. What this means is that typically when you build programmed systems, you follow a pretty much predetermined route: you identify a gap, build something to fill that gap (and, hopefully, in the process make money at it) and then you iteratively fix bugs in the program as they are uncovered. Notice that in this process, the question of whether the problem is best solved through building software is never asked. This has been the case with the 'upload filters' software.

As online platforms become key infrastructure for users, however, the moderation practices they adopt are not only about content removal. Through such techniques, online platforms undertake a governance function, which must ensure the productive, pro-social and lawful interaction of their users.
Governments have depended on platforms carrying out this function for quite some time but, over the past few years, they have become increasingly interested in setting the rules for social network governance. To this end, there seems to be a trend of several new regional and national policies that mandate upload filters for content moderation.

What is at stake?

The use of upload filters and the legislative efforts to promote them and make them compulsory is having a major effect on Internet infrastructure. One of the core properties of the Internet is that it is based on an open architecture of interoperable and reusable building blocks. In addition to this open architecture, technology building blocks work together collectively to provide services to end users. At the same time, each building block delivers a specific function. All this allows for fast and permissionless innovation everywhere.

User-generated-content platforms are now inserting deep in their networks automated filtering mechanisms to deliver services to their users. Platforms with significant market power have convened a forum called the Global Internet Forum to Counter Terrorism (GIFCT), through which approved participants (but not everyone) collaborate to create shared upload filters. The idea is that these filters are interoperable amongst platforms, which, prima facie, is good for openness and inclusiveness. But, allowing the design choices of filters to be made by a handful of companies turns them into de facto standards bodies. This provides neither inclusivity nor openness. To this end, it is worrisome that some governments appear keen to empower, and perhaps anoint, this industry consortium as a permanent institution for anyone who accepts content from users and republishes it. In effect, this makes an industry consortium, with its design assumptions, a legally-required and permanent feature of Internet infrastructure.

Convening closed consortiums, like the GIFCT, combined with governments' urge to make upload filters mandatory, can violate some of the most important Internet architecture principles: ultimately, upload filters are not based on collaborative, open, voluntary standards but on closed, proprietary ones, owned by specific companies. Therefore, unlike traditional building blocks, these upload filters end up not being interoperable. Smaller online platforms will need to license them. New entrants may find the barriers to entry too high. This, once again, tilts the scales in favor of large, incumbent market players and disadvantages an innovator with a new approach to these problems.

Moreover, mandating GIFCT tools or any other technology determines the design assumptions underpinning that upload filter framework. Upload filters function as a sort of panopticon device that is operated by social media companies. But, if the idea is to design a social media system that is inherently resistant to this sort of surveillance, then upload filters are not going to work because the communications are protected from users. In effect, that means that mandating GIFCT tools further determines what sort of system design is acceptable or not. This makes the regulation invasive because it undermines the "general purpose" nature of the Internet, meaning some purposes get ruled out under this approach.

The current policy objective of upload filters is twofold: regulating content and taming the dominance of certain players. These are legitimate objectives.
But, as technology tools, upload filters fail on both counts: not only do they have limitations in moderating content effectively, but they also cement the dominant position of big technology companies. Given the costs of creating such a tool and the requirement for online platforms to have systems that ensure the fast, rigorous and efficient takedown of content, there is a trend emerging where smaller players depend on the systems of bigger ones.

Ultimately, upload filters are imperfect and not even an effective solution to our Internet and social media governance problems. They don't reduce the risk of recidivism and only eliminate the problems, not their recurrence. Aside from the fact that upload filters cannot solve societal problems, mandated upload filters can adversely affect Internet architecture. Generally, the Internet's architecture can be impacted by unnecessary technology tools, like deep packet inspection, DNS blocking or upload filters. These tools produce consequences that run counter to the benefits expected from the Internet: they compromise its flexibility and do not allow the Internet to continuously serve a diverse and constantly evolving community of users and applications. Instead, they require significant changes to the networks in order to support their use.

Overall, there is a real risk that upload filters become a permanent feature of the Internet architecture and online dialogue. This is not a society that any of us should want to live in -- a society where speech is determined by software that will never be able to grasp the subtlety of human communication.

Konstantinos Komaitis is the Senior Director, Policy Strategy at the Internet Society.

Farzaneh Badiei is the Director of the Social Media Governance Initiative at Yale Law School.
The never-ending amount of election-related litigation keeps on coming. The Trump campaign is still heavily invested in lawsuits -- a practice it started before the election and hasn't scaled back now that its boy has been handed an L.

Georgia remains a hotly contested state, thanks in part to pressure applied by the outbound President and his many minions. It will recount all five million votes, which Trump appears to believe will reverse Biden's 14,000-vote lead.

Georgia has long been a victim of its Governor, dating back to his days as the Secretary of State. During Brian Kemp's tenure as an elected official, voting in Georgia has been little more than his political plaything. Issues with the state's voting tech were ignored in favor of Kemp's indulgence in wild speculation, culminating with his baseless claims that the Democratic National Committee had hacked the state's voter registration system. An investigation by the Georgia Bureau of Investigation found no evidence supporting Kemp's ridiculous assertions.

Now, as Courthouse News reports, the DNC is suing Kemp over his election-related bullshit. This isn't a defamation suit, even though it's possible to see that claim being raised. Instead, the DNC alleges the then Secretary of State violated federal election laws with his claims of DNC hacking and his decision to air his speculation hours before the polls opened in 2018.

The lawsuit [PDF] opens with this stinging sentence, which highlights one of the many problems with allowing Brian Kemp to oversee the 2018 election.
The Complete 2020 CompTIA Certification Training Bundle has 14 courses to teach you IT Fundamentals, Infrastructure, and Cybersecurity. The courses cover what you need to know in order to pass various CompTIA certification exams such as A+, Network+, Server+, Linux+, and more. You'll learn hardware basics, troubleshooting, software installation, security, and networking. It's on sale for $69, and you can save an additional 15% on this and other deals throughout the month using the code SAVE15NOV.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We've long mentioned how incumbent ISPs like AT&T and Comcast have spent millions of dollars quite literally buying shitty, protectionist laws in around twenty states that either ban or heavily hamstring towns and cities from building their own broadband networks. In some cases these laws ban municipalities from even engaging in public/private partnerships. It's a scenario where ISPs get to have their cake and eat it too; they often refuse to upgrade their networks in under-served areas (particularly true among telcos offering DSL), but also get to write shitty laws preventing these under-served towns from doing anything about it.

This dance of dysfunction has been particularly interesting in Colorado, however. While lobbyists for Comcast and CenturyLink managed to convince state leaders to pass such a law (SB 152) in 2005, the legislation contains a provision that lets individual Colorado towns and cities ignore the measure with a simple referendum. With frustration mounting over sub-standard broadband and awful customer service, more than 100 towns and cities have done so thus far. And that was before a pandemic highlighted the urgent importance of broadband for public safety.

The trend continued this month, when the vast majority of Colorado voters (82%) voted to opt out of the state law restricting community broadband. According to the Institute for Local Self-Reliance, several other communities voted along the same lines, and more than 140 Colorado communities have done the same in the fifteen years since the Colorado law was passed:
Five Years Ago

This week in 2015, we looked at early warnings of the EU's all-out attack on hyperlinks, while the silly Monkey Selfie lawsuit was winding forward, and a new surprise player entered the copyright fight over Happy Birthday. The MPAA's attempt to sneak SOPA in the back door was rejected, but the agency was getting cozy with the House Judiciary Committee. And we looked at the unsurprising trio of industries that most loved the TPP agreement.

Ten Years Ago

This week in 2010, the USPTO was going in the wrong direction when it came to standards for patents, while we were sad to see the MIT Tech Review come out in favor of patent trolls. We saw some examples of overly draconian punishment with a sentence of 30 months in prison and over $50,000 in fines for a DDoS attack, an arrest in Japan for a leak of a new Pokemon character, and a university promising to report file sharing to police and warning students about five-year prison terms — so it was a good week to also take a look at just how insane statutory damages for file sharing are.

Fifteen Years Ago

This week in 2005, Sony was not-really-dealing with the fallout from the previous week's rootkit fiasco. As a class-action lawsuit was being prepared, the company was flubbing its media response and claiming rootkits aren't a problem because most people don't know what they are — never mind the fact that virus writers were already taking advantage of Sony's technology to hide their tracks. This prompted some to take a deeper dive into Sony's EULA, and find some ridiculous provisions like requiring you to delete all your music if you go bankrupt. Finally, at the end of the week, the company was browbeaten into "temporarily" stopping production of the rootkits, though apologies or admissions of wrongdoing were not forthcoming.
The election is over and, no matter the current administration's flailings, Joe Biden is now President-Elect. It was, well, quite a campaign season, filled with loud interruptions, a deluge of lies, and some of the most bizarre presidential behavior on record. And, rather than running on Trump's own record, the campaign mostly went 100% negative, filling the digital space with all kinds of hits on Biden.

One of those was a crudely put together video that showed a Trump/Pence train zipping by on some tracks, with a Biden hand-car chugging along behind. On the Biden train car were fun references to smelling hair and other childish digs. Some clips of Biden speaking made up the audio for the spot, along with the 1983 hit song "Electric Avenue." Tweeted out on Trump's personal Twitter account, it turns out that nobody had licensed the song for the video, leading Eddy Grant to sue the President.

Trump's defense in a motion to dismiss is... fair use. How? Well...
Summary:

Shortly after protests began in Kenosha, Wisconsin over the shooting of Jacob Blake by police officers, armed citizens began showing up, ostensibly to protect businesses and homes from violent protesters. One of these citizens was Kyle Rittenhouse, an Illinois native who traveled to Kenosha as a self-appointed peacekeeping force.

Following an altercation at a Kenosha car dealership, Rittenhouse shot three protesters, killing two of them. Shortly after it became apparent Rittenhouse was going to be criminally charged, fundraisers for his legal defense were set up in his name at GoFundMe.

As controversy continued to swirl, GoFundMe deleted the fundraisers from its platform and refunded all donations. When asked for the reason, GoFundMe stated the fundraisers had violated its terms of service. While nothing was specifically cited by GoFundMe as the violation triggering the removals, its terms of service allow it to remove "any other activity" the site deems "unacceptable."

Decisions to be made by GoFundMe:
Yesterday we noted that TikTok had made a filing with the government asking what the fuck was going on with the supposed ban on its application, which was due to go into effect this week. While a court had issued an injunction saying the Commerce Department couldn't put the ban into effect, the Trump administration basically hadn't said anything since then, and the ban was set to go into effect yesterday.

Late yesterday, the Commerce Department put out a notice basically saying that it's complying with the injunction issued by the court, and therefore not implementing the executive order and the ban:
Early in the pandemic, one of our MediaJustice Network members reached out to us in hopes we could support a group of high school students in Baltimore who were trying to amplify their campaign. The students are leaders in a Latinx and immigrant student organization called Students Organizing for a Multicultural and Open Society (SOMOS), and this was their first time organizing for digital equity.

When school ended last year, SOMOS realized that many of their fellow Baltimore city schoolmates who’d relied on Comcast’s Internet Essentials discount program didn’t have a connection fast or reliable enough for online school. Whenever they could get into virtual classes, they’d often get kicked off multiple times a day and sometimes multiple times during a single class. Households with multiple students or family members working from home had to schedule who could be online, when and for how long. Families were put in impossible situations, forced to negotiate whose education or work was more important, and who would have to sacrifice and fall behind.

Philadelphia MediaJustice Network member Movement Alliance Project (MAP) had been working with allies and students on a #ParkingLotWifi campaign highlighting stories that have become all too familiar in 2020: parents and students sitting in parking lots (like Taco Bell) just to get access to WiFi for online school. MAP’s campaign targeted Comcast, demanding the telecom giant open up their residential wifi hotspots to the public so students and community members could access the Internet during the pandemic from the safety of their homes. The SOMOS students adopted this demand and added two of their own addressing Comcast’s Internet Essentials plan:
Yesterday afternoon the Twitter account of the US's Immigration and Customs Enforcement (ICE) briefly disappeared from the internet. Was it... anti-conservative bias? Nope. Was it ICE doing more stupid shit in locking up children and separating them from their parents? Nope. Was it ICE's willingness to seize domain names with no evidence, claiming "counterfeit"? Nope. It was that ICE had changed the "birthday" on its account to make it so that its "age" was less than 13. Thanks to the ridiculousness of the Children's Online Privacy Protection Act (COPPA), which has basically served only to have parents teach their kids it's okay to lie online in order to use any internet service, most websites say you can't use the service if you're under 13 years old. ICE changed its "birthdate" so that its stated age was less than 13, thereby making it... shall we say, something of a "stranded minor," and Twitter automatically, well, "separated it" from its account.

Might be nice for ICE to get a sense of how that feels.

Of course, what this really highlights is the idiocy of COPPA and how nearly every website tries to deal with its requirements. As we noted, Twitter, like many internet sites, outright bars kids under 13 to avoid COPPA's rules. Twitter does note in its forms that you need to put in your own date of birth, even if your account is "for your business, event, or even your cat."

But... that's bizarre. For accounts like this, whose birthday matters? Many such accounts are managed by multiple people. Whose birthday gets put in there? The answer is, of course, that a birthday is just made up. And, if you make it up, apparently you need to make up one that is older than 13 to avoid this COPPA-based "separation."

Anyway, ICE figured stuff out and made a little joke about it:

Frankly, I'd rather they focused on actually helping asylum seekers find protection in our country rather than tossing them out, and maybe put some of that effort into reuniting the 666 kids with the families the government has lost track of.
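To make the mechanism concrete, here's a minimal, purely illustrative sketch of the kind of automated age gate that appears to have tripped here. This is not Twitter's actual code; the function names and the specific dates are assumptions, and only the under-13 threshold comes from COPPA as described above.

from datetime import date
from typing import Optional

# Illustrative only: the real logic isn't public. The 13-year cutoff is the
# COPPA-driven threshold described above; everything else here is assumed.
COPPA_MIN_AGE = 13

def age_on(birthdate: date, today: date) -> int:
    # Whole years between birthdate and today.
    had_birthday_this_year = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday_this_year else 1)

def should_lock_account(birthdate: date, today: Optional[date] = None) -> bool:
    # Automated COPPA-style gate: lock any account whose stated birthdate
    # implies an age under 13, whether it belongs to a person, a cat, or ICE.
    today = today or date.today()
    return age_on(birthdate, today) < COPPA_MIN_AGE

# Hypothetical dates: a stated birthdate that implies an "age" under 13
# trips a check like this, and the account gets locked automatically.
print(should_lock_account(date(2010, 4, 1), today=date(2020, 11, 12)))  # True

The point of the sketch is simply that nothing in such a check knows or cares who is behind the account; it just compares a stated birthdate to a cutoff.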
The Complete MATLAB Programming Bundle has 7 courses to help you get started with MATLAB. MATLAB is a leading software package for numerical computing and algorithm development, widely used by engineers, programmers, researchers, teachers, colleges, and entrepreneurs. You'll begin by using it on some elementary mathematics problems, move on to producing 2D and 3D graphs, and then build your own algorithms. You'll learn how to simulate power electronic circuits, solar energy systems, synchronous generators, and more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
One of the few parts of the federal government that hasn't dissolved into a complete partisan trash heap is the newly created Cybersecurity and Infrastructure Security Agency (CISA), a division of Homeland Security that was created as part of the problematic CISA bill five years ago. While we were disappointed in many aspects of the bill itself, as an organization CISA has done some pretty good work in coordinating and dealing with cybersecurity threats. Throughout the tech industry I've heard nothing but good things about CISA as a government organization and about its director Chris Krebs (as well as the rest of CISA's staff). Indeed, I've heard from many companies preparing for this year's election how useful CISA has been in providing clear and useful information regarding potential cybersecurity threats.

Relatedly, CISA has an excellent Rumor Control page that debunks various myths about potential cybersecurity risks regarding the election. It's very good and very thorough. And, in fact, it debunks many of the myths that various Trumpists have been spreading around social media in pretty clear and concise language:

It appears that the White House has finally realized this exists and decided it's not a good look for its own organization to be debunking the very same myths that the White House itself is trying to boost and spread as real. The White House apparently asked Krebs to have the page changed, and Krebs rejected the request. He's now telling colleagues he expects to be fired for standing up for the truth and against nonsense about election hacks.
With the Biden victory, FCC boss Ajit Pai is being urged to pause all controversial rulemaking, including the agency's absurd and now likely doomed attempt to regulate social media and undermine Section 230. With a Biden win, Pai's guaranteed to lose his spot as top commissioner, and is likely to exit the agency altogether.

Energy and Commerce Committee heads Frank Pallone and Mike Doyle this week wrote to Federal Communications Commission (FCC) Chairman Ajit Pai and Federal Trade Commission (FTC) Chairman Joseph Simons, urging them to, as is tradition, pause any controversial rulemaking in preparation for the incoming Biden administration:
The Trump Campaign is back in court, hoping to reclaim a presidency Donald Trump has lost. It spent plenty of time in court prior to the election, hoping to prevent as many people as possible from voting. Now, it's doing the same thing, insisting (without evidence) there's voter fraud everywhere.

Immediately following Election Day, the Trump campaign opened its own voter fraud hotlines. People who thought they observed voting fraud were encouraged to call the campaign or submit sworn statements via a handful of websites. Both offerings were immediately swamped by pranksters and other non-fans of Trump, tying up phone lines and filling the webform coffers with useless things like, say, the script from "Bee Movie."

Undeterred by a lack of credible fraud accusations, the Trump campaign still attempted to submit some of its mostly unvetted webform garbage as "evidence" in its Arizona lawsuit. As Adam Klasfield reports, the judge wasn't impressed by the campaign's attempt to portray a bunch of statements from internet randos as something worth the court's time and attention.
In the aftermath of our recent election, with all of the exuberance on one side and the laughable claims of stolen elections on the other, one underlying concern discussed before the election seems to have gone by the wayside: what happens in the last days of the Trump presidency if he loses? You heard the most prevalent concerns in the immediate runup to Election Day, which typically amounted to wondering aloud what unhinged or corrupt shit Dear Leader would get up to once his Dear-Leadership suddenly carried an expiration date. It was, frankly, a fair concern to have.

But there is a flip side to that fear: what will other countries do in the final days of the Trump presidency, particularly those that have gotten used to his lax attitude towards authoritarianism, human rights abuses, and most of the goings-on around the world? Would Russia attempt to gobble up more previously-Soviet territory, a la Crimea? Would Saudi Arabia carry out more brutal attacks on journalists critical of the Saudi Royal Family? Would China give up its slow-crawl dismantling of democracy in Hong Kong and just try to take over?

Well, on that last one at least, we now know the answer is yes. In fact, it was only in the wake of the American election being called for President Elect Biden that China rushed through a resolution to oust four pro-democracy members of Hong Kong's legislature, seemingly for being too anti-Beijing.
Back in July, we noted that after years of living with a pathetically weak anti-SLAPP law, the NY legislature had finally approved a more significant anti-SLAPP bill. It's incredible that it has taken this long, given that much of the media industry is based in New York and has for so many years been open to a barrage of ridiculous SLAPP suits, since the old law only covered speech made in the process of petitioning the government. Also, unlike most anti-SLAPP laws, New York's did not have automatic fee shifting, which would make the vexatious litigant pay the legal costs of the defendant.

For unclear reasons, the bill sat on Governor Andrew Cuomo's desk unsigned for months. However, that finally ended yesterday, as he has now signed the bill into law:
This recent decision [PDF] by the First Circuit Court of Appeals details a law enforcement-enabled nightmare -- one that saw the plaintiff shot by the same person who had raped her earlier… and someone the police were supposed to be trying to locate. So much for the "Thin Blue Line." The line never materialized here; in fact, the officers involved took affirmative steps to erase what little line there actually was.

There's a lot to take in here. It goes from horrific to terrifying to cataclysmic in a hurry. And, according to these allegations -- supported by officers' own statements and reports -- the detectives handling the case seemingly went out of their way to make things worse for the woman reporting a rape.

Here's how it starts. And it's hard to believe it gets worse from here. But it does.
In many ways, Zoom is an incredible success story. A relative unknown before the pandemic, the company saw its userbase explode from 10 million pre-pandemic to 300 million users worldwide as of last April. One problem: like so many modern tech companies, its security and privacy practices weren't up to snuff. Researchers found that the company's "end-to-end encryption" didn't actually exist. The company also came under fire for features that let employers track employees' attention levels, and for sharing data with Facebook in ways that weren't revealed in the company's privacy policies.

While Zoom has taken great strides to fix most of these problems, it received a bit of a wrist slap from the FTC this week for misleading marketing and "a series of deceptive and unfair practices that undermined the security of its users." A settlement (pdf) and related announcement make it clear that the company repeatedly misled consumers with its marketing, particularly on the issue of end-to-end encryption:
A few weeks ago we had a story about the RIAA getting GitHub to remove YouTube-dl using a bizarre form of copyright takedown. The RIAA claimed that the tool violated rules against circumventing DRM. Over at Freedom of the Press Foundation, Parker Higgins has highlighted how often this tool is used legitimately for journalism purposes, which is important. Under the Betamax standard, tools with substantial non-infringing uses should not run afoul of copyright law. Higgins' writeup is reposted here with permission.

The popular free software project “YouTube-dl” was removed from GitHub following a legal notice from the Recording Industry Association of America claiming it violates U.S. copyright law. According to the RIAA, the tool's “clear purpose” includes reproducing and distributing “music videos and sound recordings... without authorization.”

In fact, YouTube-dl is a powerful general purpose media tool that allows users to make local copies of media from a very broad range of sites. That versatility has secured it a place in the toolkits of many reporters, newsroom developers, and archivists. For now, the code remains available to download through YouTube-dl's own site, but the disruption of its development hub and the RIAA's saber-rattling jeopardize both the future of the software and the myriad journalistic workflows that depend on it.

Numerous reporters told Freedom of the Press Foundation that they rely on YouTube-dl when reporting on extremist or controversial content. Øyvind Bye Skille, a journalist who has used YouTube-dl at the Norwegian Broadcasting Corporation and as a fact checker with Faktisk.no, said, “I have also used it to secure a good quality copy of video content from YouTube, Twitter, etc., in case the content gets taken down when we start reporting on it.” Skille pointed to a specific instance of videos connected to the terrorist murder of a Norwegian woman in Morocco. “Downloading the content does not necessarily mean we will re-publish it, but it is often important to secure it for documentation and further internal investigations.”

Justin Ling, a freelance investigative reporter who often covers security and extremism for outlets including Foreign Policy and VICE News, described the scenario of reporting on the rise of conspiracy theories as the relevant posts face removal and bans. YouTube “has been a crucial hub for QAnon organizing and propaganda: I've often used YouTube-dl to store those videos for my own benefit. Good thing, too, as YouTube often, without warning, mass-removes that sort of content, which can be ruinous for those of us using those YouTube accounts to trace the spread of these conspiracies.”

In other cases, local copies are necessary to conduct more rigorous analysis than is possible online, and journalists turn to YouTube-dl for the highest quality copy of the video available. John Bolger, a software developer and systems administrator who does freelance and data journalism, recounted the experience of reporting an award-winning investigation as the News Editor of the college paper the Hunter Envoy in 2012. In that story, the Envoy used video evidence to contradict official reports denying a police presence at an on-campus Occupy Wall Street protest.

“In order to reach my conclusions about the NYPD’s involvement... I had to watch this video hundreds of times—in slow motion, zoomed in, and looping over critical moments—in order to analyze the video I had to watch and manipulate it in ways that are just not possible” using the web interface.
YouTube-dl is one effective method for downloading the video at the maximum possible resolution.

Jake, a member of the Chicago-based transparency group Lucy Parsons Labs, uses YouTube-dl to save copies of recorded incidents involving police use of force or abusive behavior. Once copied, the videos can be stored in an archive or modified before publication, such as by blurring the faces of bystanders or victims. “We have sometimes been able to take a closer look at individual frames after downloading with YouTube-dl to identify officers when they are not wearing their badges intentionally or obfuscating them with things to avoid accountability.”

One misinformation researcher told Freedom of the Press Foundation about using YouTube-dl to create a baseline for machine learning models developed to do automated real-time fact-checking. “While our production systems are designed to be used on live video streams, it's not feasible to test on live video. YouTube-dl allows us to greatly increase the speed of our research development and allow us to be able to actually test our software on a day-to-day basis, not just when politicians happen to have a speech.”

Similarly, a number of reporters described using YouTube-dl for nuts-and-bolts workflows such as transcribing videos they’re covering. Jeremy Gray, a data scientist with The Globe and Mail, described a Slack tool he provides to journalists to allow them to automatically transcribe their own interviews and, until Friday, to transcribe YouTube videos from a URL. “It used YouTube-dl, and now that part is broken.” Another journalist, who works at a “small-ish public media newsroom,” described a common situation where a reporter needs “a recording of a public meeting for a story but is on deadline and doesn’t want the hassle of recording the parts they want it in real time or wants the full file for something like AI transcription.”

That same journalist described how YouTube-dl helps address the challenge of incorporating user-generated content on-air. In the immediate aftermath of an earthquake, their newsroom began expansive continuous coverage and sought to include photos and videos that locals had recorded. “We are scrupulous about making sure we get permission (and the person granting it actually owns the copyright), but especially right after an earthquake asking people to send the video to us specifically can be a much bigger ask than just allowing us to use it (if they even have a recording, which they probably don’t for a livestream), so often after getting permission I’d just download it straight from social media to transcode for TV.”

That use case is common. Reporters frequently need high-fidelity copies of video or audio tracks for publication or reporting. Ling, the freelance security reporter, said he also uses YouTube-dl to “get the best audio quality” when downloading copies of press conferences or news events “to grab snippets of audio for use in podcasts or radio work.”

Finally, numerous reporters described using YouTube-dl to download copies of their own works. Freedom of the Press Foundation has previously worked to help writers preserve portfolio copies of their articles, and to help full news archives stay online when the outlet itself is under threat. YouTube-dl plays an important role in that ecosystem as well.

GitHub has not publicly commented on its removal of one of its most popular repositories.
Clearly, YouTube-dl in particular and the ability to download and manipulate online videos in general are an important part of the work of journalism and contemporary media literacy. Given the important role that YouTube-dl plays in public interest reporting and archiving, the RIAA’s efforts to have the tool removed represent an extraordinary overreach with the possibility for dramatic unforeseen consequences. We urge the RIAA to reconsider its threat, and GitHub to reinstate the repository in full.
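For readers who haven't used the tool, here's a minimal sketch of the "highest quality copy" workflow the reporters above describe, using youtube-dl's documented Python embedding interface. The URL and the output filename template are placeholders, not anything from the reporting above, and merging the best separate video and audio streams assumes ffmpeg is installed.

import youtube_dl  # pip install youtube_dl; the same project also ships the youtube-dl CLI

# Options roughly matching the journalistic workflows described above:
# grab the highest-fidelity copy available and keep a predictable filename
# for archiving. Merging separate video+audio streams requires ffmpeg.
ydl_opts = {
    "format": "bestvideo+bestaudio/best",
    "outtmpl": "%(uploader)s - %(title)s.%(ext)s",
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    # Placeholder URL; youtube-dl supports a very broad range of sites, not just YouTube.
    ydl.download(["https://www.youtube.com/watch?v=EXAMPLE_VIDEO_ID"])

The command-line equivalent is roughly: youtube-dl -f "bestvideo+bestaudio/best" followed by the URL, which is the sort of one-liner that slots into the transcription and archiving scripts mentioned above.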
Imagine your job done quickly and efficiently, your sales processes streamlined, and your appointment schedule finally aligned. Taskeo makes it possible. Designed to improve the way your team works, Taskeo is a fully equipped toolkit packed with everything you need to boost your productivity and increase your revenue. The suite comes with Client Management (CRM), Project Management, Time Tracking and Billing, Appointment Scheduler, and Email Marketing tools. Now you can get more work done faster and run your business from just one platform. A variety of subscriptions are on sale, ranging from 1 user for 1 year for $50 to 10 users for an unlimited time for $379, so you can find the right license for your needs. Use the code SAVE15NOV for an additional 15% off this and other items storewide.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
A few months back we highlighted the insane lengths the WHO was going to in an effort to silence Taiwan, despite that country's extraordinarily successful efforts to combat COVID-19. Yes, yes, everyone understands the geopolitical mess: the Chinese government refuses to recognize that Taiwan is an independent country (which everyone who lives in reality knows it is), and various organizations and governments have to pretend otherwise to keep the Chinese government happy.

But come on. People are dying. Denying the independent existence of Taiwan to continue playing pretend is silly beyond all belief.

And yet here we are. The WHO has apparently decided to play by China's rules and refuses to acknowledge Taiwan. And now it's taken things up a notch. Journalist William Yang noticed that when he tried to post any comment that included the word "Taiwan" on a certain WHO post, the comment failed.
We've repeatedly made it pretty clear that President Trump's effort to ban TikTok is little more than a performative, xenophobic, idiotic mess. For one, the effort appears more focused on trying to get Trump-allied Oracle a new hosting deal than on any serious concern about consumer privacy and security. Two, banning a teen dancing and lip syncing app does jack shit in terms of thwarting China or protecting U.S. consumer privacy, since the U.S. telecom, app, and adtech markets are largely an unaccountable privacy mess, making it trivial to obtain this kind of data elsewhere.

Further highlighting the performative nature of the proposed ban, TikTok this week effectively stated that Trumpland appears to have forgotten about the proposed ban entirely. TikTok filed a petition this week in a US Court of Appeals calling for a review of actions by the Trump administration's Committee on Foreign Investment in the United States (CFIUS), pointing out that the deadline for ByteDance to sell off its US assets over national security concerns came and went this week with no action from Trumpland or word on any extension.

Apparently the whole TikTok thing fell off the radar as the administration focuses on pretending it didn't lose the election. Whoops:
When cops can't do the brutalization themselves, they send in man's best friend. Best friend to The Man, that is. K-9 "officers" aren't just for illegally extending traffic stops. They're also capable of maiming people for the offense of not being respectful/subdued enough for an officer's liking.A recent investigative report by The Marshall Project shows cops are more than willing to use police dogs to inflict pain on arrestees, even when there's nothing about the situation that demands such a violent response. Just being a suspected criminal is enough to trigger dog handlers, who appear to feel any amount of damage to another human being is justified.
There are a great many interesting arguments we tend to have over both the purpose of copyright law and how effectively its current application aligns with that purpose. Still, we are on fairly solid legal footing when we state that the main thrust of copyright was supposed to be to drive more and better content to the public. Much of the disagreement we tend to have with naysayers revolves around whether ever-expanding rights coupled with protectionist attitudes truly result in more and better content for the public. We, to a large extent, say the current copyright bargain is horribly one-sided against the public interest. Detractors say, essentially, "nuh-uh!"

But if one were to distill the problems with the current state of copyright to their most basic forms, you would get No One Lives Forever. The classic PC shooter/spy game was released way back in 2000, times of antiquity in the PC gaming space. It was a critically acclaimed hit, mixing Deus Ex-style shooter missions, spycraft, and an aesthetic built on 1960s classic spy films. And, as RockPaperShotgun reminds us, No One Lives Forever celebrated its 20th birthday this November.

If you remember the game fondly, or perhaps if you never played it and are curious as to why there's so much love for the game, you might be thinking about going and getting a copy for yourself to play. Well, too bad. You can't.