Back in February, $130 "smart" pet feeders from a company named PetNet simply stopped working. When customers reached out to the company to complain, they hit a brick wall: customer service simply wasn't functioning. Customers say emails and phone calls weren't returned (or wound up undeliverable), and the company simply refused to answer annoyed customer inquiries on Twitter or Facebook.
Fast forward to late March and April, and PetNet customers once again complained to outlets like Ars Technica that the company's products didn't work and its customer support was still nowhere to be found. Customers who complained were now being shoveled off to a third-party contractor with 16 followers on Twitter which, like the company that employed it, didn't appear capable of offering any help:
A few days back I saw a friend share an incredible video on YouTube of a guy in Milwaukee named Wes Tank rapping Dr. Seuss's "Fox in Socks" over Dr. Dre beats. Even if you think that sounds great, the final result is even better than you expect:
That video went super viral and is currently at over 3.5 million views on YouTube. Wes has since been adding more and more Seuss-over-Dre videos to his channel, and each one is incredible. Because I can't pick favorites, here are a few. You'll want to watch them all.
Since seeing these videos (multiple times), I've read and watched a few different interviews with him. He says he basically came up with this idea on a whim five or so years ago, tried it at a live show, and got a tremendous response. Since then, he's done it a few times and it's a crowd favorite, but he never really had time to make the videos for it until, you know, the pandemic hit and the work he was planning to do with his recently created video production company, TankThink, got put on hold.
Of course, as someone who has promoted and supported the concept of mashup or remix culture for decades, this reminded me (yet again) of why being able to do this kind of creativity is so important. Wes himself has talked about the joy and value these videos are creating:
When you lose a lawsuit -- as former Sheriff Joe Arpaio did last year -- you have a few options. Arpaio sued a few news outlets for defamation, alleging their reference to him as a convicted felon had done over $300 million in damage to his pristine reputation.
Represented by Larry Klayman, Arpaio came away with a loss. The DC federal court not only said Arpaio failed to state facts pointing to actual malice by the publications, but that Arpaio failed to plead any facts at all. That's classic Klayman lawyering: go light on facts, heavy on rhetoric, and try to avoid being hit with sanctions and/or having your license suspended for your antics both on and off the court.
The correct thing to do when faced with a dismissal is to file a motion to amend the lawsuit or petition the DC appeals court for a second look. Arpaio and Klayman did neither of these things. Instead, they dropped one defendant (CNN) and filed essentially the same lawsuit in the same court that had dismissed Arpaio's previous lawsuit with prejudice. (h/t Adam Steinbaugh)
Having had its time wasted twice with the same lawsuit, the court isn't happy with Arpaio or his representation. This decision [PDF] is even shorter than the 11-page dismissal Arpaio received on his first pass. The court says there's nothing new here and this isn't the way the court system works -- something Arpaio's lawyer should know but apparently chose to ignore.
The 2020 Adobe Graphic Design School has 3 courses to help you learn more about top Adobe apps and elements of graphic design. The first course covers Adobe Photoshop and all aspects of the design process from the importing of images right through to final production considerations for finished artwork. The second course covers Adobe Illustrator and will lead you through the design process, where you’ll learn a variety of ways to produce artwork and understand the issues involved with professional graphic design. The third course will help you discover how to harness the power of Adobe InDesign to develop different types of documents, from simple flyers to newsletters, and more. The bundle is on sale for $49.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
It's been over a year since Devin Nunes kicked off his vexatious campaign to sue various critics. As you are probably aware, in that year he's sued news organizations, journalists, political operatives, critics, and, most famously, a satirical internet cow.
Most (though not all) of those lawsuits have been filed in Virginia, which has a notoriously weak anti-SLAPP law. He's also filed a lawsuit in Iowa, which has no anti-SLAPP law. He briefly filed one case in California -- which does have an anti-SLAPP law -- but (1) he had his campaign file it rather than himself personally, and then (2) quickly dropped it.
The barrage of frivolous, censorial SLAPP suits in Virginia, though, inspired lawmakers there to push for a good anti-SLAPP law. The two legislative houses were unable to come to an agreement on a consensus version before their brief 2020 legislative session came to an end, though they promised to take it up again in 2021.
What's interesting is that a new article in the Roanoke Times about what happened with the law explains that once the law does (hopefully) pass next year, it may apply back to Nunes' series of vexatious lawsuits -- meaning he might be on the hook for the legal bills of everyone he sued. Normally, laws cannot apply retroactively, but as the article notes, this wouldn't be about making something illegal retroactively, but rather about making a procedural change to how cases play out, meaning it could apply retroactively:
We've long discussed how the Pai FCC's net neutrality repeal was plagued with millions of fraudulent comments, many of which were submitted by a bot pulling names from a hacked database of some kind. Millions of ordinary folks (myself included) had their identities used to support Pai's unpopular plan, as did several Senators. The Trump FCC stonewalled both law enforcement and journalist inquiries into who was behind the comments, and why the FCC didn't lift a finger to either stop them or to help identify those responsible.
Numerous journalists like Jason Prechtel have submitted FOIA requests for more data (server logs, IP addresses, API data, anything) that might indicate who was behind the fraudulent comments, who may have bankrolled them, and what the Pai FCC knew about it. Thanks to that effort, early last year, Gizmodo's Dell Cameron worked with Prechtel to link some of the fake comments to Trump associates and some DC lobbying shops like CQ Roll Call. Then late last year, Buzzfeed's Kevin Collier and Jeremy Singer-Vine showed how, unsurprisingly, the broadband industry funded at least some of the fraudulent efforts.
Meanwhile, two reporters for the New York Times, Nicholas Confessore and Gabriel Dance, sued the FCC under the Freedom of Information Act after the agency refused to reveal logs that could show the IP addresses used to submit the mass comments. Last week, a Manhattan federal judge ordered the FCC to hand over copies of the logs to both Confessore and Dance:
NSO Group is not having a great year. At least not on the PR front. The books may be balancing, but its indiscriminate distribution of malware/spyware to questionable governments has been raising eyebrows and blood pressure for years. Now, it's being sued by Facebook for using WhatsApp as its preferred delivery system for malware payloads.
These payloads target criminals and national security threats. But -- since NSO doesn't care who it sells to or what they do with its powerful software -- the payloads also target journalists, dissidents, activists, and attorneys. This malware can take over devices, feeding communications and phone contents to government agencies that want to keep an eye on their enemies -- even when their "enemies" are just critics and people who disagree with their policies.
But the malware can be used for other reasons, too. Any powerful surveillance tool ultimately ends up being misused. Just ask the NSA. And the FBI. And now, ask NSO, as Joseph Cox has for Motherboard.
Sometimes you turn out to be wrong. When we initially discussed Kawhi Leonard's lawsuit against Nike over the "Klaw" logo, I'd said I was interested to hear Nike's response. That was because, from my glance at Leonard's description of the history of the logo -- from the rough draft he created when he was young to the version Nike used as inspiration for the eventual Nike Kawhi shoe logo -- it sure seemed like Nike was being hypocritical. After all, Nike has a reputation for being extremely protective of its own intellectual property rights while being rather cavalier with those of others. As a reminder, Leonard created a logo that makes something of a "K" and "L" outlined via the tracing of his own hand. It sure seemed that if all of that wasn't unique enough to keep Nike from trademarking a version of the logo out from under him, what could be?
Well, a U.S. District Judge in Oregon appears to disagree. And, given some of the side-by-side comparisons that Nike brought in its response... perhaps he has a point.
If you do anything internet-related, hopefully you already know Tim Bray. Among tons of other things, he helped develop XML and a variety of other standards/technologies the internet relies on. He's also been a vocal and thoughtful commenter on a wide variety of issues, especially in the tech policy space. For the past five years he's been working at Amazon as a VP and Distinguished Engineer -- but, as he's announced, he has now quit in protest over the company's retaliation against workers who were speaking up about the company's handling of their working conditions during the pandemic. Bray gives some of the background of workers organizing and speaking up about their concerns, and then discusses the company's reaction (firing the vocal ones and offering lame excuses).
It seems like Ring really wants to add facial recognition tech to its cameras. It employs a "Head of Facial Recognition Tech." It pitches this still-nonexistent feature to law enforcement. And it says it will "continue to innovate" to meet customer feature demands in response to Congressional queries about its facial recognition plans.
But now is not the best time to be trotting out new facial recognition products. Cities and one entire state have enacted bans or moratoriums on facial recognition tech use by government agencies. Even the leader in law enforcement body cams (Axon, formerly Taser) has pulled back from adding this tech to its products. So, Ring is playing it safe, even though it's inevitably going to add this feature as soon as it can justify it.
And it's looking for ways to justify it. Just like it promised Congress, it will continue to "innovate" by adding features customers say they want -- even if it's tech many feel is untrustworthy, if not possibly dangerous. A document obtained by Ars Technica shows the company is feeling out its newest customers on several potential features, including facial recognition.
Earlier this year, we mentioned, in passing, personal injury lawyer Annie McAdams' weird crusade against internet companies and Section 230. The lawyer -- who bragged to the NY Times about how she found out her favorite restaurant's secret margarita mix by suing them and using the discovery process to get the recipe -- has been suing a bunch of internet companies, trying to argue that Section 230 can be ignored if you claim the sites were "negligent" in how they were designed. In a case filed in Texas against Facebook (and others), arguing that three teenagers were recruited by sex traffickers via Facebook and that Facebook is to blame for that, the lower court judge ruled last year that he wouldn't dismiss on Section 230 grounds. I wish I could explain his reasoning to you, but the ruling is basically "well, one side says 230 bars this suit, and the other says it doesn't, and I've concluded it doesn't bar the lawsuit." That's literally about the entire analysis:
For over a decade now, we've been saying that the Supreme Court should absolutely stream its oral arguments live via the internet -- and for all that time the Supreme Court has rejected the idea. All of the Justices have always seemed to be aligned in this view, though with very bad justifications. The two most frequently cited reasons are that (1) the public wouldn't understand what was going on, and (2) it might make the oral arguments more "performative" as the Justices (and perhaps some lawyers) would act differently for the cameras. Neither of these arguments makes much sense.
If people just wouldn't understand -- well, it seems like that would be a very educational opportunity. Having video of the arguments would allow more people to learn about our justice system and how it works, and allow experts to step in and teach people. As for the "performative" concern, that also seems silly. Are the Justices really arguing that, after working their way up through the ranks as judges for decades, they'll suddenly toss away all of their solemn and careful approach to justice because the cameras are on? If so, they don't belong on the Supreme Court. And, of course, many other courts, including the Appeals Courts these Justices came from, will stream oral arguments live on the internet, and there's been little to no evidence that it suddenly caused the judges to start tap dancing for the cameras.
Either way, with the pandemic forcing the court to shift to operating remotely, today for the very first time the Court heard arguments telephonically and agreed to stream them live online for anyone to listen to (initially there had been talk that it would only stream to "journalists," which raised a number of 1st Amendment issues on its own -- eventually the decision was made to just make it accessible to everyone). Amusingly, when asked, many Supreme Court lawyers apparently said they'd still stand up while making their arguments, as if at the lectern in the Supreme Court (as someone who works mostly at a standing desk and prefers to do calls standing, I approve):
Utilize the Complete Developer And IT Pro Library to master today’s most in-demand IT and Software development skills. Now with this limited-time offer, you’ll gain unlimited access to 750+ courses! Whether you are looking to earn a promotion, make a career change, or pick up a side gig to make some extra cash, LearnNowOnline delivers engaging online courses featuring the skills that matter most today. From AWS and Azure to Python, C#, ASP.NET Core and Java to Linux, SQL Server and Cyber Security, LNO stays ahead of the hottest trends to offer the most relevant courses and up-to-date information. It's on sale for $80.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
In 2018, Cambodia's government passed a "fake news" law. It was enacted shortly before a general election, allowing the government to stifle criticism of the Prime Minister. It also required all local websites to register with the government and put government employees to work scouring social media for violations.
The government's new power to decide what is or isn't "real" news allowed it to consolidate its power over the local press. One newspaper was sold to a Malaysian firm that also performed PR work for the Prime Minister, giving the PM the ability to produce news unlikely to ever be classified as "fake."
The appearance of the coronavirus in Cambodia has resulted in a spike in "fake news" arrests. A crisis like this shouldn't be allowed to go to waste, and the Cambodian government is making the most of its new powers to silence critics, stifle dissent, and punish anyone who doesn't have nice things to say about the party in power.
For years, consumers have been bitching about the high cost of sports programming as it pertains to their monthly cable bills. That's especially true for those who don't watch sports but are forced to pay sky-high prices for sports programming as part of a bloated cable bundle anyway. One survey a few years ago found that 56% of consumers would ditch ESPN in a heartbeat if it meant saving the $8 per month subscribers pay for the channel. The "regional sports fees" tacked on to subscriber bills have also long been a point of contention because they're often used to help falsely advertise a lower rate.
That's the "norm" in more normal times. During a pandemic, sports have largely been cancelled, but consumers are still shelling out an arm and a leg for sports programming. That's resulted in a spike in complaints to NY Attorney General Letitia James, who this week announced she would be asking the nation's six biggest traditional cable providers to reduce or eliminate fees related to sports programming while there's, you know, no sports:
This week, our first place winner on the insightful side comes from James Burkhardt in response to someone questioning our use of "OK, Landlord" in reference to copyright holders:
Five Years Ago
This week in 2015, we learned more about one of the NSA's sweeping but useless surveillance programs, and about the stunning lack of oversight when the CIA wants to drone-strike people. But we weren't learning more about the TPP, since it was secret, even though President Obama was demanding critics explain what was wrong with the agreement they weren't allowed to see (just as a UN expert was saying that secret trade negotiations are a threat to human rights). Tom Friedman, meanwhile, was maybe going just a little overboard in advocating for the deal.
Ten Years Ago
This week in 2010, the UK Labour Party was yet again caught apparently infringing on copyright with a campaign poster while also being the champions of the Digital Economy Bill and its draconian copyright rules. They claimed "innocent error" — a defense notably absent from their own law. In the US, a worrying bill was pushing to extend DMCA-style takedowns to "personal information", while Twitter was taking down a lot of tweets over bogus DMCA claims, and an appeals court upheld a hugely problematic ruling about who counts as a journalist.
Fifteen Years Ago
This week in 2005, Wal-Mart was making a hilariously late second entry into the online music store market, while Disney was backing down from a video-on-demand offering that I suppose counts as a distant ancestor to Disney Plus. Nathan Myhrvold was mixing up innovation and invention with Intellectual Ventures, while the head of the Patent Office was floating some very bad ideas about reform, even as companies like Intel were getting vocal about the problem of patent trolls.
Last fall, we wrote about what appeared to be many sketchy details surrounding the non-profit Internet Society (ISOC) agreeing to sell off the non-profit Public Interest Registry (PIR), which runs the .org top level domain registry, to the very much for-profit private equity firm Ethos Capital, which had recently been formed and involved a bunch of ex-ICANN execs and other internet registry folks. Even if the deal made perfect sense, there were a lot of questionable issues raised concerning who was involved, whether or not there was self-dealing, and how transparent the whole thing was. On the flipside, a number of very smart people I know and respect -- including some who worked for ISOC -- insisted that the deal not only made sense, but was good for the future of the .org domain and the wider internet. In January, we had a long podcast with Mike Godwin, who is on the board of ISOC and voted for the deal, debating whether or not the deal made sense.
In the intervening months, many people and organizations had petitioned ICANN to block the deal, and ICANN had repeatedly delayed its vote -- with the last delay coming a few weeks ago, right after California's Attorney General, Xavier Becerra, sent a pretty scathing letter about the deal.
On Thursday, ICANN's board voted to block the deal, saying that it just created too much uncertainty for non-profit organizations who rely on the .org top level domain.
Things are getting even more interesting in Facebook's lawsuit against Israeli malware merchant NSO Group. Facebook was getting pretty tired of NSO using WhatsApp as an attack vector for malware delivery, which resulted in the company having to do a lot more upkeep to ensure users were protected when using the app.
Unfortunately, Facebook wants a court to find that violating an app's terms of service also violates the CFAA -- something most of us really don't want, even if it would keep NSO and its customers from exploiting messaging services to target criminals, terrorists… and, for some reason, lots of journalists, dissidents, and activists.
NSO finally responded to Facebook's lawsuit by saying it could not be sued over the actions of its customers. Its customer base is mainly government agencies -- including some especially sketchy governments. NSO claims all it does is sell the stuff. What the end users do with it is between the end users and their surveillance targets. Since its customers are governments, sovereign immunity applies… which would dead-end this lawsuit (wrong defendant) and any future lawsuits against governments by Facebook (thanks to that sovereign immunity).
NSO's claim that it can't be touched by this lawsuit is falling apart. Citizen Lab researcher John Scott-Railton pointed out on Twitter that Facebook's latest filings point to NSO operating its malware servers from inside the United States -- apparently doing far more than simply selling malware to government customers and letting them handle the deployment details.
Facebook's answer to NSO's attempt to dismiss the lawsuit concedes NSO's point: it is not its customers. But that's precisely why it can be sued. From Facebook's response [PDF]:
I remain perplexed by people who insist that internet platforms "need to do more" to fight disinformation and at the same time insist that we need to "get rid of Section 230." This almost always comes from people who don't understand content moderation or Section 230 -- or who think that, because of Section 230's liability protections, sites have no incentive to moderate content on their platforms. Of course platforms have tons of incentive to moderate: much of it social pressure, but also the fact that if they're just filled with garbage they'll lose users (and advertisers).
But a key point in all of these debates about content moderation with regard to misinformation around COVID-19 is that for it to work in any way, there needs to be flexibility -- otherwise it's going to be a total mess. And what gives internet platforms that flexibility? Why, it's that very same Section 230. Because Section 230 makes it explicit that sites don't face liability for their moderation choices, they can ramp up efforts -- as they have -- to fight off misinformation without fear of facing liability for making the "wrong" choices.
Tired: monitoring parolees with ankle bracelets. Wired: monitoring parolees with smartphone apps.
Maybe it will be a better idea someday, but that day isn't here yet. Ankle bracelets are prone to unexpected failure, just like any other electronic device. False negatives -- alerts saying a parolee isn't at home -- are no better than false positives in the long run, although the former is the only one that can take away someone's freedom.
The costs of ankle bracelets are borne by the parolee. Smartphone apps may be slightly cheaper… but only if you don't factor in the cost of a smartphone or the app itself. Smartphones aren't easy for parolees to obtain. Neither are the jobs needed to subsidize both the phone and the app's monthly charge.
Those lucky enough to secure a smartphone are discovering the new solution is just as prone to error as its predecessor.
As you may have heard recently (and may still be getting bombarded by ads for), a new video streaming service called Quibi launched to much fanfare. The fanfare was not around the technology or the content, mind you, but around the fact that there were some famous people involved (namely: former Disney/Dreamworks exec Jeffrey Katzenberg and, to a lesser extent, former eBay/HP CEO Meg Whitman -- who had been a Disney and Dreamworks exec earlier) and the ridiculous fact that the company raised nearly $2 billion before it even launched. There was some buzz about it leading up to launch... but mostly from the media, which always falls for the story of the company that raises a ton of money and has some famous person in charge.
So far, however, Quibi appears to be yet another example of a point I made about Hollywood and the internet over a decade ago: entertainment execs have a long history of overvaluing the content and undervaluing users and what they want. Sometimes I've described this as overvaluing content and undervaluing technology (which plays into the whole Hollywood/Silicon Valley divide), but it's actually something different than that, which is ably shown by the complete dumpster fire that Quibi has been so far. They spent silly amounts of money on content, and then seemed to focus the "technology" money on a completely pointless and weird setup where the orientation of your device changes what you see (that is, if you hold your phone in portrait orientation, what you see is not just a scrunched up landscape mode, but something different). That seems like the kind of thing that a bunch of out-of-touch execs would sit around and brainstorm without any understanding whatsoever of how real people use devices. At best it seems like an attempt to try again with Verizon's brain-dead Go90 service that no one ever used.
And, so far, Quibi's living down to expectations. Within a week of its launch, things were not looking good:
The QuickBooks 2020 Essentials Bundle has two courses to take you from beginner to bookkeeper with the popular accounting software. You'll start at the very beginning and cover everything that’s required to get set up in QuickBooks. You'll move on to processing payroll, creating invoices, and running helpful reports. One course covers QuickBooks Pro and the other covers QuickBooks Online. It's on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
I have opined that FBI Director Chris Wray needs to shut the fuck up about encryption. Wray has presented a completely skewed perspective on the issue, following in the footsteps of Jim Comey. Wray claims encryption is leading to a criminal apocalypse, even as crime rates remain at historic lows. He also claims encryption is making it impossible to follow through with investigations, but has presented no evidence to back this claim.
The best argument the FBI could present was the always-growing number of encrypted devices in its possession. In the space of a couple of years, the number jumped from less than 900 to nearly 8,000. This seemed to indicate encryption was a growing problem, but when the FBI tried to verify this number for Congress, it found out it had overstated the number of locked devices in its possession. In reality, the number of locked devices is likely less than 2,000 -- hardly the apocalypse of impregnability Wray and Comey continuously presented. This was discovered in May 2018. The FBI has yet to hand over an accurate count of these devices.
Now, there's this bit of news, which surfaced during Facebook's lawsuit against malware developer NSO Group. Chris Wray was for encryption before he was against encryption. Documents filed by Facebook detail Wray's defense of WhatsApp and its encryption in an earlier legal battle, while Wray was working for the King & Spalding law firm. [paywall-free link here]
After pressure from consumer groups, major ISPs last month announced that they would be suspending all usage caps and overage fees for 60 days in response to the COVID-19 pandemic. After all, US consumers already pay some of the highest prices for broadband in the developed world, and imposing draconian usage restrictions and overage fees created added hardships for consumers already facing unemployment and additional financial headaches due to the pandemic. Being the sweetheart it is, Comcast also stated it would temporarily stop kicking poor folks offline during the pandemic.
This week Comcast announced it would be graciously extending the suspension of said caps for another 60 days. This is, Comcast insists, part of the company's dedication to ensuring people remain connected during a crisis:
A friend sent over a press release announcement from a company called CREOpoint that claims it has patented "Containing the Spread of Disinformation" and that it was now using it to "help contain the spread of COVID-19 disinformation." Would that it were so, but that's not how any of this works. Tellingly, the press release does not provide the patent number or any of the details about the patent -- which should probably be your first sign that it's utterly bogus. However, with a little sleuthing I was able to turn up the patent application... and it confirms that this is a ridiculous patent that never should have been approved. The official title is "Containing Disinformation Spread Using Customizable Intelligence Channels."
The first claim is the main one and describes what the patent is about:
The COVID-19 crisis has changed most of our lives. Working from home is now the norm for many, rather than a perk. Sports are mostly gone, replaced by esports simulacrums. Schools are shut down, as are most non-essential businesses.
And the folks from Queer Eye are now advising on and critiquing your digital homes rather than your IRL abodes.
It's becoming an unfortunate regularity that we keep writing posts highlighting how China is trying to suppress criticism around the globe regarding its terrible handling of the COVID-19 pandemic. As we've said over and over again, what the world needs right now is radical transparency regarding the disease and various responses. Instead, we're getting standard operating procedure from the Chinese government, which is all about suppressing bad information and denying everything (with a healthy dose of spreading more disinfo everywhere -- make sure you check the comments here a few hours after we post this, because it seems to show up in a timely manner).
The latest example comes from the EU, where the Chinese government pressured officials in Brussels not to release a report about the Chinese government's disinformation efforts regarding COVID-19. While the EU did eventually release it, it put the report out on a Friday evening (the classic news dump where you hide stuff), and some of the criticism of China was supposedly "rearranged or removed."
So we've noted a few times how giant telecom providers, as companies that have spent the better part of the last century as government-pampered monopolies, are adorable when they try (then inevitably fail) to innovate or seriously compete in more normal markets. Verizon's attempt to pivot from curmudgeonly old phone company to sexy new ad media darling, for example, has been a cavalcade of clumsy errors, missteps, and wasted money.
AT&T has seen similar issues. Under CEO Randall Stephenson, AT&T spent more than $150 billion on mergers with DirecTV and Time Warner, hoping this would secure its ability to dominate the pay TV space. But the exact opposite happened. Saddled with so much debt from the deals, AT&T passed annoying price hikes on to its consumers. It also embraced a branding strategy so damn confusing -- with so many different product names -- it even confused its own employees.
As a result, AT&T lost 3,190,000 pay TV subscribers last year alone. Not exactly the kind of "domination" the company envisioned.
Stephenson's failures on this front were so pronounced, even the company's investors got angry about how much money AT&T spent on questionable mergers. And late last week, Stephenson himself announced (not coincidentally) that he'd now be retiring, though AT&T's farewell letter understandably addresses none of these recent headaches:
Clearview AI is inserting itself into a discussion no one invited it to participate in. The discussion around contact tracing to manage and (hopefully) impede the spread of the coronavirus involves multiple governments around the world. It also includes Google and Apple, who are partnering to create a platform for contact tracing apps.
At least in the case of Google and Apple's offerings, there appears to have been a serious discussion about protecting users' privacy as much as possible while still offering a valuable service to government health agencies and people concerned about contracting the virus.
Now, Clearview has blundered into the discussion -- a company that has shown utter disdain for the millions of people whose photos it has force-fed into its multi-billion-image database via social media scraping. There is no opting in or out of this collection -- one that Clearview is selling to law enforcement agencies in the US and to government agencies around the world. If it's on the web, it's likely already in Clearview's database.
In a brief discussion with NBC News, CEO Hoan Ton-That pitched his idea for a Clearview-based contact tracing program. What Ton-That wants to do is tie his app and its database to thousands of CCTV cameras located in stores, parking lots, gyms, and other locations where, as he puts it, "there's no expectation of privacy." Ton-That doesn't explain how his system will be notified of a person's COVID status, but he's pretty sure his software will be able to recognize faces accurately. Clearview's facial recognition AI remains unproven, but it's supposedly capable of making guesses about faces using images as small as 110x110 pixels.
Ton-That also seems unconcerned about the privacy implications of adding people's health status to his enormous database of scraped personal information. He says any limitations on gathering/storage of this info would be up to whoever decides to take him up on his unsolicited offer.
Obviously, no one should do this. The AI is unproven and Clearview is far from trustworthy. Activist group Fight For The Future has issued its official statement on Clearview's contact tracing pitch. It's short but punchy.
As the pandemic got worse and worse earlier this year, many internet platforms sprang into action -- spurred by many calls to do exactly this -- to ramp up their content moderation to fight off "disinformation" attacks. And there is no doubt that there are plenty of sophisticated (and even nation-state) actors engaging in nefarious disinformation campaigns concerning the whole pandemic. So there's good reason to be concerned about the spread of disinformation -- especially when disinformation can literally lead to death.
However, as I've been saying for quite some time now, content moderation at scale is impossible to do well. And that's true in the best of times. It gets much more complicated in the worst of times. As we noted a few weeks ago, various internet platforms said they'd be taking down information that contradicted what government officials were saying, but that ran into some problems when the various officials were wrong.
How can we expect internet platforms to know what is "allowed" and what is "truthful" vs. what is "disinformation" when even the experts are working in the dark, trying to figure things out? And the natural process of figuring things out often involves initially suggesting things that turn out to be incorrect.
Professor Kate Starbird has a great piece over at Brookings, detailing just how important social media is in helping people go through this "collective sensemaking process" and highlighting that if we're too aggressive in trying to take down "disinformation," much of the important process of figuring out what's really going on can get lost along the way. To be clear: this is not an excuse for doing nothing. Pretty much everyone agrees that some level of moderation is necessary to deal with outright dangerous disinformation. But as we've spent years detailing, these issues are very, very, very rarely black and white -- and we need that vast gray area to help everyone sort out what's going on.
The Ultimate Beginner’s Guide to Microsoft Office has 12 courses designed to help you gain fluency on the widely used office software suite. You'll move from beginner to advanced courses covering Excel, Word, Outlook, Access, and PowerPoint. You'll learn via worksheets, quizzes, video lectures, and more hands-on activities. It's on sale for $35.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We've been writing a number of pieces lately about how incredibly dangerous China's internet censorship has been during COVID-19. From silencing medical professionals to hiding research results to trying to ignore Taiwan's success in fighting COVID-19, the Chinese government has shown a pretty clear pattern: its internet censorship is literally killing people. This is not to say that the US government's response has been much better -- it's obviously been a disaster -- but at least we have more free speech online and in the press, which is enabling all sorts of useful information to spread.
But you might not know that if you read this odd piece in the Atlantic by Jack Goldsmith and Andrew Keane Woods arguing that China has the right approach to handling free speech online during a pandemic, and the US does not. While the overall piece is, perhaps, a bit more thoughtful than the headline and tagline, it has moments that simply defy any sense of what's happening in the world.
There's a laundry list of shoddy arguments and business structures that have been exposed as nonsense and folly during the pandemic. One of them is the traditional Hollywood film release window, which typically involves a 90-day gap between the time a movie appears in theaters and its streaming or DVD release (in France this window is even more ridiculous at three years). The goal is usually to "protect the traditional film industry," though it's never been entirely clear why you'd protect traditional theaters at the cost of common sense, consumer demand, and a more efficient model. Just because?
While the industry has flirted with the idea of "day and date" releases for decades (releasing movies on home video at the same time as in brick and mortar theaters), there's long been a lot of hyperventilation on the part of movie theaters and traditionalists that this sort of shift wasn't technically possible or would somehow destroy the traditional "movie experience," driving theaters out of business.
Then came the pandemic, when visiting a traditional theater suddenly became potentially fatal. Numerous studios quickly adapted and began experimenting with much shorter release windows or, in some instances, no window at all. Comcast NBC Universal, for example, offered early access to some films for $20 at home while they were still in theaters. Other films, like "Trolls World Tour," were released simultaneously on video on demand and in theaters that remained open.
Guess what: the film did very well, raking in $100 million in premium VOD rentals in its first three weeks in North America. That wasn't just profitable; it wasn't far behind the $116 million grossed by the original Trolls film during its first three weeks in theaters in 2016. Excited by the success, Comcast NBC Universal CEO Jeff Shell gave a fairly innocuous statement to the Wall Street Journal:
As part of its copyright reform, South Africa plans to bring in a fair use right. Despite the fact that its proposal is closely modeled on fair use in American law, the copyright industry has persuaded the US government to threaten to kill an important free trade deal with South Africa if the latter dares to follow America's example. If you thought only US copyright companies were capable of this stunningly selfish behavior, think again. It seems that the European copyright industry has been having words with the EU, which has now sent a politely threatening letter to the South African government about its copyright reform (pdf). After the usual fake compliments, it gets down to business in the following passage:
It never ceases to amaze me how often people who really should know better seem to think they can simply and effectively remove their own histories from the internet. It seems to be a lesson never learned, be it by major corporations or even the Pope: the internet never forgets. Thanks to tools like The Wayback Machine and others, attempts to sweep history under the rug are mostly fruitless endeavors. And, yet, people still try.
Such as Michael Caputo, the new spokesman for the Department of Health and Human Services. That department is just a tad important at the moment, given the COVID-19 pandemic we're all enduring. Well, Caputo got the job and decided he'd better get to Twitter and delete all that racist and conspiratorial shit he said, so that we all wouldn't find out about it.
Over the last month or so, we've written plenty on the challenges social media companies face in managing content moderation in the midst of a pandemic, highlighting the problems when misinformation is coming from official sources, when it's impossible to distinguish legit info from misinformation, when the intersection of politics and misinformation gets tricky, and, of course, when platforms have to rely more on AI while all their workers are working from home (raising significant privacy concerns if they're still moderating content).
In the long run, what happened over the last couple of months is going to represent a truly fascinating place to look for case studies about content moderation on the internet -- but only if the data is available. To that end, a bunch of public interest groups, led by CDT, have put out an open letter asking social media platforms to preserve as much as possible about the content moderation decisions they're making and to be as transparent as possible for future research:
On April 11, Princeton mathematician and inventor of the “Game of Life” John Horton Conway passed away from the coronavirus. Known as a “magical genius” whose curiosity extended beyond just mathematics, his passing was a devastating blow to many who loved the man.
Yet as news of his passing broke, an interesting scenario developed. Instead of a formal statement from the institution or his family, the news first appeared on Twitter. With no verifiable proof of the claim, many were left struggling to determine whether to believe the story.
This scenario -- a questionable story that can be proven true or false in time -- presents a challenge for combating the spread of false information online. As we have seen many times before on social media, stories are often shared before the information is verified. Unfortunately, this will increasingly occur, especially in an election year and during a pandemic. Therefore, examining how social media responded during this particular event can help us better determine the rules and patterns that drive the spread of information online.
Around 2:00 pm EST on Saturday, April 11, news started to spread on social media that John Horton Conway had died. The main source was a tweet from a fellow mathematician, who expressed his condolences and shared a story of Conway writing a blog post for April Fool’s Day.
As the news began to spread, most individuals who saw the tweets accepted the information as true and began expressing condolences themselves.
However, some started to question the news, mainly because the original tweet had no source verifying the claim. As time went on, people began to speculate that this might indeed be a hoax, and many began deleting and retracting earlier tweets; a void existed where a source should be.
Users filled that void with Wikipedia, a platform where any individual can make changes to the information on any given page. However, this led to a series of citation conflicts, where users would post the news and then others would delete it, citing a lack of a source.
The confusion eventually died down as more individuals who knew John Horton Conway explained what had happened, and how they knew. Indeed, the account that first broke the news followed up later with an explanation of what happened. But in that brief window where questions arose, we received a glimpse into how social media reacts to questionable news. And as if discovering the rules to a “Game of Misinformation,” this teaches us a few important lessons about user behavior and how misinformation spreads over time.
First, most users quickly trusted the initial reports as the information filtered in. This is to be expected: research has shown that individuals tend to trust those in their social networks. And indeed, the mathematician whose tweet was the primary source, while not the closest person to the deceased, was in the same community. In other words, what he said had weight. Further, because the tweet linked to an article in Scientific American, users may have made a connection between the news and the article, even though the tweet specified that was not the case.
Because of this level of trust within networks, individuals must carefully consider the content and the context by which they share information. Rushing to post breaking news can cause significant harm when that information is incorrect. At the same time, presentation can also have a drastic impact on how the reader digests the information.
In this case, linking to the Scientific American story provided interesting context about the man behind the name, but it could also give the reader the impression that the article supported the claim that he had died. That is not to say that any tweets in this situation were hasty or ill-conceived, but individuals must remain mindful of how the information shared online is presented and may be perceived by the audience.
Second, people do read comments and replies. The original tweet or social media post may receive the most attention, but many users will scroll through the comments, especially those who post the original material. This leads to two key conclusions. First, users should critically examine information and wait for additional verification before accepting assertions as truth. Second, when information seems incorrect, or at least unverified, users can and should engage with the content to point out the discrepancy. This can mean the difference between a false story spreading to 1,000 people or 1,000,000 people before the information is verified or disproven. Again, while this will not stop the spread of false information outright, it can lead to retractions and a general awareness among other users, which will “flatten the misinformation curve”, so to speak.
Finally, when a void of sources exists, individuals may try to use other mediums or hastily reported news to bolster their point of view. In this case, so-called “edit wars” developed on John Conway’s Wikipedia page, with some writing that he had died while others removed the information. While it is impossible to say whether the same individuals who edited the Wikipedia page also used it as evidence to support the original tweet, it does highlight how easy it could be to use a similar method in the future. Users often have to rely on the word of a small number of individuals in the hours following the release of a questionable story. When this is the case, some may try to leverage the implicit trust we have in other institutions to bolster their claims and arguments. In this case, it was Wikipedia, but it could be others. Users must carefully consider the possible biases or exploits that exist with specific sources.
Like Conway’s Game of Life, there are patterns to how information spreads online. Understanding these patterns and the rules by which false information changes and grows will be critical as we prepare for the next challenge. Sadly, the story that spread earlier this month turned out to be true, but the lessons we can learn from it can be applied to similar stories moving forward.
Jeffrey Westling is a technology and innovation policy fellow at the R Street Institute, a free-market think tank based in Washington, D.C.
Andy Baio always digs up the absolute best stories. His latest involves layers upon layers of fascinating issues and legal questions. The key part, though, is that Jay-Z and his company Roc Nation were able to convince YouTube to remove two "audio deepfakes" by claiming both copyright infringement and "unlawfully using AI to impersonate our client's voice." Both of these are highly questionable claims. But let's take a few steps back first.
We've discussed how there seems to be a bit of a moral panic around deepfakes, with the idea being that more and more advanced technology can be used to create faked video and audio that looks or sounds real -- and that might be used to dupe people. So far, there's little evidence of the technology ever actually being used to really deceive people, and there's plenty of reason to believe that society can adjust and adapt to any eventual attempts at using deepfakes to deceive.
Still, in part because of the media and politicians freaking out about the whole idea, a number of social media platforms have put in place fairly aggressive content moderation policies regarding deepfakes, so as to (hopefully) avoid the inevitable big media "expose" about how they're enabling nefarious activities by not pulling such faked videos down. But, as we've noted in some of those previous articles, the vast majority of deepfake content these days is purely used for entertainment/amusement purposes -- not for nefarious reasons.
And that's absolutely the case with the anonymous user Vocal Synthesis, who has been playing around with a variety of fun audio deepfakes -- just using AI to synthesize the voices of various famous people saying things they wouldn't normally say (or singing things they wouldn't normally sing). The creator releases them as videos, but each is just a static image, and even when they're "singing" songs, it's without any of the music -- just the voice. So, here's Bob Dylan singing Britney Spears' "... Baby One More Time":
And here's Bill Clinton's rendition of Sir Mix-A-Lot's "Baby Got Back":
Some other people have taken some of those audio deepfakes and put them to music, which is also fun. Here are six former Presidents singing N.W.A.'s "Fuck the Police":
A few of the audio deepfakes use Jay-Z's distinctive voice -- and apparently Jay-Z or his lawyers got upset about this and issued takedown notices to YouTube on two of them. As I type this, those two videos (one of Jay-Z reciting the famed "To Be, Or Not To Be" soliloquy from Hamlet and another of him doing Billy Joel's "We Didn't Start the Fire") are back up, with YouTube saying that the original takedown notices were "incomplete" and therefore the videos had been reinstated. But they were taken down originally, and it's possible that more "complete" takedowns will be sent, so for the time being (as Andy Baio did) I'll also point to the same content hosted by LBRY, a decentralized file storage system:
And here's where things get odd. As Andy notes in his post (which is so detailed and worth reading), the takedown from Roc Nation made two separate claims: first, that the videos infringe on Jay-Z's copyright, and second, that each video "unlawfully uses an AI to impersonate our client’s voice." But what law is being broken here? And if it were illegal to impersonate someone, a bunch of impressionists would be in jail. Andy goes through a detailed fair use analysis on the copyright question:
The Increase Your Google App Productivity with Google Script Bundle has 7 courses to help you learn about Google Apps Script. Apps Script lets you increase the power of your favorite Google apps — like Calendar, Docs, Drive, Gmail, Sheets, and Slides. These 7 courses cover every step of getting started with Google Apps Script, including an overview of the editor, what it does, and how to use it. You'll learn more advanced uses with hands-on projects and more. It's on sale for $35.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
I get that people are getting a bit of cabin fever, and perhaps that's impacting their outlook on the world, but a recent piece by David Rotman in the MIT Tech Review is truly bizarre. The title gets you straight to the premise: Covid-19 has blown apart the myth of Silicon Valley innovation. Of course, even the paragraph that explains the thesis seems almost like a modern updating of the famous "what have the Romans ever done for us?" scene from Monty Python's Life of Brian:
42 million Americans lack access to any kind of broadband whatsoever -- more than double official FCC estimates. Millions more can't afford broadband because the monopolized US telecom sector suffers from a dire lack of competition in most markets. US telcos, bored with the slow rate of return, have effectively stopped upgrading their DSL networks, leaving cable giants like Comcast and Charter Spectrum with a bigger monopoly than ever across wide swaths of America. And no, wireless 5G won't magically fix the problem, due to patchy availability and high prices.
This is, to hear the FCC tell it, all going swimmingly.
By law (Section 706 of the Telecommunications Act of 1996), the FCC is required once a year to issue a report indicating whether quality broadband is being deployed on a "reasonable and timely basis." If not, the agency is supposed to, you know, actually do something about it. But every year like clockwork, the FCC issues the report ignoring all of the biggest problems in the telecom sector, to the obvious benefit of an industry eager to keep things precisely as they are: largely uncompetitive. Never will you see policy that improves competition, because the lack of competition isn't even acknowledged.
This year was no exception. The Trump FCC's latest report once again insists that broadband "is being deployed to all Americans in a reasonable and timely fashion," so no shift from the status quo is necessary. And in a very Trumpian statement, FCC boss Ajit Pai congratulates himself for incredible leadership, while repeating the falsehood that his decision to take an axe to already fairly flimsy FCC oversight of the broken sector has somehow resulted in a massive wave of new investment:
Over the years, Techdirt has written many stories about the various forms that censorship has taken in China. The coronavirus pandemic has added an extra dimension to the situation. China is evidently trying to erase certain aspects of the disease's history. In particular, it seeks to deny its likely role in acting as the breeding ground for COVID-19, and to downplay how it infected the rest of the world after the initial outbreak in Wuhan. As the New York Times put it: "China is trying to rewrite its role, leveraging its increasingly sophisticated global propaganda machine to cast itself as the munificent, responsible leader that triumphed where others have stumbled." Quartz reports on a new front in this campaign to re-cast China's actions. Volunteers in China working on a project called Terminus2049, which aims to preserve key digital records of the coronavirus outbreak, are now targets of a crackdown:
Four years ago, the Baltimore Police Department unilaterally decided to put several eyes in the sky. The 192-million-pixel camera system, capable of covering 32 square miles, was sent skyward with zero public comment or input from the city. And why not? The city was barely involved. The BPD received the camera system courtesy of a private donor.
The head of the company, John McNutt, was contacted by some Texas-based philanthropists who offered to pay for the system if McNutt's company, Persistent Surveillance Systems, would put it up in the air. What the system lacks in depth, it makes up for in breadth. Humans and vehicles are reduced to mere pixels, but the system's ability to rewind recordings makes it possible for the PD to track the movement of vehicles and people near crime scenes.
The aerial surveillance system is more repurposed war tech. It was originally deployed in Iraq and Afghanistan under the name "Gorgon Stare." That's what Persistent Surveillance Systems is flying over Baltimore, this time with the city's official blessing. After a period of public comment, the surveillance system is no longer just a test project.
The ACLU sued to block the launch of the program, citing the Supreme Court's Carpenter decision, which adds a warrant requirement to the collection of cell site location data. It's not an exact fit, but the Carpenter decision has been read by some courts to cover more than just location data.
Unfortunately, the ACLU's attempt to secure an injunction has failed. The decision [PDF] doesn't find the Carpenter decision applicable to an all-seeing eye that can only capture the movement of pixels, rather than identifiable human beings. That being said, the planes (three of them) will fly for a minimum of 40 hours a week each, resulting in six months of 12-hour-a-day coverage of nearly the entire city. (h/t Munchkin at Law)
There's something more than a little dystopian about the program. But, despite the promise contained in the company's name, this surveillance isn't all that persistent.
The broadcast and TV sector spent the last fifteen years trying to claim that TV cord cutting (cancelling traditional TV and going with streaming or antenna broadcasts) wasn't a real thing, or that it was only something done by losers. But it's the cord cutters who'll be getting the last laugh.

A new study (pdf) by the Convergence Research Group indicates that cord cutting, whose very existence the cable TV sector once denied, is about to get even hotter. According to the report, 36% of US homes didn't pay for "traditional" cable TV at the end of a particularly bloody year for the pay TV sector. The group estimates that total will grow to 42% of US households in 2020, and finally tip into a majority of consumers (54%) by 2022. That in turn is contributing to a notable drop in revenue for the major cable TV providers, down from $100 billion in 2019 to a predicted $94.8 billion this year.

If you're worried about major giants like Comcast, AT&T and Verizon struggling, you shouldn't be. While their video profits will erode, their monopoly over broadband means they'll simply recoup that lost revenue by jacking up the price of your broadband connection (including usage caps and overage fees) in the massive number of uncompetitive US broadband markets:
Fucking predictive policing/how the fuck does it work. Mostly, it doesn't. For the most part, predictive policing relies on garbage data generated by garbage cops, turning years of biased policing into "actionable intel" by laundering it through a bunch of proprietary algorithms.

More than half a decade ago, early-ish adopters were expressing skepticism about the tech's ability to suss out the next crime wave. For millions of dollars less, average cops could have pointed out hot crime spots on a map based on where they'd made arrests, while still coming nowhere close to the reasonable suspicion needed to declare nearly everyone in a high-crime area a criminal suspect.

The Los Angeles Police Department's history with the tech suggests it should have dumped it years ago. The department has been using some form of the tech since 2007, but all it seems able to do is waste limited law enforcement resources violating the rights of Los Angeles residents. The only explanations for the LAPD's continued use of this failed experiment are the sunk cost fallacy and its occasional use as a scapegoat for the department's biased policing.

Predictive policing is finally dead in Los Angeles. Activists didn't kill it. Neither did the LAPD's oversight. Logic did not finally prevail. For lack of a better phrase, it took an act of God {please see paragraph 97(b).2 for coverage limits} to kill a program that has produced little more than community distrust and civil rights lawsuits. Caroline Haskins has more details at BuzzFeed.
Last month, Kara Swisher wrote an opinion piece for the NY Times ripping Sean Hannity and Fox News to shreds for convincing her mother, back in February and into March, that COVID-19 wasn't going to be too bad. It's notable how she started her piece:
After writing this post, we realized that the phrase would make a great t-shirt! So now you can get yourself some OK, Landlord gear from the Techdirt store on Threadless »

For a long time now we've explained why comparing copyrights to property is fraught with problems. So much of the reason we have property rights in the first place is to enable a more efficient allocation of scarce goods. When you have something that is non-scarce -- or as the cool economist kids like to say, "non-rivalrous and non-excludable" -- treating it in the same manner as if it were scarce creates all sorts of weird problems, many of which we've spent two decades detailing on this site. Indeed, for every argument made that copyright is property, you could make a compelling case that it's actually the opposite of property, in that it frequently takes away the rights and ability of individuals to do what they want with products they rightfully own.

Five years ago, I noted that one of the big problems with the concept of "intellectual property" was the failure of people to separate the content from the exclusive rights. That is, it's fair to think of the copyright as a form of property -- as the "right of exclusion" it creates is more property-like -- but it must be seen as separate from the underlying content. The "copyright" is not the content. And so much of the discussion around copyrights conflates the right and the underlying content, and that creates all sorts of problems.

Meanwhile, law professor Brian Frye has spent the last month or so making a really important point regarding the never-ending "is copyright property" debate -- saying that if copyright is property, then copyright holders should be seen and treated as landlords. This whole approach can be summed up in the slightly snarky and trollish phrase "OK, Landlord," used to respond to all sorts of nonsensical takes in support of more egregious copyright policies:
The Start-to-Finish Guide to Launching a Successful Podcast Bundle has 9 courses designed to teach you what you need to know to get your own podcast up and running. Regardless of your budget or skill level, this bundle will show you what it takes to start, record, edit, publish, grow, and monetize your podcast. You'll dive into the benefits of running a podcast, the gear you'll need to get started, and more essential concepts. Courses also cover social media marketing, music production, how to interview your heroes, and more. It's on sale for $45.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.