Even before COVID-19, the brick-and-mortar movie industry was already struggling to adapt in the face of technological evolution. Now with a pandemic demolishing theater attendance, companies like AMC Theaters face an accelerated timeline as they attempt to cling to outdated constructs like movie release windows, sticky floors, and seventeen-dollar popcorn, which were already showing their age in the 4K streaming era. Theater chains haven't exactly been handling this pandemic thing all that well. When Comcast Universal began sending its movies straight to home streaming (you know, given that people don't want to die), AMC Theaters CEO Adam Aron threw an apoplectic fit, insisting his chain would never again show a Comcast/Universal movie. After apparently realizing he didn't have any leverage to make those kinds of threats -- and negotiating to get an unspecified cut of the proceeds -- Aron and AMC agreed to a shorter 17-day release window. Baby steps, I guess. It should go without saying that the scientific consensus is that the pandemic isn't going anywhere, and the problems it's creating are very likely to get worse as it collides with the traditional flu season this fall. Like so many who think they can just bull rush through factual reality and scientific consensus, AMC seems intent on opening a good chunk of its traditional theaters next week, and is hoping to draw crowds by offering 15-cent movie tickets on the first day:
The Beginner's Guide to Personal Finance and Investment Bundle will help you learn how to take control of your finances and grow your wealth. Courses cover money management, the stock market, mutual funds, cryptocurrency, and more. It's on sale for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Last year we had a detailed post about Judge Lucy Koh's district court ruling that outlined exactly how Qualcomm abused its patents in an anticompetitive way to shake down the entire mobile phone industry for decades. This was in a case that was brought by the FTC, and it was a stunning ruling on multiple counts. First, it's rare for a court to recognize how patents and copyrights grant monopoly rights that can be abused in violation of antitrust rules. Second, it exposed a stunning degree of anticompetitive behavior on the part of Qualcomm. Unfortunately, this week, the 9th Circuit overturned that ruling, though it did so in a somewhat confused manner. Indeed, this ruling may significantly limit any kind of antitrust activity (at least in the 9th Circuit). There are a lot of details, but the key element at stake was Qualcomm's "no license, no chips" rules. No company could buy Qualcomm chips, which were necessary for mobile devices, without licensing Qualcomm's entire patent portfolio. This snippet from the lower court ruling gives you the basics:
This may be shocking to hear, but nearly all of the promises AT&T made in the lead-up to its $86 billion merger with Time Warner wound up not being true. The company's promise that the deal wouldn't result in price hikes for consumers? False. The company's promise the deal wouldn't result in higher prices for competitors needing access to essential AT&T content like HBO? False. AT&T's promise they wouldn't hide Time Warner content behind exclusivity paywalls? False. The "$15 TV service" the company repeatedly hyped as a byproduct of the deal? Already discontinued. The idea that the merger would somehow create more jobs at the company? False. AT&T has laid off 41,000 employees just since it received its $42 billion tax cut from the Trump administration in 2017 for doing absolutely nothing (technically, less than nothing, since it fired countless employees and trimmed 2020 CAPEX by around $3 billion). And this week, the company laid off another 600 employees across Time Warner, including employees at HBO and DC Comics:
The CBP is going to continue fishing in people's devices, despite federal courts (including the Ninth Circuit Court of Appeals) telling it that suspicionless device searches are unconstitutional. The agency will just have to come up with something approximating suspicion to do it. Its latest Privacy Impact Assessment of its border device search policy gives it plenty of options for continuing its practice of performing deep dives into devices it encounters.
Even at its best, facial recognition software still performs pretty poorly. When algorithms aren't generating false positives, they're acting on the biases programmed into them, making it far more likely for minorities to be misidentified by the software. The better the image quality, the better the search results. The use of a low-quality image pulled from a store security camera resulted in the arrest of the wrong person in Detroit, Michigan. The use of another image with the same software -- one that didn't show the distinctive arm tattoos of the non-perp hauled in by Detroit police -- resulted in another bogus arrest by the same department. In both cases, the department swore the facial recognition software was only part of the equation. The software used by Michigan law enforcement warns investigators that search results should not be used as the sole probable cause for someone's arrest, but the additional steps taken by investigators (which were minimal) still didn't prevent the arrests from happening. That's the same claim made by Las Vegas law enforcement: facial recognition search results are merely leads, rather than probable cause. As is the case everywhere law enforcement uses this tech, low-quality input images are common. Investigating crimes means relying on security camera footage, which comes from cameras far less powerful than the multi-megapixel cameras found on everyone's phones. The Las Vegas Metro Police Department relied on low-quality images for many of its facial recognition searches, documents obtained by Motherboard show.
Summary: Creating family-friendly environments on the internet presents some interesting challenges that highlight the trade-offs in content moderation. One of the founders of Electric Communities, a pioneer in early online communities, gave a detailed overview of the difficulties in trying to build such a virtual world for Disney that included chat functionality. He described being brought in by Disney alongside someone from a kids’ software company, Knowledge Adventure, who had built an online community in the mid-90s called “KA-Worlds.” Disney wanted to build a virtual community space, HercWorld, to go along with the movie Hercules. After reviewing Disney’s requirements for an online community, they realized chat would be next to impossible:
Well, that didn't take long. We had just been discussing how the Paulding County School District had suspended a student for taking a photo of packed hallways filled with kids not wearing masks on the first day back to school a week or so ago. While the school mumbled something about the suspension being for using a phone without permission at school, the school also said the quiet part out loud over the intercom when it informed students that any social media activity that made the school look bad would result in "consequences." In case it wasn't already clear, that is blatantly unconstitutional, violating the students' First Amendment rights. In the least shocking news ever, the district has since reversed that suspension.
Every person in Myanmar above the age of 10 has lived part, if not most, of their life under a military dictatorship characterized by an obsession with achieving autonomy from international influences. Before the economic and political reforms of the past decade, Myanmar was one of the most isolated nations in the world. The digital revolution that has reshaped nearly every aspect of human life over the past half-century was something the average Myanmar person had no personal experience with. Recent reforms brought an explosion of high hopes and technological access, and Myanmar underwent a digital leapfrog, with internet access jumping from nearly zero percent in 2015 to over 40 percent in 2020. At 27 years old, I remember living in a Yangon where having a refrigerator was considered high tech, and now there are 10-year-olds making videos on TikTok. Everyone was excited for Myanmar's digital revolution to spur the economic and social changes needed to transform the country from a pariah state into the next economic frontier. Tourists, development aid, and economic investment poured into the country. The cost of SIM cards dropped from around 1,000 US dollars in 2013 to a little over 1 dollar today. This dramatic price drop was paired with a glut of relatively affordable smartphones and phone carriers that provided data packages that made social media platforms like Facebook free, or nearly free, to use. This led to the current situation where about 21 million out of the 22 million people using the internet are on Facebook. Facebook became the main conduit through which people accessed the internet, and now is used for nearly every online activity, from selling livestock and watching porn to reading the news and discussing politics. Then, following the exodus of over 700,000 Rohingya people from Myanmar’s war-torn Rakhine State, Facebook was accused of enabling a genocide. The ongoing civil wars in the country and the state violence against the Rohingya, characterized by the UN as ethnic cleansing with genocidal intent, put a spotlight on the potential for harm brought on by digital connectivity. Given its market dominance, Facebook has faced great scrutiny in Myanmar for the role social media has played in normalizing, promoting, and facilitating violence against minority groups. Facebook was, and continues to be, the favored tool for disseminating hate speech and misinformation against the Rohingya people, Muslims in general, and other marginalized communities. Despite repeated warnings from civil society organizations in the country, Facebook failed to address the new challenges with the urgency and level of resources needed during the Rohingya crisis, and failed to even enforce its own community standards in many cases. To be sure, there have been improvements in recent years, with the social media giant appointing a Myanmar-focused team, expanding their number of Myanmar language content reviewers, adding minority language content reviewers, establishing more regular contact with civil society, and devoting resources and tools focused on limiting disinformation during Myanmar’s upcoming election. The company also removed the accounts of Myanmar military officials and dozens of pages on Facebook and Instagram linked to the military for engaging in "coordinated inauthentic behavior."
The company defines "inauthentic behavior" as "engag[ing] in behaviors designed to enable other violations under our Community Standards," through tactics such as the use of fake accounts and bots. Recognizing the seriousness of this issue, everyone from the EU to telecommunications companies to civil society organizations has poured resources into digital literacy programs, anti-hate-speech campaigns, social media monitoring, and advocacy to try and address this issue. Overall, the focus of much of this programming is on what Myanmar and the people of Myanmar lack—rule of law, laws protecting free speech, digital literacy, knowledge of what constitutes hate speech, and resources to fund and execute the programming that is needed. In the frenzy of the desperate firefighting by organizations on the ground, less attention has been given to larger systemic issues that are contributing to the fire. There is a need to pay greater attention to those coordinated groups that are working to spread conspiracy theories, false information, and hatred to understand who they are, who is funding them, and how their work can be disrupted—and, if necessary, penalized. There is a need to reevaluate how social media platforms are designed in a way that incentivizes and rewards bad behavior. There is also a need to question how much blame we want to assign to social media companies, and whether it is to the overall good to give them the responsibility, and therefore power, to determine what is and isn't acceptable speech. Finally, there is a need to ask ourselves about alternatives we can build, when many governments have proven themselves more than willing to surveil and prosecute netizens under the guise of health, security, and penalizing hate speech. It is dangerous for private, profit-driven multinational corporations to be given the power to draw the line between hate speech and free speech, just as it is dangerous to give that same power to governments, especially in this time of rising ethno-nationalistic sentiments around the globe and the increasing willingness of governments to overtly and covertly gather as much data as possible to use against those they govern. We can see from the ongoing legal proceedings against Myanmar in international courts regarding the Rohingya and other ethnic minorities, and from statements by UN investigative bodies on Myanmar that Facebook has failed to release to them evidence of serious international crimes, that neither company policies nor national laws are enough to ensure safety, justice, and dignity for vulnerable populations. The solution to all this, as unsexy as it sounds, is a multifaceted, multi-stakeholder, long-term effort to build strong legal and cultural institutions that disperses the power and the responsibility to create and maintain safe and inclusive online spaces among governments, individuals, the private sector, and civil society. Aye Min Thant is the Tech for Peace Manager at Phandeeyar, an innovation lab which promotes safer and more inclusive digital spaces in Myanmar. Formerly, she was a Pulitzer Prize-winning journalist who covered business, politics, and ethno-religious conflicts in Myanmar for Reuters. You can follow her on Twitter @ma_ayeminthant. This article was developed as part of a series of papers by the Wikimedia/Yale Law School Initiative on Intermediaries and Information to capture perspectives on the global impacts of online platforms’ content moderation decisions.
You can read all of the articles in the series here, or on their Twitter feed @YaleISP_WIII.
Last year we wrote about what we called the "dumbest gotcha story of the week," involving the music annotation site Genius claiming that Google had "stolen" its lyrics. The only interesting thing about the story is that Genius had tried to effectively watermark its version of the lyrics by using a mix of smart apostrophes and regular apostrophes. However, as we noted, the evidence that Google "copied" Genius just wasn't supported by the facts -- and even if they had copied Genius, it's unclear how that would violate any law. You can read that post for more details, but the simple fact is that a bunch of sites all license lyrics and have permission for them -- and many use a third party such as LyricFind to supply the lyrics. But how those lyrics are created is... however possible. Even as sites "license" lyrics from publishing companies, those companies themselves don't have official transcriptions of their own. So basically lyric databases are created however possible -- including having people jot down what they think lyrics are... or by copying other sites that are doing the same. And there's nothing illegal about any of that. And yet, for reasons that are beyond me, last December, Genius sued both Google and LyricFind over this. As we noted at the time, it was one of the dumbest lawsuits we'd seen in a while, and it would easily fail. And that is exactly what has happened. The lawsuit was removed from NY state court to federal court, and while Genius tried to send it back, the judge not only rejected that request, but she dismissed the entire lawsuit for failure to state a claim (that's legal talk for "wtf are you even suing over, that doesn't violate any law, go home.") There were a bunch of issues that Genius tried to raise, but all of them were pretend issues. As we noted all along, Genius has no copyright interest in the lyrics (indeed, it has to license them too -- and, amusingly, in its early days, songwriters accused Genius of being a "pirate" site for not licensing those lyrics...). And so Genius tried to make a bunch of claims without arguing any copyright interest, but these were all really attempted copyright claims in disguise, and the court rightly pointed out that copyright pre-empts all of them. Breach of contract? Nah, copyright pre-empts that:
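(As an aside on the apostrophe watermark mentioned at the top of this post: the trick is simply that straight and curly apostrophes look nearly identical to readers but survive copy-paste, so the pattern of styles can encode a hidden message, reportedly "red handed" spelled out in Morse code. Below is a minimal, purely illustrative Python sketch of the idea; it is my own toy code, not anything Genius or LyricFind actually runs, and the function names and fixed secret are assumptions for the example.)

```python
# Toy illustration (not Genius's actual code) of an apostrophe-style watermark:
# the *style* of each apostrophe -- straight (') vs. curly (U+2019) -- encodes
# dots and dashes of a Morse-code secret that survives copy-and-paste.

MORSE = {"R": ".-.", "E": ".", "D": "-..", "H": "....", "A": ".-", "N": "-."}

def to_pattern(secret: str) -> str:
    """Flatten a secret word into a dot/dash sequence."""
    return "".join(MORSE[c] for c in secret.upper())

def embed_watermark(lyrics: str, secret: str = "REDHANDED") -> str:
    """Rewrite apostrophes so their styles spell out `secret` in Morse."""
    pattern = to_pattern(secret)
    out, i = [], 0
    for ch in lyrics:
        if ch in ("'", "\u2019"):
            ch = "'" if pattern[i % len(pattern)] == "." else "\u2019"
            i += 1
        out.append(ch)
    return "".join(out)

def matches_watermark(lyrics: str, secret: str = "REDHANDED") -> bool:
    """Check whether a copy's apostrophe styles carry the expected pattern."""
    pattern = to_pattern(secret)
    marks = ["." if ch == "'" else "-" for ch in lyrics if ch in ("'", "\u2019")]
    return bool(marks) and all(
        m == pattern[i % len(pattern)] for i, m in enumerate(marks)
    )
```

The point, of course, is that such a watermark can only prove copying, not ownership: as noted above, even a perfect match wouldn't give Genius a copyright interest in lyrics it doesn't own.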
Working from home can be amazingly convenient but really hard at the same time. To successfully work remotely you need key skills: focus, self-motivation, communication, collaboration, and more. The 2020 Work From Anywhere Bundle can help you make the transition from working in an office to working remotely or for yourself. It's on sale for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
You probably didn't notice it, but there are currently no third-party ads on Techdirt. We pulled them down late last week, after it became impossible to keep them on the site, thanks to some content moderation choices by Google. In some ways, this is yet another example of the impossibility of content moderation at scale. If we didn't know and understand how impossible content moderation at scale is to do well, we might be like The Federalist and pretend that Google's content moderation decisions were based on disagreement with our ideology. That would have allowed us to make up a fake story like the one that is still getting news cycles, thanks to idiots in Congress insisting that Google defunded the Federalist because of its ideological viewpoints. The truth is that Google's AdSense (its third-party ad platform) content moderation just sucks. In those earlier posts about The Federalist's situation, we mentioned that tons of websites deal with those "policy violation" notices from Google all the time. Two weeks ago, it went into overdrive for us: we started receiving policy violation notices at least once a day, and frequently multiple times per day. Every time, the message was the same, telling us we had violated their policies (they don't say which ones) and we had to log in to our "AdSense Policy Center" to find out what the problem was. Every day for the ensuing week and a half (until we pulled the ads down), we would get more of these notices, and every time we'd log in to the Policy Center, we'd get an ever-rotating list of "violations." But there was never much info to explain what the violation was. Sometimes it was "URL not found" (which seems to say more about AdSense's shit crawler than us). Sometimes it was "dangerous and derogatory content." Sometimes it was "shocking content." But that would be about it. One difference, however, was that in the past Google would say that we didn't need to fix those flagged URLs and that they would just stop showing ads on those pages. Which is fine. They don't want their ads appearing there, no problem. But many of these new "policy violations" said they were "must fix" issues. But what that "fix" should be was never explained. Incredibly, this included a non-existent URL (a malformed URL that would just take you to the front page of Techdirt). That was deemed "must fix." Also, somewhat amusingly, the tag page for Google was deemed "dangerous or derogatory" and "must fix": Same with the tag page for "content moderation." I only wish I were joking: Again, what you see there was basically all of the information given to us. How do we "fix" that? Who the fuck knows? Again, I do not think that this is Google targeting us for our views (even when we have been critical of Google or Google's content moderation practices). It just seems to be that content moderation is impossible to do well, and Google is a prime example. Incredibly, this list of problematic URLs would just keep changing. Some would drop off the list with no explanation (even the "must fix" ones). Some new ones would be added. Some would switch between "must fix" and "don't need to fix." No explanation. No record of the "fixes." As an example, on Friday July 31st, I logged in and saw 25 URLs deemed to be policy violations. On Saturday morning I logged in and it was down to 18. No reason. Sunday morning it was at 22.
But Sunday evening it was 27. I tried to reach out to people at AdSense to figure out what the hell we should do, and got nothing useful back. Three other things happened around this time as well. First, on the same day we started receiving these daily (or multiple times daily) policy violation emails, Google also started claiming that our daily emails (which are just snapshots of the blog itself) were phishing attempts, and automatically deleting them from any G Suite user's email account: For users of Gmail (not G Suite) it just moved our newsletters to spam, still claiming they were phishing attempts: Again, the emails don't ask users to do anything or to log in to anything. They're not phishing. They're just an email version of the day's blog posts. We didn't see how these two things (the AdSense violations and the accusations of "phishing") could possibly be connected, so it might just be a coincidence that they started the exact same day -- but, again, who knows? The next thing that happened was that the company we work with to manage the ad flow on our website (and to bring in other sources, beyond Google ads) told us that Google had reached out to them (not us) to say that because of all of the ongoing unfixed "policy" violations, we would be kicked out of AdSense by the end of August. Also, Google told them that we were engaging in "clickspam" by hiding our ads to make them look like regular content, and that needed to be fixed immediately. The problem is -- we don't do that and have never done that. Our ads were always in the right-hand column and clearly called out as ads. Indeed, we pay attention to what other sites do, and we are way, way, way, way more careful than basically every other website on the planet when it comes to not shoving our ads where they might be mistaken as organic content. Finally, we started receiving reports from multiple Techdirt visitors (including those who told us they had purposefully whitelisted Techdirt from their ad blockers) that ads being delivered by Google were causing their computers to run hot. Multiple reports of ads on Techdirt failing to load properly, and causing Techdirt to fail to load properly. And also causing fans to turn on. And, to be honest, that was the last straw for us. We would try to work with Google to understand why our content is so problematic for it, but when Google's products start harming our users and causing a nuisance for them, that's when they've got to go. Given all this, we just decided that we're pulling the ads off the site entirely for the time being -- at least until we can figure out a better situation. This (obviously) represents a revenue hit for us, but the situation had become impossible to deal with. I was wasting so much time the past few weeks trying to figure out what the hell we were supposed to do, as opposed to doing the work I needed to be doing. So, that's it for now. We're looking at other providers out there, but so far, so many of the ones we talk to appear to be sketchy, and we're not doing that either. If anyone knows of any non-sketchy, non-awful advertising partners, please let us know. Or, if you happen to have some excess money and want to just sponsor stuff so we don't even have to worry about regular ads, let us know. Assuming most of you are not in that position, we do have a page of various ways individuals can support us.
We know that times are tough for many, many people right now, but if you happen to be doing okay and can help us replace at least a little of the money we made from ads, that would be greatly appreciated.
As we've noted a few times, not much about the Trump administration's ban of TikTok makes coherent sense. Most of the biggest TikTok pearl clutchers in the GOP and Trump administration have actively opposed things like basic US privacy laws or even improving election security, and were utterly absent from efforts to shore up other privacy and security problems, be it the abuse of cellular location data or our poorly secured telecom infrastructure. It's a bunch of xenophobia, pearl clutching, and performative politics dressed up as serious adult policy that doesn't even get close to fixing any actual problems. And yet, many reporters and internet experts keep parroting the idea that banning TikTok somehow "protects U.S. consumers" or "prevents the Chinese government from obtaining U.S. consumer data." You're to ignore that Americans install millions of Chinese-made "smart" TVs, fridges, and poorly secured IOT gadgets on home and business networks with reckless abandon. Or that international corporations not only sell access to consumer data to any nitwit with a nickel, they often leave it unencrypted in the cloud. Or that the U.S. has no privacy law for the internet era, and corporations routinely see performative wrist slaps for privacy and security incompetence. The idea that Chinese intelligence, with zero scruples and an unlimited budget, "needs" TikTok access to spy on Americans' data in this environment is just silly nonsense. And yet, here we are. It's all even more absurd when you consider the scope and complexity of global adtech markets. As Gizmodo's Shoshana Wodinsky recently explored, international adtech is a complex, unaccountable monster. This orgy of consumer tracking, behavioral data, and "anonymized" (read: not actually anonymous at all) datasets is so complex, even folks that cover the sector have a hard time understanding it. Thinking we can control what data the Chinese government is gleaning from this tangled web -- or that even selling TikTok to Microsoft somehow "fixes" anything -- is an act of hubris in full context:
Georgia governor Brian Kemp -- last seen here trying to turn his own election security problems into a Democrat-led conspiracy -- has just proven he's unable to read the room. The governor can't read the room in his own state, much less the current state of the nation. Less than a month after the killing of George Floyd in Minneapolis triggered nationwide protests against police violence, officers in Atlanta were involved in a controversial killing of a Black man in a fast food restaurant parking lot. The state of the nation is pretty much the same as it is in Georgia. Now is not the time to be offering police officers even more legal protections, considering how much they've abused the ones they already have. Idiotic bills touted by legislators saying stupid things like "blue lives matter" come and go. Mostly they go, since they're either redundant or unworkable. These laws try to turn a person's career choice into an immutable characteristic, converting some of the most powerful people in the nation into a class that deserves protection from the public these officers are sworn to serve. It's now possible to commit a hate crime against a cop in Georgia, thanks to Kemp and his party-line voters.
This is an apple. Some people might try to tell you that this is a banana. They might scream banana, banana, banana. Over and over and over again. They might put BANANA in all caps. You might even start to believe that this is a banana. But it's not. This is an apple.
To design better regulation for the Internet, it is important to understand two things: the first is that today's Internet, despite how much it has evolved, still continues to depend on its original architecture; and the second is that preserving this design is important for drafting regulation that is fit for purpose. On top of this, the Internet invites a certain way of networking – let's call it the Internet way of networking. There are many types of networking out there, but the Internet way ensures interoperability and global reach, operates on building blocks that are agile, while its decentralized management and general purpose further ensure its resilience and flexibility. Rationalizing this, however, can be daunting because the Internet is multifaceted, which makes its regulation complicated. The entire regulatory process involves the reconciliation of a complex mix of technology and social rules that can be incompatible and, in some cases, irreconcilable. Policy makers, therefore, are frequently required to make tough choices, which often manage to strike the desired balance, while, other times, they lead to a series of unintended consequences. Europe's General Data Protection Regulation (GDPR) is a good example. The purpose of the regulation was simple: fix privacy by providing a framework that would allow users to understand how their data is being used, while forcing businesses to alter the way they treat the data of their customers. The GDPR was set to create much-needed standards for privacy on the Internet and, despite continuous enforcement and compliance challenges, this has largely been achieved. But, when it comes to the effect it has had on the Internet, the GDPR has posed some challenges. Almost two months after going into effect, it was reported that more than 1,000 websites were affected, becoming unavailable to European users. And, even now, two years later, fragmentation continues to be an issue. So, what is there to do? How can policy makers strike a balance between addressing social harms online and policies that do not harm the Internet? A starting point is to perform a regulatory impact assessment for the Internet. It is a tested method of policy analysis, intended to assist policy makers in the design, implementation and monitoring of improvements to the regulatory system; it provides the methodology for producing high-quality regulation, which can, in turn, allow for sustainable development, market growth and constant innovation. A regulatory impact assessment constitutes a tool that ensures regulation is proportional (appropriate to the size of the problem it seeks to address), targeted (focused and without causing any unintended consequences), predictable (it creates legal certainty), accountable (in terms of actions and outcomes) and transparent (on how decisions are made). This type of thinking can work to the advantage of the Internet. The Internet is an intricate system of interconnected networks that operates according to certain rules. It consists of a set of fundamental properties that contribute to its flexible and agile character, while ensuring its continuous relevance and constant ability to support emerging technologies; it is self-perpetuating in the sense that it systematically evolves while its foundation remains intact.
Understanding and preserving the idiosyncrasy of the Internet should be key in understanding how best to approach regulation. In general, determining the context, scope and breadth of Internet regulation is important to determine whether regulation is needed and the impact it may have. Asking questions that under normal circumstances policy makers contemplate when seeking to make informed choices is the first step. These include: Does the proposed new rule solve the problem and achieve the desired outcome? Does it balance problem reduction with other concerns, such as costs? Does it result in a fair distribution of the costs and benefits across segments of society? Is it legitimate, credible and trustworthy? But there should be an additional question: Does the regulation create any consequences for the Internet? Actively seeking answers to these questions is vital because regulation is generally risky, and risks arise from acting as well as from not acting. To appreciate this, imagine if the choices made in the early days of the Internet had dictated a heavy regulatory regime for the deployment of advanced telecommunications and information technologies. The Internet would, most certainly, not have been able to evolve the way it has and, equally, the quality of regulation would suffer. In this context, the scope of regulation is important. The fundamental problem with much of the current Internet regulation is that it seeks to fix social problems by interfering with the underlying technology of the Internet. Across a wide range of policymaking, we know that technical fixes alone rarely fix social problems. It is important that governments do not regulate aspects of the Internet that could be seen as compromising network interoperability in order to solve societal problems. This is a "category error" or, more elaborately, a misunderstanding of the technical design and boundaries of the Internet. Such a misunderstanding tends to confuse the salient similarities and differences between the problem and where this problem occurs; it not only fails to tackle the root of the problem but causes damage to the networks we all rely on. Take, for instance, data localization rules, which seek to force data to remain within certain geographical boundaries. Various countries, most recently India, are trying to forcibly localize data, and risk impeding the openness and accessibility of the global Internet. Data will not be able to flow uninterrupted on the basis of network efficiency; rather, special arrangements will need to be put in place in order for that data to stay within the confines of a jurisdiction. The result will be increased barriers to entry, to the detriment of users, businesses and governments seeking to access the Internet. Ultimately, forced data localization makes the Internet less resilient, less global, more costly, and less valuable. This is where a regulatory risk impact analysis can come in handy. Generally, what the introduction of a risk impact analysis does is show how policy makers can make informed choices about which regulatory claims can or cannot possibly be true. This would require a shift in the behavior of policy makers from solely focusing on process to a more performance-oriented and result-based approach. This sounds more difficult than it actually is. Jurisdictions around the world are accustomed to performing regulatory impact assessments, which have successfully been integrated into many governments' policy-making processes for more than 35 years. So, why can't it be part of Internet regulation? Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.
The standard operating procedure for most companies is to freak out about copycat products, and usually to use intellectual property laws to fight them tooth and nail — even at the expense of other aspects of the business that could use a lot more attention. Today, we're talking to the founder of a company that takes a more nuanced, less panicked approach: Dan Vinson is the creator of Monkii Bars, which launched with a Kickstarter that embraced and celebrated people making DIY copies, and he joins us this week to discuss a better way to think about copycats, and the advantages it brings.Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
One of the dumber aspects of press coverage of the TikTok kerfuffle is the lack of broader context. How, exactly, does banning a Chinese-owned teen dancing app solve our security and privacy headaches in a world where apps and services everywhere are collecting most of the same data, if not more? And why the myopic focus on just TikTok when Americans attach millions of totally unsecured Chinese-made "smart" IOT devices to their home and business networks with reckless abandon? If you're going to freak out about U.S. consumer privacy and internet security -- why not focus on actually shoring up overall U.S. consumer privacy and security? Many press outlets and analysts have innately bought into the idea that banning TikTok somehow seriously thwarts the Chinese government's spying efforts. In reality, China's intelligence apparatus, fueled by an unlimited budget, has no shortage of other ways to get far more data, thanks to the United States' lax privacy and security standards. Case in point: last week, in the midst of TikTok hysteria, a report quietly emerged showing that U.S. satellite communications networks have the security of damp cardboard:
Congressional legislators -- apparently caught off guard by one state's revenue stream -- are asking the California Department of Motor Vehicles a $50 million question: why the hell are you selling residents' personal data?
This seems like the sort of thing a court shouldn't need to sort out, but here we are. More specifically, here are two plaintiffs suing over Oakland County, Michigan's forfeiture policy. This isn't civil asset forfeiture -- where property is treated as guilty until proven innocent. This isn't even criminal asset forfeiture -- the seizure of property by the government following a conviction. But this form of forfeiture can be just as abusive as regular civil asset forfeiture. There's no criminal act involved -- real or conjectured. It's the result of a civil violation: the nonpayment of property taxes. And Oakland County, the plaintiffs argue, is performing unconstitutional takings to unjustly enrich itself. It's not that these sorts of things are uncommon. Tax liens are often put on property when tax payments are delinquent. It's that one of these seizures -- and subsequent auction -- was triggered by a delinquent amount that would have required the county to make change from a $10 bill. (via Volokh Conspiracy) This is from the opening of the state Supreme Court's decision [PDF], which shows just how much the county government can profit from these forfeitures.
Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Thankfully, this brazen act of digital impersonation only fooled a few hundred people. But artificial intelligence (AI) is enabling new, more sophisticated forms of digital impersonation. The next big financial crime might involve deepfakes—video or audio clips that use AI to create false depictions of real people. Deepfakes have inspired dread since the term was first coined three years ago. The most widely discussed scenario is a deepfake smear of a candidate on the eve of an election. But while this fear remains hypothetical, another threat is currently emerging with little public notice. Criminals have begun to use deepfakes for fraud, blackmail, and other illicit financial schemes. This should come as no surprise. Deception has always existed in the financial world, and bad actors are adept at employing technology, from ransomware to robo-calls. So how big will this new threat become? Will deepfakes erode truth and trust across the financial system, requiring a major response by the financial industry and government? Or are they just an exotic distraction from more mundane criminal techniques, which are far more prevalent and costly? The truth lies somewhere in between. No form of digital disinformation has managed to create a true financial meltdown, and deepfakes are unlikely to be the first. But as deepfakes become more realistic and easier to produce, they offer powerful new weapons for tech-savvy criminals. Consider the most well-known type of deepfake, a “face-swap” video that transposes one person’s expressions onto someone else’s features. These can make a victim appear to say things she never said. Criminals could share a face-swap video that falsely depicts a CEO making damaging private comments—causing her company’s stock price to fall, while the criminals profit from short sales. At first blush, this scenario is not much different than the feared political deepfake: a false video spreads through social or traditional media to sway mass opinion about a public figure. But in the financial scenario, perpetrators can make money on rapid stock trades even if the video is quickly disproven. Smart criminals will target a CEO already embroiled in some other corporate crisis, who may lack the credibility to refute a clever deepfake. In addition to video, deepfake technology can create lifelike audio mimicry by cloning someone’s voice. Voice cloning is not limited to celebrities or politicians. Last year, a CEO’s cloned voice was used to defraud a British energy company out of $243,000. Financial industry contacts tell me this was not an isolated case. And it shows how deepfakes can cause damage without ever going viral. A deepfake tailored for and sent directly to one person may be the most difficult kind to thwart. AI can generate other forms of synthetic media beyond video and audio. Algorithms can synthesize photos of fictional objects and people, or write bogus text that simulates human writing. Bad actors could combine these two techniques to create authentic-seeming fake social media accounts. With AI-generated profile photos and AI-written posts, the fake accounts could pass as human and earn real followers.
A large network of such accounts could be used to denigrate a company, lowering its stock price due to false perceptions of a grassroots brand backlash. These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total—one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals already came true in the few months since I first imagined them. A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son’s voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving. What can be done? It would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness. Corporate training and controls can help inoculate workers against deepfake phishing calls. Methods of authenticating customers by their voices or faces may need to be re-examined. The financial industry already benefits from robust intelligence sharing and crisis planning for cyber threats; these could be expanded to cover deepfakes. The financial sector must also collaborate with tech platforms, law enforcement agencies, journalists, and others. Many of these groups are already working to counter political deepfakes. But they are not yet as focused on the distinctive ways that deepfakes threaten the financial system. Ultimately, efforts to counter deepfakes should be part of a broader international strategy to secure the financial system against cyber threats, such as the one the Carnegie Endowment is currently developing together with the World Economic Forum. Deepfakes are hardly the first threat of financial deception, and they are far from the biggest. But they are growing and evolving before our eyes. To stay ahead of this emerging challenge, the financial sector should start acting now. Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.
The rise of streaming video competitors is indisputably a good thing. Numerous new streaming alternatives have driven competition to an antiquated cable TV sector that has long been plagued by apathy, high rates, and comically-bad customer service. That's long overdue and a positive thing overall, as streaming customer satisfaction scores suggest. But as the sector matures, there's a looming problem it seems oblivious to. Increasingly, companies are pulling their content off central repositories like Hulu and Netflix and making it exclusive to their own streaming platforms, forcing consumers to subscribe to more and more streaming services if they want to get all the content they're looking for. Want to watch Star Trek: Discovery? You need CBS All Access. Can't miss Stranger Things? You'll need Netflix. The Boys? Amazon Prime. The Handmaid's Tale? Hulu. Friends? AT&T. This week it was Comcast's turn, announcing that the Harry Potter films would now be exclusive to Comcast's new streaming service, Peacock. Of course it's not as simple as all that. The titles will appear and disappear for the next few years, being free for a while... then shifting to a pay-per-view model for a while:
It's pretty well established that offensive hand gestures are covered by the First Amendment, even when it's a cop receiving the finger. Exercising this particular form of free speech has resulted in "contempt of cop" arrests and citations, but there hasn't been a federal court yet willing to recognize a police officer's "right" to remain unoffended. And if the First Amendment is violated by cops in retaliation for flipping the bird, there are going to be some Fourth Amendment violations as well. Somehow the Constitution hasn't gotten around to removing a terrible law from the books in San Diego, California. In this city, it's still a criminal act to say rude things within hearing distance of a cop. (It's actually illegal if anyone can overhear it, but only police officers have the power to turn something mildly offensive into a criminal citation.)
Last year, I co-authored an article with my law school advisor, Prof. Eric Goldman, titled “Why Can’t Internet Companies Stop Awful Content?” In our article, we concluded that the Internet is just a mirror of our society. Unsurprisingly, anti-social behavior exists online just as it does offline. Perhaps, though, the mirror analogy doesn’t go far enough. Rather, the Internet is more like a magnifying glass, constantly refocusing our attention on all the horrible aspects of the human condition. Omegle, the talk-to-random-strangers precursor to Chatroulette, might be that magnifying glass, intensifying our urge to do something about awful content. Unfortunately, in our quest for a solution, we often skip a step, jumping to Section 230—the law that shields websites from liability for third-party content—instead of thinking carefully about the scalable, improvable, and measurable strides to be made through effective content moderation efforts. Smaller companies make for excellent content moderation case studies, especially relatively edgier companies like Omegle. It’s no surprise that Omegle is making a massive comeback. After 100+ days of quarantine, anything that recreates at least a semblance of interaction with humans not under the same roof is absolutely enticing. And that’s just what Omegle offers. For those that are burnt out on monotonous Zoom “coffee chats,” Omegle grants just the right amount of spontaneity and nuanced human connection that we used to enjoy before “social distancing” became a household phrase. Of course, it also offers a whole lot of dicks. When I was a teen, Omegle was a sleepover staple. If you’re unfamiliar, Omegle offers two methods of randomly connecting with strangers on the Internet: text or video. Both are self-explanatory. Text mode pairs two anonymous strangers in a chat room whereas video mode pairs two anonymous strangers via their webcams. Whether you’re on text or video, there’s really no telling what kinds of terrible content—and people—you’ll likely encounter. It’s an inevitable and usual consequence of online anonymity. While the site might satisfy some of our deepest social cravings, it might also expose us to some incredibly unpleasant surprises outside the watered-down and sheltered online experiences provided to us by big tech. Graphic pornography, violent extremism, hate speech, child predators, CSAM, sex trafficking, etc., are all fair game on Omegle; all of which is truly awful content that has always existed in the offline world, now magnified by the unforgiving, unfiltered, use-at-your-own-risk service. Of course, like with any site that exposes us to the harsh realities of the offline world, critics are quick to blame Section 230. Efforts to curtail bad behavior online usually start with calls to amend Section 230. At least to Section 230’s critics, the idea is simple: get rid of Section 230 and the awful content will follow. Their reason, as I understand it, is that websites will then “nerd harder” to eliminate all awful content so they won’t be held liable for it. Some have suggested the same approach for Omegle. Obvious First Amendment constraints aside (because remember, the First Amendment protects a lot of the “lawful but awful content,” like pornography, that exists on Omegle’s service), what would happen to Omegle if Section 230 were repealed? Rather, what exactly is Omegle supposed to do? For starters, Section 230 excludes protection for websites that violate federal criminal law.
So, Omegle would continue to be on the hook if it started to actively facilitate the transmission of illegal content such as child pornography. No change there. But per decisions like Herrick v. Grindr, Dyroff v. Ultimate Software, and Roommates.com, it is well understood that Section 230 crucially protects sites like Omegle that merely facilitate user-to-user communication without materially contributing to the unlawfulness of the third-party content. Hence, even though there exists an unfortunate reality where nine-year-olds might get paired randomly with sexual predators, Omegle doesn’t encourage or materially contribute to that awful reality. So, Omegle is afforded Section 230 protection. Without Section 230, Omegle doesn’t have a lot of options as a site dedicated to connecting strangers on the fly. For example, the site doesn’t even have a reporting mechanism like its big tech counterparts. This is probably for two reasons: (1) The content on Omegle is ephemeral so by the time it’s reported, the victim and the perpetrator have likely moved on and the content has disappeared; and (2) it would be virtually impossible for Omegle to issue suspensions because Omegle users don’t have dedicated accounts. In fact, the only option Omegle has for repeat offenders is a permanent IP ban. Such an option is usually considered so extreme that it’s reserved for only the most heinous offenders. There are a few things Omegle could do to reduce their liability in a 230-less world. They might consider requiring users to have dedicated handles. It’s unclear though whether account creation would truly curb the dissemination of awful content anyway. Perhaps Omegle could act on the less heinous offenders, but banned, suspended, or muted users could always just generate new handles. Plus, where social media users risk losing their content, subscribers, and followers, Omegle users realistically have nothing to lose. So, generating a new handle is relatively trivial, leaving Omegle with the nuclear IP ban. Perhaps Omegle could implement some sort of traditional reporting mechanism. Reporting mechanisms are only effective if the service has the resources to properly respond to and track issues. This means hiring more human moderators to analyze the contextually tricky cases. Additionally, it means hiring more engineers to stand up robust internal tooling to manage reporting queues and to perform some sort of tracking for repeat offenders. For Omegle, implementing a reporting mechanism might just be doing something to do something. For traditional social media companies, a reporting mechanism ensures that violating content is removed and the content provider is appropriately reprimanded. Neither of those goals is particularly relevant to Omegle’s use case. The only goal a reporting mechanism might accomplish is in helping Omegle track pernicious IP addresses. Omegle could set up an internal tracking system that applies strikes to each IP before the address is sanctioned (see the sketch below). But if the pernicious user can just stand up a new IP and continue propagating abuse, the entire purpose of the robust reporting mechanism is moot. Further, reporting mechanisms are great for victimized users that might seek an immediate sense of catharsis after encountering abusive content. But if the victim’s interaction with the abusive content and user is ephemeral and fleeting, the incentive to report is also debatable. All of this is to drive home the point that there is no such thing as a one-size-fits-all approach to content moderation.
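Purely as an illustration of that hypothetical strike-and-ban idea (nothing here reflects Omegle's actual systems; the three-strike threshold, the one-week window, and the class and method names are all assumptions of mine), a minimal sketch might look like this:

```python
# Hypothetical sketch of a strike-based IP tracker, as described above.
# Nothing here is real Omegle code; thresholds and names are assumed.
import time
from collections import defaultdict
from typing import Optional

STRIKE_LIMIT = 3              # assumed: strikes before an IP ban
STRIKE_WINDOW = 7 * 86400     # assumed: strikes older than a week expire


class StrikeTracker:
    def __init__(self) -> None:
        self.strikes = defaultdict(list)  # ip -> timestamps of reports
        self.banned = set()

    def report(self, ip: str, now: Optional[float] = None) -> bool:
        """Record a report against `ip`; return True if the IP is now banned."""
        now = time.time() if now is None else now
        # Expire old strikes, then add the new one.
        self.strikes[ip] = [t for t in self.strikes[ip] if now - t < STRIKE_WINDOW]
        self.strikes[ip].append(now)
        if len(self.strikes[ip]) >= STRIKE_LIMIT:
            self.banned.add(ip)
        return ip in self.banned

    def is_banned(self, ip: str) -> bool:
        return ip in self.banned
```

Even this toy version makes the point above: the only real sanction at the end of the pipeline is the nuclear IP ban, and anyone who can rotate IP addresses walks right around it.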
Even something as simple as giving users an option to report might be completely out of scope depending on the company’s size, resources, bandwidth, and objectives. Another suggestion is that Omegle simply stop allowing children to be paired with sexual predators. This would require Omegle to (1) perform age verification on all of its users, with the major trade-off being privacy—not to mention the obvious fact that it may not even work. Nothing really stops a teen from stealing and uploading their parents’ credit card or license; and (2) require all users to prove they aren’t sexual predators (???)—an impossible (and invasive) task for a tiny Internet company. Theoretically, Omegle could pre-screen all content and users. Such an approach would require an immense team of human content moderators, which is incredibly expensive for a website that has an estimated annual revenue of less than $1 million and less than 10 employees. Plus, it would completely destroy the service’s entire point. The reason Omegle hasn’t been swallowed up by tech incumbents is because it offers an interesting online experience completely unique from Google, Facebook, and Twitter. Pre-screening might dilute that experience. Another extreme solution might be to just strip out anonymity entirely and require all users to register all of their identifying information with the service. The obvious trade-off: most users would probably never return. Clearly, none of these options are productive or realistic for Omegle; all of them are consequences of attacking the awful content problem via Section 230. Without any amendments to Section 230, Omegle has actually taken a few significant steps to effectively improve their service. For example, Omegle now has an 18+ adult “unmoderated section” in which users are first warned about sexual content and required to acknowledge that they’re 18 or older before entering. Additionally, Omegle clarifies that the “regular” video section is monitored and moderated to the best of their abilities. Lastly, Omegle recently included a “College student chat” which verifies students via their .edu addresses. Of course, to use any of Omegle’s features, a user must be 18+ or 13+ with parental permission. The “unmoderated section” is an ingenious example of a “do better” approach for a service that’s strapped for content moderation options. Omegle’s employees likely know that a primary use case of the service is sex. By partitioning the service, Omegle might drastically cut down on the amount of unsolicited sexual content encountered by both adult and minor users of the regular service, without much interruption to the service’s overall value-add. These experiments in mediating the user-to-user experience can only improve from here. Thanks to Section 230, websites like Omegle increasingly pursue such experiments to help their users improve too. But repealing Section 230 leaves sites like Omegle with one option: exit the market. I'm not allergic to conversations about how the market can self-correct these types of services and whether they should be supported by the market at all. Maybe sites like Omegle—that rely on their users to not be awful to each other as a primary method of content moderation—are not suitable for our modern-day online ecosystem. There's a valid conversation to be had within technology policy and Trust and Safety circles about websites like Omegle and whether the social good they provide outweighs the harms they might indirectly cater to. Perhaps sites like Omegle should exit the market.
However, that's a radically different conversation; one that inquires into whether current innovations in content moderation support sites like Omegle, and whether such sites truly have no redeemable qualities worth preserving in the first place. That’s an important conversation; one that shouldn’t involve speculating about Section 230’s adequacy. Jess Miers is a third-year law student at Santa Clara University School of Law and a Legal Policy Specialist at Google. Her scholarship primarily focuses on Section 230 and content moderation. Opinions are her own and do not represent Google.
The Essential 2020 Adobe CC Mastery Bundle will help you hone your design skills to industry-standard level with 25+ hours of hands-on content on Photoshop, Illustrator, InDesign, & Spark. You'll learn how to prepare images for print & online presentations, how to create typographic designs, how to navigate through the InDesign work area, how to publish content from the Spark programs to social media, and more. It's on sale for $40. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Baltimore PD's eye in the sky program continues. First (inadvertently) introduced to the public in 2016, the camera/Cessna system, made by a company called Persistent Surveillance Systems, flew above the city capturing up to 32 square miles of human and vehicle movements using a 192-million-pixel camera.

The only upside to the residents of Baltimore not being informed of this development is that they weren't spending their money on it. It was completely funded by a private donor, Arnold Ventures, LLC. The system, known as "Gorgon Stare" when deployed in war zones by the military, is referred to by the city and PD by the friendlier, if clunkier, name "Aerial Investigation Research Pilot Program."

The second run of this program began earlier this year. The latest take on persistent aerial surveillance survived an early legal challenge by the ACLU. A federal court judge said the system did not violate anyone's Fourth Amendment rights, mainly because of its technical limitations. The system is far from "persistent." The planes -- three of them -- are airborne around 11 hours a day at most and they're almost completely useless at night. They're also mostly useless during bad weather and, in especially inclement weather, unable to get off the ground at all.

So, it may be constitutional and it may have been run past the public the second time around, but is it actually useful? That's something no one seems to know. The initial run in 2016 didn't add much to the Baltimore law enforcement knowledge base, mainly because it involved Baltimore cops and their apparently shoddy work practices.
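To put those specs in perspective, here's a rough back-of-the-envelope calculation -- not from the article or the vendor, just taking the cited figures at face value and assuming uniform coverage and square pixels: 192 million pixels spread over 32 square miles works out to roughly two feet of ground per pixel side, which is consistent with the court's point about the system's technical limitations.

```python
# Rough, illustrative estimate of the aerial system's ground resolution,
# using only the figures cited above (a 192-million-pixel camera covering
# up to 32 square miles). Assumes uniform coverage and square pixels --
# a back-of-the-envelope sketch, not the vendor's actual specification.

SQ_FT_PER_SQ_MILE = 5280 ** 2            # 27,878,400 square feet per square mile
coverage_sq_ft = 32 * SQ_FT_PER_SQ_MILE  # ~892 million square feet
pixels = 192_000_000

sq_ft_per_pixel = coverage_sq_ft / pixels   # ~4.6 sq ft per pixel
ft_per_pixel_side = sq_ft_per_pixel ** 0.5  # ~2.2 ft per pixel side

print(f"~{sq_ft_per_pixel:.1f} sq ft per pixel")
print(f"~{ft_per_pixel_side:.1f} ft per pixel side")
```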
We've noted for years how, despite a lot of pretense to the contrary, the federal government doesn't actually know where broadband is or isn't available. The FCC usually doesn't independently confirm that ISP-provided data is accurate, and the agency declares an entire area "served" with broadband if just one home in a zip code has service. Efforts to fix this problem have historically been undermined by telecom lobbying, since incumbent ISPs aren't keen on further highlighting the profound lack of competition (and high prices) that plague the sector.

In just the latest in a long line of discordant efforts to fix the problem, New York state has announced it will be conducting a new study to determine broadband availability, after a survey found that the federal and state government's existing broadband availability data was largely nonsense. It's a problem that has plagued the U.S. for roughly twenty years, but it is seeing renewed attention given that studies show 42 million Americans (double official FCC estimates) lack access to any broadband whatsoever during a pandemic:
This week, our first place winner on the insightful side is Stephen T. Stone, breaking down Trump's astonishing demand that his government should take a cut of a TikTok sale:
Five Years Ago

This week in 2015, we wrote about how the TPP would override five years of democratic discussion about patents in New Zealand, and then got a look at the latest leak of the agreement, which showed the US fighting hard to permit patent and copyright abuse, opposing provisions in support of the public domain, trying to include rules that would kill any future Aereo clones, and generally making copyright mandatory but public rights voluntary — but then, after missing a key deadline because of the failure to reach an agreement, the whole deal was put in jeopardy.

Ten Years Ago

This week in 2010, Indonesia ordered a ban on all online porn, while the United Arab Emirates and Saudi Arabia announced plans to ban Blackberry usage and lawyers in New Zealand were suggesting total internet bans for repeat copyright infringers. Meanwhile, the Pentagon was freaking out about Wikileaks in a way that was reminiscent in some ways of the RIAA's response to Napster, first demanding the "return" of the digital documents, then taking a total head-in-the-sand approach and banning military personnel from accessing the site. And the FBI was starting its own stupid fight with a different "wiki" — telling Wikipedia that it can't display the FBI logo.

Fifteen Years Ago

This week in 2005, we were watching the rise of the online counterfeit drug market and the beginning of the deflation of the ringtone market bubble. Even back then, the US was already working hard to export the worst of its copyright law to other countries (though plenty of other countries had problems of their own). Meanwhile, one company was sued for letting people download movies and music, but that didn't seem to put a dent in the massive investments that flowed in when its IPO hit.
Editor's Note: Originally, this article was set to run before the article about Crystal Dynamics defending this decision... but somehow that didn't happen. You can read that article here if you like, or if you haven't already, you can read this one first, and recognize that time has no meaning any more, so the linear publishing of articles is no longer necessary... or maybe Mike just screwed things up. One of those.

For anything that isn't first-party content, I will never understand why games sell as console exclusives. Maybe there is math out there that makes it make sense for a game publisher to limit itself to one sliver of the potential market, but somehow I have a hard time believing it. That's all the more the case given that the recent trend has been toward less exclusivity, rather than more. While the PC market is now seeing platform exclusivity emerge, something which makes even less sense than with consoles, game franchises that were once jealously guarded exclusives, such as MLB The Show, are announcing plans to open up to more systems, including PCs.

But it seems the instinct to carve out something exclusive for your system is hard to shake. Or, that's at least the case for Sony, which has managed to retain exclusive rights to the character Spider-Man in the upcoming Marvel's Avengers game.
Summary: The ability to instantly upload recordings and stream live video has made content moderation much more difficult. Uploads to YouTube have surpassed 500 hours of content every minute (as of May 2019), making any form of moderation inadequate.

The same goes for Twitter and Facebook. Facebook's user base exceeds two billion worldwide. Over 500 million tweets are posted to Twitter every day (as of May 2020). Algorithms and human moderators are incapable of catching everything that violates terms of service.

When the unthinkable happens -- as it did on August 26, 2015 -- these two social media services swiftly responded. But even their swift efforts weren't enough. The videos posted by Vester Lee Flanagan, a disgruntled former employee of CBS affiliate WDBJ in Virginia, showed him tracking down a WDBJ journalist and cameraman and shooting them both.

Both platforms removed the videos and deactivated Flanagan's accounts. Twitter's response took only minutes. But the spread of the videos had already begun, leaving moderators to try to track down duplicates before they could be seen and duplicated yet again. Many of these ended up on YouTube, where moderation efforts to contain the spread still left several reuploads intact. This was enough to instigate an FTC complaint against Google, filed by the father of the journalist killed by Flanagan. Google responded by stating it was still removing every copy of the videos it could locate, using a combination of AI and human moderation.

Users of Facebook and Twitter raised a novel complaint in the wake of the shooting, demanding "autoplay" be opt in -- rather than the default setting -- to prevent them from inadvertently viewing disturbing content.

Moderating content as it is created continues to pose challenges for Facebook, Twitter, and YouTube -- all of which allow live-streaming.

Decisions to be made by social media platforms:
Time and time again we've highlighted how, in the modern era, you don't really own the hardware you buy. Music, ebooks, and videos can disappear on a dime without recourse, your game console can lose important features after a purchase, and a wide variety of "smart" tech can quickly become dumb as a rock in the face of company struggles, hacks, or acquisitions, leaving you with pricey paperweights where innovation once stood.

The latest case in point: Google acquired Waterloo, Ontario-based North back in June. For several years, North had been selling AR-capable "smart" glasses dubbed Focals. Generally well reviewed, Focals started at $600, went dramatically up from there, and required you to visit one of two North stores -- either in Brooklyn or Toronto -- to have your head carefully measured using 11 3D modeling cameras. The glasses themselves integrated traditional prescription glasses with smart technology, letting you enjoy a heads-up display and AR notifications directly from your phone.

But with the Google acquisition, North posted a statement to its website, stating the company was forced to make the "difficult decision" to wind down support for Focals as of the end of July, at which point the "smart" tech will become rather dumb:
On February 8, 1996, President Clinton signed into law the Telecommunication Act of 1996. Title V of that act was called the Communications Decency Act, and Section 509 of the CDA was a set of provisions originally introduced by Congressmen Chris Cox and Ron Wyden as the Internet Freedom & Family Empowerment Act. Those provisions were then codified at Section 230 of title 47 of the United States Code. They are now commonly referred to as simply “Section 230.”Section 230 prohibits a “provider or user” of an “interactive computer service” from being “treated as the publisher or speaker” of content “provided by another information content provider.” 47 U.S.C. § 230(c)(1). The courts construed Section 230 as providing broad federal statutory immunity to the providers of online services and platforms from any legal liability for unlawful or tortious content posted on their systems by their users.When it enacted Section 230, Congress specified a few important exceptions to the scope of this statutory immunity. It did not apply to liability for federal crimes or infringing intellectual property rights. And in 2018, President Trump signed into law an additional exception, making Section 230’s liability protections inapplicable to user content related to sex trafficking or the promotion of prostitution.Nevertheless, critics have voiced concerns that Section 230 prevents the government from providing effective legal remedies for what those critics claim are abuses by users of online platforms. Earlier this year, legislation to modify Section 230 was introduced in Congress, and President Trump has, at times, suggested the repeal of Section 230 in its entirety.As critics, politicians, and legal commentators continue to debate the future of Section 230 and its possible repeal, there has arisen a renewed interest in what the potential legal liability of online intermediaries was for the content posted by their users under the common law, before Section 230 was enacted. Thirty years ago, as a relatively young lawyer representing CompuServe, I embarked on a journey to explore that largely uncharted terrain.In the pre-Section 230 world, every operator of an online service had two fundamental questions for their lawyers: (1) what is my liability for stuff my users post on my system that I don’t know about?; and (2) what is my liability for the stuff I know about and decide not to remove (and how much time do I have to make that decision)?The answer to the first question was not difficult to map. In 1990, CompuServe was sued by Cubby, Inc. for an allegedly defamatory article posted on a CompuServe forum by one of its contributors. The article was online only for a day, and CompuServe became aware of its contents only after it had been removed, when it was served with Cubby’s libel lawsuit. Since there was no dispute that CompuServe was unaware of the contents of the article when it was available online in its forum, we argued to the federal district court in New York that CompuServe was no different from any ordinary library, bookstore, or newsstand, which, under both the law of libel and the First Amendment, are not subject to civil or criminal liability for the materials they disseminate to the public if they have no knowledge of the material’s content at the time they disseminate it. 
The court agreed and entered summary judgment for CompuServe, finding that CompuServe had not “published” the alleged libel, which a plaintiff must prove in order to impose liability on a defendant under the common law of libel.Four years later, a state trial court in New York reached a different conclusion in a libel lawsuit brought by Stratton Oakmont against one of CompuServe’s competitors, Prodigy Services Co., based on an allegedly defamatory statement made in one of Prodigy’s online bulletin boards. In that case, the plaintiff argued that Prodigy was different because, unlike CompuServe, Prodigy had marketed itself as using software and real-time monitors to remove material from its service that it felt were inappropriate for a “family-friendly” online service. The trial court agreed and entered a preliminary ruling that, even though there was no evidence that Prodigy was ever actually aware of the alleged libel when it was available on its service, Prodigy should nevertheless be deemed the “publisher” of the statement, because, in the court’s view, “Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards.”The Stratton Oakmont v. Prodigy ruling was as dubious as it was controversial and confusing in the months after it was issued. CompuServe’s general counsel, Kent Stuckey, asked me to address it in the chapter I was writing on defamation for his new legal treatise, Internet and Online Law. Tasked with this scholarly mission in the midst of one of the digital revolution’s most heated legal controversies, I undertook to collect, organize and analyze every reported defamation case and law review commentary in this country that I could find that might bear on the two questions every online service faced: when are we liable for user content we don’t know about and when are we liable for the user content we know about but decide not to remove?With respect to the first question, the answer dictated by the case law for other types of defendants who disseminate defamatory statements by others was fairly clear. As I wrote in my chapter, “[t]wo common principles can be derived from these cases. First, a person is subject to liability as a ‘publisher’ only if he communicates a defamatory statement to another. Second, a person communicates that statement to another if, but only if, he is aware of its content at the time he disseminates it.” Hamilton, “Defamation,” printed as Chapter 2 in Stuckey, Internet & Online Law (Law Journal-Seminars Press 1996), at 2-31 (footnotes omitted).I concluded that the trial court had erred in Stratton Oakmont because it failed to address what the term “publish” means in the common law of libel—to “communicate” a statement to a third party. When an intermediary disseminates material with no knowledge of its content, it does not “communicate” the material it distributes, and therefore does not “publish” it, at least as that term is used in the law of libel. Thus, whether the intermediary asserts the right of “editorial control” over the content provided by others, and the degree of such control the intermediary claims to exercise, are immaterial to the precise legal question at issue: did the defendant “communicate” the statement to another? I wrote:
Another day, another bunch of nonsense about Section 230 of the Communications Decency Act. The Senate Commerce Committee held an FTC oversight hearing yesterday, with all five commissioners attending via video conference (kudos to Commissioner Rebecca Slaughter, who attended with her baby strapped to her -- setting a great example for so many working parents who are struggling to work from home while also managing childcare duties!). Section 230 came up a few times, though I'm perplexed as to why.

Senator Thune, who sponsored the problematic PACT Act that would remove Section 230 immunity for civil actions brought by the federal government, asked FTC Chair Joe Simons a leading question that was basically "wouldn't the PACT Act be great?" and Simons responded, oddly, that 230 was somehow blocking the agency's enforcement actions (which is just not true).
The Complete 2020 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux OS using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $69.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
About five years ago, frustration at John Deere's draconian tractor DRM culminated in a grassroots "right to repair" movement. The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM and the company's EULA prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.

Since then, the right to repair movement has expanded dramatically, with a heavy focus on companies like Apple, Microsoft, and Sony and their attempts to monopolize repair, driving up consumer costs and resulting in greater waste.

It has also extended into the medical arena, where device manufacturers enjoy a monopoly on tools, documentation, and replacement parts, making it a nightmare to get many pieces of medical equipment repaired. That has, unsurprisingly, become even more of a problem during the COVID-19 pandemic due to mass hospitalizations and resource constraints, with medical professionals being forced to use grey market or DIY parts just to get ventilators to work.

Hoping to give the movement a shot of adrenaline, Senator Ron Wyden and Representative Yvette D. Clarke have introduced the Critical Medical Infrastructure Right-to-Repair Act of 2020 (pdf), which would exempt medical equipment owners and "servicers" from liability for copying service materials or breaking DRM if it was done to improve COVID-19 aid. The legislation also pre-empts any agreements between hospitals and equipment manufacturers that prevent hospital employees from working on their own equipment, something that's also become more of a problem during the pandemic.

From a Wyden statement:
A lawsuit against PACER for its long list of wrongs may finally pay off for the many, many people who've subjected themselves to its many indignities. The interface looks and runs like a personal Geocities page, and those who manage to navigate it successfully are on the hook for pretty much every page it generates, including $0.10/page for search results that may not actually give users what they're looking for.

Everything else is $0.10/page too, including filings, orders, and the dockets themselves. Charges are capped at $3.00 per document if it runs past 30 pages, but for the most part, using PACER is like using a library's copier. Infinite copies can be "run off" at PACER at almost no expense, but the system charges users as though they're burning up toner and paper.

Back in 2016, the National Veterans Legal Services Program, along with the National Consumer Law Center and the Alliance for Justice, sued the court system over PACER's fees. The plaintiffs argued PACER's collection and use of fees broke the law governing PACER, which says only "reasonable" fees can be collected to offset the cost of upkeep. Instead, the US court system was using PACER as a piggy bank, spending money on flat screen TVs for jurors and other courtroom upkeep items, rather than dumping the money back into making PACER better, more accessible, and cheaper.

A year later, a federal judge said the case could move forward as a class action representing everyone who believed they'd been overcharged for access. The year after that, the court handed down a decision ruling that PACER was illegally using at least some of the collected fees. The case then took a trip to the Federal Circuit Court of Appeals, with both parties challenging parts of the district court's ruling.

The Appeals Court has come down on the side of PACER users. Here's Josh Gerstein's summary of the decision for Politico:
We had just been talking about the upcoming Marvel's Avengers multi-platform game and its very strange plan to make Spider-Man a PlayStation exclusive character. In that post, I mentioned that I don't think these sorts of exclusive deals, be they for games or characters, make any real sense. Others quoted in the post have actually argued that exclusive characters specifically hurt everyone, including owners of the exclusive platform, since this can only serve to limit the subject of exclusion within the game. But when it came to why this specific deal had been struck, we were left with mere speculation. Was it to build on some kind of PlayStation loyalty? Was it to try to drive more PlayStation purchases? Was it some kind of Sony licensing thing?

Well, we have now gotten from the head of the publishing studio an... I don't know... answer? That seems to be what was attempted, at least, but I'll let you all see for yourselves, if you can make out what the actual fuck is going on here. The co-leader of Crystal Dynamics gave an interview to ComicBook and touched on the subject.
If you only read one qualified immunity decision this year, make it this one. (At least until something better comes along. But this one will be hard to top.) [h/t MagentaRocks]

The decision [PDF] -- written by Judge Carlton W. Reeves for the Southern District of Mississippi -- deals with the abuse of a Black man by a white cop. Fortunately, the man lived to sue. Unfortunately, Supreme Court precedent means the officer will not be punished. But the opening of the opinion is unforgettable. It's a long recounting of the injustices perpetrated on Black people by white law enforcement officers.
A federal judge has happily dismissed one of Devin Nunes' many SLAPP suits. This isn't much of a surprise given what the judge had said back in May regarding Nunes' Iowa-based SLAPP suit (reminder: Iowa has no anti-SLAPP law) against Esquire Magazine and reporter Ryan Lizza. The lawsuit was over this article that Devin Nunes really, really doesn't want you to read: Devin Nunes’s Family Farm Is Hiding a Politically Explosive Secret. Reading that will make Rep. Devin Nunes very, very sad.

Back in May, the judge made it clear that he didn't think there was much of a case here, but gave Nunes a chance to try to save the lawsuit. As you can already tell, his lawyer, Steven Biss, has come up empty in his attempt. The court easily dismisses the case with prejudice. First, the judge goes through the various statements that Nunes/Biss claim are defamatory and says "lol, no, none of those are defamatory."
The Python 3 Complete Masterclass Bundle has 7 courses to help you hone your Python skills. You'll learn how to automate data analysis, do data visualization with Bokeh, test basic scripts, handle network automation, and more. It's on sale for $30.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Forget banning TikTok: the Trump State Department just suggested it wants to basically ban China from the internet. Rather than promoting an open internet and the concept of openness, it appears that under this administration we're slamming the gates shut and setting up a Great American Firewall. Under the guise of what it calls the Clean Network to Safeguard America, last night Secretary of State Mike Pompeo announced a program full of vague statements that could, in practice, fragment the internet.

This is incredibly disappointing on multiple levels. While other countries -- especially China, but also Iran and Russia -- have created their own fragmented internets, the US used to stand for an open internet across the globe. Indeed, for whatever complaints we had about the State Department during the Obama administration (and we had many), its commitment to an open internet was very strong and meaningful. That's clearly now gone. The "Clean Network to Safeguard America" consists of five programs that can be summed up as "fuck you, China."
We've been noting for a few weeks that much of the hysteria surrounding TikTok is kind of dumb. For one, banning TikTok doesn't really do much to thwart Chinese spying, given that our privacy and security incompetence leaves us vulnerable on countless fronts. Most of the folks doing the heaviest pearl clutching over TikTok have opposed efforts at any meaningful internet privacy rules, have opposed funding election security reform, and have been utterly absent or apathetic in the quest for better security and privacy practices overall (the SS7 flaw, cellular location data scandals, etc.).

Even the idea that banning TikTok meaningfully thwarts Chinese spying -- given the country's total lack of scruples, bottomless hacking budget, and our own security and privacy incompetence (the IOT comes quickly to mind) -- is fairly laughable. Banning TikTok to thwart Chinese spying is kind of like spitting at a thunderstorm in the hopes of preventing rain. Genuine privacy and security reform starts by actually engaging in serious privacy and security reform, not (waves in the general direction of Trump's bizarre, extortionist TikTok agenda) whatever the hell this is supposed to be.

I see the entire TikTok saga as little more than bumbling, performative nonsense by wholly unserious people more interested in money, politics, leverage, and power than in privacy or national security. Case in point: desperate to create the idea that TikTok is a serious threat, a new document leak reveals that the Department of Homeland Security has spent a good chunk of this year circulating the claim that a nineteen-year-old girl was somehow "training terrorists" via a comedy video she posted to TikTok.

According to Mainer, the video in question was sent to police departments across Maine by the Maine Information and Analysis Center (MIAC), part of the DHS network of so-called "Fusion Centers" tasked with sharing and distributing information about "potential terrorist threats." The problem: when you dig through the teen in question's TikTok posts, it's abundantly clear after about four minutes of watching that she's not a threat. The tweet itself appears to have been deleted, but it too (duh) wasn't anything remotely resembling a genuine terrorist threat or security risk:
The French anti-piracy framework known as Hadopi began as tragedy and soon turned into farce. It was tragic that so much energy was wasted on putting together a system that was designed to throw ordinary users off the Internet -- the infamous "three strikes and you're out" approach -- rather than encouraging better legal offerings. Four years after the Hadopi system was created in 2009, it descended into farce when the French government struck down the signature three strikes punishment because it had failed to bring the promised benefits to the copyright world. Indeed, Hadopi had failed to do anything much: its first and only suspension was suspended, and a detailed study of the three strikes approach showed it was a failure from just about every viewpoint. Nonetheless, Hadopi has staggered on, sending out its largely ignored warnings to people for allegedly downloading unauthorized copies of material, and imposing a few fines on those unlucky enough to get caught repeatedly.As TorrentFreak reports, Hadopi has published its annual report, which contains some fascinating details of what exactly it has achieved during the ten years of its existence. In 2019, the copyright industry referred 9 million cases to Hadopi for further investigation, down from 14 million the year before. However, referral does not mean a warning was necessarily sent. In fact, since 2010, Hadopi has only sent out 12.7 million warnings in total, which means that most people accused of piracy don't even see a warning.Those figures are a little abstract; what's important is how effective Hadopi has been, and whether the entire project has been worth all the time and money it has consumed. Figures put together by Next INpact, quoted by TorrentFreak, indicate that during the decade of its existence, Hadopi has imposed the grand sum of €87,000 in fines, but cost French taxpayers nearly a thousand times more -- €82 million. Against that background of staggering inefficiency and inefficacy, the following words in the introduction to Hadopi's annual report (pdf), written by the organization's president, Denis Rapone, ring rather hollow:
There are many ways to respond to a cease and desist notice over trademark rights. The most common response is probably fear-based capitulation. After all, trademark bullying works for a reason, and that reason is that large companies have access to large legal war chests while smaller companies usually just run away from their own rights. Another response is an aggressive defense against the bullying. And, finally, every once in a while you get a response so snarky in tone that it probably registers on the Richter scale, somehow.

The story of how a law firm called Southtown Moxie responded to a C&D from a (maybe?) financial services firm called Financial Moxie is of the snark variety. But first, some background.
Summary:Though social media networks take a wide variety of evolving approaches to their content policies, most have long maintained relatively broad bans on nudity and sexual content, and have heavily employed automated takedown systems to enforce these bans. Many controversies have arisen from this, leading some networks to adopt exceptions in recent years: Facebook now allows images of breastfeeding, child-birth, post-mastectomy scars, and post-gender-reassignment surgery photos, while Facebook-owned Instagram is still developing its exception for nudity in artistic works. However, even with exceptions in place, the heavy reliance on imperfect automated filters can obstruct political and social conversations, and block the sharing of relevant news reports.One such instance occurred on June 11, 2020 following controversial comments by Australian Prime Minister Scott Morrison, who stated in a radio interview that “there was no slavery in Australia”. This sparked widespread condemnation and rebuttals from both the public and the press, pointing to the long history of enslavement of Australian Aboriginals and Pacific Islanders in the country. One Australian Facebook user posted a late 19th century photo from the state library of Western Australia, depicting Aboriginal men chained together by their necks, along with a statement:
As the coronavirus pandemic continues, nobody really knows what's going to happen — especially if kids start going back to school. Statistical models of the possibilities abound, but this week we're joined by some people who are taking a different approach: John Cordier and Don Burke are the founders of Epistemix, which is using a new agent-based modeling approach to figure out what the future of the pandemic might look like.Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Orleans Parish District Attorney Leon Cannizzaro continues to get himself in legal hot water. Back in 2017, New Orleans journalistic outlet The Lens uncovered his office's use of fake subpoenas to coerce witnesses and crime victims into showing up to provide testimony and make statements.The documents weren't real. They had never been approved by a judge. But they still had the same threat of fines or imprisonment printed on them. Just like the real ones. But these threats were also fake -- no judge had given the office permission to lock these witnesses/victims up.Once this practice was exposed, the lawsuits began. The DA's office was sued multiple times by multiple plaintiffs. One suit -- filed by the MacArthur Justice Center -- demanded copies of every bogus subpoena issued by the DA's office. Another -- filed by the ACLU -- sought the names of every DA's office attorney who'd signed or sent one of these bogus subpoenas.Yet another lawsuit targeted the DA's office and the DA directly for violating the law and citizens' rights by issuing fake subpoenas. That one is still pending but DA Cannizzaro and his attorneys were denied immunity by the Fifth Circuit Court of Appeals, making it far more likely someone will be held personally responsible for cranking out fake legal paperwork.The MacArthur Center lawsuit continues. And it's more bad news for the DA, which has spent nearly a half-decade dodging the Center's public records requests.
Every minute, more than 500 hours of video are uploaded to YouTube, 350,000 tweets are sent, and 510,000 comments are posted on Facebook.Managing and curating this fire hose of content is an enormous task, and one which grants the platforms enormous power over the contours of online speech. This includes not just decisions around whether a particular post should be deleted, but also more minute and subtle interventions that determine its virality. From deciding how far to allow quack ideas about COVID-19 to take root, to the degree of flexibility that is granted to the President of the United States to break the rules, content moderation raises difficult challenges that lie at the core of debates around freedom of expression.But while plenty of ink has been spilled on the impact of social media on America’s democracy, these decisions can have an even greater impact around the world. This is particularly true in places where access to traditional media is limited, giving the platforms a virtual monopoly in shaping the public discourse. A platform which fails to take action against hate speech might find itself instrumental in triggering a local pogrom, or even genocide. A platform which acts too aggressively to remove suspected “terrorist propaganda” may find itself destroying evidence of war crimes.Platforms’ power over the public discourse is partly the result of a conscious decision by global governments to outsource online moderation functions to these private sector actors. Around the world, governments are making increasingly aggressive demands for platforms to police content which they find objectionable. The targeted material can range from risqué photos of the King of Thailand, to material deemed to insult Turkey’s founding president. In some instances, these requests are grounded in local legal standards, placing platforms in the difficult position of having to decide how to enforce a law from Pakistan, for example, which would be manifestly unconstitutional in the United States.In most instances, however, moderation decisions are not based on any legal standard at all, but on the platforms’ own privately drafted community guidelines, which are notoriously vague and difficult to understand. All of this leads to a critical lack of accountability in the mechanisms which govern freedom of expression online. And while the perceived opacity, inconsistency and hypocrisy of online content moderation structures may seem frustrating to Americans, for users in the developing world it is vastly worse.Nearly all of the biggest platforms are based in the United States. This means not only that their decision-makers are more accessible and receptive to their American user-base than they are to frustrated netizens in Myanmar or Uganda, but also that their global policies are still heavily influenced by American cultural norms, particularly the First Amendment.Even though the biggest platforms have made efforts to globalize their operations, there is still a massive imbalance in the ability of journalists, human rights activists, and other vulnerable communities to get through to the U.S.-based staff who decide what they can and cannot say. 
When platforms do branch out globally, they tend to recruit staff who are connected to existing power structures, rather than those who depend on the platforms as a lifeline away from repressive restrictions on speech.

For example, the pressure to crack down on “terrorist content” inevitably leads to collateral damage against journalism or legitimate political speech, particularly in the Arab world. In setting this calculus, governments and ex-government officials are vastly more likely to have a seat at the table than journalists or human rights activists. Likewise, the Israeli government has an easier time communicating its wants and needs to Facebook than, say, Palestinian journalists and NGOs.

None of this is meant to minimize the scope and scale of the challenge that the platforms face. It is not easy to develop and enforce content policies which account for the wildly different needs of their global user base. Platforms generally aim to provide everyone with an approximately identical experience, including similar expectations with regard to the boundaries of permitted speech. There is a clear tension between this goal and the conflicting legal, cultural and moral standards in force across the many countries where they operate.

But the importance and weight of these decisions demands that platforms get this balancing right, and develop and enforce policies which adequately reflect their role at the heart of political debates from Russia to South Africa. Even as the platforms have grown and spread around the world, the center of gravity of these debates continues to revolve around D.C. and San Francisco.

This is the first in a series of articles developed by the Wikimedia/Yale Law School Initiative on Intermediaries and Information appearing here at Techdirt Policy Greenhouse and elsewhere around the internet—intended to bridge the divide between the ongoing policy debates around content moderation, and the people who are most impacted by them, particularly across the global south. The authors are academics, civil society activists and journalists whose work lies on the sharp edge of content decisions. In asking for their contributions, we offered them a relatively free hand to prioritize the issues they saw as the most serious and important with regard to content moderation, and asked them to point to areas where improvement was needed, particularly with regard to the moderation process, community engagement, and transparency.

The issues that they flag include a common frustration with the distant and opaque nature of platforms’ decision-making processes, a desire for platforms to work towards a better understanding of the local socio-cultural dynamics underlying online discourse, and a feeling that platforms’ approach to moderation often does not reflect the importance of their role in facilitating the exercise of core human rights.
Although the different voices each offer a unique perspective, they paint a common picture of how platforms’ decision making impacts their lives, and of the need to do better, in line with the power that platforms have in defining the contours of global speech.

Ultimately, our hope with this project is to shed light on the impacts of platforms’ decisions around the world, and to provide guidance on how social media platforms might do a better job of developing and applying moderation structures which reflect the needs and values of their diverse global users.

Michael Karanicolas is a Resident Fellow at Yale Law School, where he leads the Wikimedia Initiative on Intermediaries and Information as part of the Information Society Project. You can find him on Twitter at @M_Karanicolas.
The Ultimate Leadership and Stress Management Bundle has 9 courses to help you develop the tools you need to lead and empower your team. Courses focus on interpersonal skills, remote team management, time management and stress management. It's on sale for $40.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.