Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2026-01-14 01:47
Daily Deal: Taskolly Project Manager
Taskolly is an easy, flexible, and visual way to manage your projects and organize anything. It's software that will help you and your team manage work and tasks so you can increase your productivity. Easily plan, collaborate, organize, and deliver projects of all sizes, on time, by using a single project planning software equipped with all of the right tools, all in one place. There are three tiers on sale: Pro Plan (5 users) for $39, Business Plan (10 users) for $59, and Enterprise Plan (unlimited users) for $149.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Massachusetts Top Court Says Cops Need Warrants To Engage In Long-Term Video Surveillance Of People's Houses
Is a police camera aimed at a publicly-viewable area constitutional? That's a question courts have had to answer periodically. In most cases, the answer appears to be "no." Long-term surveillance -- even of a publicly-viewable area -- is a government intrusion into private citizens' lives. This sort of intrusion requires a warrant and sufficient probable cause.

A ruling by the Massachusetts Supreme Judicial Court doesn't quite reach the Fourth Amendment but does find that seven months of surveillance by utility pole-mounted cameras violated the state's constitution. The long-term surveillance of two residences resulted in multiple motions to suppress by the defendants. None of these had been granted, but the SJC has reversed the lower court's denial of the suppression attempts. (via FourthAmendment.com)

Here's the crucial part of the ruling [PDF], which notes the court isn't going to go federal with this, leaving the Fourth Amendment question open.
Boys And Girls Club Backtracks After Folks Ask Why It's Helping A Cable Monopoly Lobby The FCC
Last month we noted how the Boys and Girls Club was one of several organizations cable giant Charter (Spectrum) was using to lobby the FCC in a bid to kill off merger conditions affixed to its 2015 merger with Time Warner Cable. Many of those conditions actively protect consumers from monopoly price gouging (a seven-year temporary moratorium on arbitrary and unnecessary usage caps, for example). Other conditions worked to expand broadband into less affluent areas. Despite the conditions actually helping, you know, boys and girls... the club's letter opposed them.

In a letter to the FCC, the Boys and Girls Club insisted that a recent $5,000 donation by Charter to the organization helped it weather the COVID-19 storm, and that "lifting these conditions will level the playing field for Charter while having zero impact on the online video marketplace." But after activist and reporter Phil Dampier pointed out that wasn't true (garnering local press attention), both the Boys and Girls Club and Charter appear to have quickly pivoted to damage control mode.

In a statement to a Rochester, New York NBC affiliate, the Club acknowledges that after getting a big donation they signed off on a letter to the FCC that was written by Charter -- without reading it:
Consumer Reports Study Shows Many 'Smart' Doorbells Are Dumb, Lack Basic Security
Like most internet of broken things products, we've noted how "smart" devices quite often aren't all that smart. More than a few times we've written about smart lock consumers getting locked out of their own homes without much recourse. Other times we've noted how the devices simply aren't that secure, with one study finding that 12 of the 16 smart locks it tested could be relatively easily hacked thanks to flimsy security standards, a defining trait of many internet of broken things devices.

"Smart" doorbells aren't much better. A new Consumer Reports study examined 24 popular smart doorbell brands, and found substantial security problems with at least five of the models. Many of these flaws exposed user account information, WiFi network information, or, in some cases, even user passwords. Consumer Reports avoids getting too specific so as not to advertise the flaws while vendors try to fix them:
Documents Show Law Enforcement Agencies Are Still Throwing Tax Dollars At Junk Science
Recently, 269 gigabytes of internal law enforcement documents were liberated by hacker collective Anonymous -- and released by transparency activists Distributed Denial of Secrets (DDoSecrets). The trove contained plenty of sensitive law enforcement data, but also a lot of stuff law enforcement considers "sensitive" just because it doesn't want to let the public know what it's been spending their tax dollars on.

The documents highlighted in this report by Jordan Smith of The Intercept show law enforcement agencies are spending thousands of dollars to maximize the Dunning-Kruger effect. People are still peddling junk science and discredited techniques to law enforcement agencies, and We the People are picking up the tab.
Content Moderation And Human Nature
It should go without saying that communication technologies don't conjure up unfathomable evils all by themselves. They are a convenience-enhancer, a conduit, and a magnifying lens amplifying something that's already there: our deeply flawed humanity. Try as we might to tame it (and boy have we tried), human nature will always rear its ugly head. Debates about governing these technologies should start by making the inherent tradeoffs more explicit.

Institutions

First, a little philosophizing. From the social contract onwards, a significant amount of resources has been allocated to attempting to subdue human nature's predilection for self-preservation at all costs. Modern society is geared towards improving the human condition by striving to unlearn — or at least overpower — our more primitive responses.

One such attempt is the creation of institutions, with norms, rules, cultures and, on paper, inherently stronger principles than those rooted deep inside people.

It's difficult to find ideologies that don't allow for some need for institutions. Even the most ardent of free market capitalists acquiesce to the — limited, in their mindset — benefits of certain institutions. Beyond order and a sense of impartiality, institutions help minimize humans' unchecked power in consequential choices that can impact wider society.

One ideal posits that institutions (corporations, parties, governments) given unfettered control over society could rid us of the aspects of our humanity that we've so intently tried to escape, bringing forth prosperity, equality, innovation, and progress. The fundamental flaw in that reasoning is that institutions are still intrinsically connected to humanity; they are created, implemented, and staffed by fallible human beings.

However strict the boundaries in which humans are expected to operate, the potential for partial or even total capture is very high. The boundaries are rarely entirely solid, and even if they were, humans always have the option to not comply. Bucking the system is not just an anomaly; it's revered in a large portion of non-totalitarian regimes as a sign of independence, of strong individuality, and as a characteristic of those lauded as mavericks.

The power of institutional norms tasked with guarding against the worst of what humans can offer has proven useless when challenged by people for whom self-preservation is paramount. A current and facile example is the rise to power of Donald Trump and his relentless destruction of society-defining unwritten rules.

Even without challenging the institution, a turn towards self-indulgence is easily achievable, forging a path to a reshaping of the institution in its own image. The most obvious example is that of communism, wherein the lofty goal of equality is operationalized through a party-state apparatus meant to ostensibly distribute the spoils of society's labor equally. As history has shown, this is contingent on the sadly unlikely situation wherein all those populating institutions are genuinely altruistic. Invariably, the best-case scenario dissipates, if it ever materialized, and inequality deepens — the opposite of the desired goal.

This is not a tacit endorsement of a rule-less, institution-less dystopia simply because rules and institutions are not adept at a practically impossible task.
Instead, this should be read as a cautionary tale against overextending critical aspects of society and treating them as a panacea, rather than as a suitable and mostly successful palliative.

Artificial Intelligence

Armed with the continuous failure of institutions to overcome human nature, you'd think we would stop trying to remove our imperfect selves from the equation.

But what we've seen for more than a decade now has been technology that directly and distinctly promises to remove our worst impulses, if not humans entirely, from thinking, acting, or doing practically anything of consequence. AI, the ultimate and literal deus ex machina, is advertised as a solution for a large number of much smaller concerns. Fundamentally, its solution to these problems is ostensibly removing the human element.

Years of research, experiments, blunders, mistakes and downright evil deeds have led us to safely conclude that artificial intelligence is about as successful at eliminating the imperfect human as the "you wouldn't steal a car" anti-piracy campaign was at stopping copyright infringement. This is not to denigrate the important and beneficial work scientists and engineers have put into building intelligent automation tasked with solving complex problems.

Technology, and artificial intelligence in particular, is created, run and maintained by human beings with perspectives, goals, and inherent biases. Just like institutions, once a glimpse of positive change or success is evident, we extrapolate it far beyond its limits and task it with the unachievable and unenviable goal of fixing humanity — by removing it from the equation.

Platforms

Communication technology is not directly tasked with solving society; it is simply meant as a tool to connect us all. Much like AI, it has seemingly elegant solutions for messy problems. It's easy to see that thanks to tech platforms, be they bulletin boards or TikTok, distance becomes a trivial factor in maintaining connection. Community can be built and fostered online, otherwise marginalized voices can be heard, and businesses can be set up and grow digitally. Even loneliness can be alleviated.

With such a slew of real and potential benefits, it's no wonder that we started to ascribe to these technologies increasingly consequential roles for society; roles they were never built for and that are far beyond their technical and ethical capabilities.

The Arab Spring in the early 2010s wasn't just a liberation movement by oppressed and energized populations. It was also an opportunity for free PR for the now tech giants Twitter and Facebook, as various outlets and pundits branded revolutions with their names. It didn't help that CEOs and tech executives seized on this narrative and, in typical Silicon Valley fashion, took to making promises akin to a politician trying to get elected.

When you set the bar that high, expectations understandably follow. The aura of tech solutionism presents such earth-shattering advancements as ordinary.

Nearly everyone can picture the potential good these technologies can do for society. And while we may all believe in that potential, the reality is that, so far, communication technologies have mostly provided convenience. Sometimes this convenience is in fact life-saving, but mostly it's just an added benefit.

Convenience doesn't alter our core. It doesn't magically make us better humans or create entirely different societies. It simply lifts a few barriers from our path.
This article may be seen as an attempt to minimize the perceived role of technology in society, in order to subsequently deny it and its makers any blame for how society uses it. But that is not what I am arguing.

An honest debate about responsibility has to fundamentally start with a clear understanding of the actual task something accomplishes, the perceived task others attribute to it, and its societal and historical context. A technology that provides convenience should not be fundamental to the functioning of a society. Yet convenience can easily become so commonplace that it ceases to be an added benefit and becomes an integral part of life, where the prospect of it being taken away is met with screams of bloody murder.

Responsibility has to be assigned to the makers, maintainers and users of communication technology, by examining which barriers are being lifted and why. There is plenty of responsibility to be had there, and I am involved in a couple of projects that try to untangle this complex mess. However, these platforms are not the reason for the negative parts of life; they are merely the conduit.

Yes, a sentient conduit can tighten or loosen its grip, divert, amplify, or temporarily block messages, but it isn't the originator of those messages, or of the intent behind them. It can surely be extremely inviting for messages of hate and division, maybe because of business models, maybe because of engineering decisions, or maybe simply because growth and scale never actually happened in a proper way. But that hate and division is endemic to human nature, and to assume that platforms can do what institutions have persistently failed to do, namely entirely eradicate it, is nonsensical.

Regulation

It is clear that platforms, having reached the size and ubiquity that they have, require updated and smart regulations in order to properly balance their benefits and risks. But the push (and counter-push) to regulate has to start from a perspective that understands both fundamental leaps: platforms are to human nature what Section 230 (or any other national-level intermediary liability law) is to the First Amendment (or any national-level text that inscribes the social consensus on free speech).

If your issue is with hate and hate speech, the main things you have to contend with are human nature and the First Amendment, not just the platforms and Section 230. Without a doubt, both the platforms and Section 230 are choices and explicit actions built on top of the other two, and are not fundamentally the only or best form of what they could be.

But a lot of the issues that bubble up within the content moderation and intermediary liability space come from a concern over the boundaries. That concern is entirely related to the broader contexts rather than the platforms or the specific legislation.

Regulating platforms has to start from the understanding that tradeoffs, most of which are cultural in nature, are inevitable. To be clear: there is no way to completely stop evil from happening on these platforms without making them useless. If we were to simply ignore hate speech, we'd eliminate convenience and in some instances invalidate the very existence of these platforms.
That would not be an issue if these platforms were still seen as simple conveyors of convenience, but they are currently seen as much more than that.

Tech executives and CEOs have moved into the fascinating space wherein they have to protect their market power to assuage their shareholders, treat their products as mind-meltingly amazing to gain and keep users, yet imply their role in society is transient and insignificant in order to mollify policy-makers, all at the same time.

The convenience afforded by these technologies is allowing nefarious actors to cause substantial harm to a substantial number of people. Some users get death threats, or even have their lives end tragically because of interactions on these platforms. Others have their most private information or documents exposed, or experience sexual abuse or trauma in a variety of ways.

Unfortunately, these things happen in the offline world as well, and they are fundamentally predicated on the regulatory/institutional context and the tools that allow them to manifest. The tools are not off the hook. Their propensity not to minimize harm, online and off, is due for important conversations. But they are not the cause. They are the conduit.

Thus, the ultimate goal of "platforms existing without hate or violence" is, very sadly, not realistic. Neither are tradeoffs such as being OK with stripping fundamental rights in exchange for a safer environment, or being OK with some people suffering immense trauma and pain simply because one believes in the concept of open speech.

Maybe the solution is to not have these platforms at all, or to ask them to change substantially. Or maybe it's to calibrate our expectations, or, maybe yet, to address the underlying issues in our society. Once we see what the boundaries truly are, any debate becomes infinitely more productive.

This article is not advancing any new or groundbreaking ideas. What it does is identify crucial and seemingly misunderstood pieces of the subtext and spell them out. Sadly, the fact that these more or less evident issues needed to be said in plain text should be the biggest take-away.

As a qualitative researcher, I learned that there is no way to "de-bias" my work. Trying to remove myself from the equation results in a bland "view from nowhere" that is ignorant of the underlying power dynamics and inherent mechanisms of whatever I am studying. However, that doesn't mean we take off our glasses when trying to see, for fear of the glasses influencing what we see, because that would actually make us blind. We remedy the problem by acknowledging our glasses as well.

A communication platform (company, tech, product) that doesn't have inherent biases is impossible. But that shouldn't mean we can't try to ask it to be better, whether through regulation, collaboration or hostile action. We just have to be cognizant of the place we're standing when asking, the context, the potential consequences and, as this piece hopefully shows, what it can't actually do.

The conversation surrounding platform governance would benefit immensely from these tradeoffs being made explicit. It would certainly dial down the rhetoric and (genuine) visceral attitudes towards debate, as it would force those directly involved or invested in one outcome to carefully assess the context and general tradeoffs.

David Morar, PhD, is an academic with the mind of a practitioner, currently a Fellow at the Digital Interests Lab and a Visiting Scholar at GWU's Elliott School of International Affairs.
Appeals Court: City Employee's Horrific Facebook Posts About Tamir Rice Shooting Were Likely Protected Speech
Just your periodic reminder that the First Amendment protects some pretty hideous speech. And it does so even when uttered by public servants. Caveats apply, but the Sixth Circuit Court of Appeals [PDF] has overturned a lower court's dismissal of a lawsuit brought by a Cleveland EMS captain, who made the following comment several months after Cleveland police officers killed 12-year-old Tamir Rice as he played with a toy gun in a local park.
Daily Deal: The All-In-One Mastering Organization Bundle
The All-In-One Mastering Organization Bundle has 5 courses to help you become more organized and efficient. You'll learn how to organize all your digital files into a single inbox-based system, how to organize your ideas into a hierarchy, how to categorize each object in your home/apartment/office/vehicle into one of the categories from the "One System Framework," and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Secret Service Latest To Use Data Brokers To Dodge Warrant Requirements For Cell Site Location Data
Another federal law enforcement agency has figured out a way to dodge warrant requirements for historical cell site location data. The Supreme Court's Carpenter decision said these records were covered by the Fourth Amendment. But rather than comply with the ruling by approaching service providers with warrants, agencies like CBP and ICE are buying location data in bulk from private companies that collect it.

These agencies argue they aren't violating the Constitution because the data is "pseudonymized" and doesn't specifically target any single person. But even cops using "reverse" warrants are still using warrants to gather this data. Federal agencies apparently can't be bothered with this nicety, preferring to collect information in bulk and work backwards to whatever it is they're looking for.

The Secret Service is the latest federal agency to buy location data from Locate X -- one of the companies already providing cell site location data to CBP and ICE. Joseph Cox has the details for Motherboard.
VoLTE Flaw Lets A Hacker Spy On Encrypted Communications For A Measly $7,000
As we've noted, much of the hysteria surrounding TikTok isn't based on anything close to consistent outrage. As in, many of the folks freaking out about a teen dancing app were nowhere to be found when U.S. wireless carriers were found to be selling access to your location data to any random idiot. Most of the folks pearl clutching about TikTok have opposed election security funding or even the most basic of privacy rules. The SS7 flaw that makes most wireless networks vulnerable to eavesdropping? The lack of any security or privacy safeguards in the internet of things (IOT) space?

Which is all a long way of saying: if you're going to lose sleep over TikTok, you'll be shocked to learn there's an ocean of issues that folks are paying absolutely no attention to. Or, to put it another way, TikTok is probably the very least of a long list of problems related to keeping U.S. data secure.

The latest case in point: a report last week noted how, with around $7,000 worth of gear, a marginally competent person could eavesdrop on voice over LTE (VoLTE) communications, even though these transmissions are purportedly encrypted:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner is an anonymous comment summing up how there are no good guys in the Epic/Apple showdown:
This Week In Techdirt History: August 16th - 22nd
Five Years Ago

This week in 2015, new leaks confirmed what we suspected about AT&T's cozy relationship with the NSA, which was especially concerning given the company's long history of fraudulent and abusive behavior, and the fact that the NSA seemed to think telco partners freed it from the constraints of the Fourth Amendment. The leak also revealed that the agency was misleading at best about how many cellphone records it could access.

Ten Years Ago

This week in 2010, Peter Sunde gave a fascinating presentation on the history of The Pirate Bay, while we were emphasizing that record labels can still have a role in music if they embrace the ways that role is changing, and a new comprehensive graphic aptly demonstrated just how insane the music licensing world is. The trend of established musicians and industry folk using apocalyptic language to describe the impact of the internet continued, with rants from U2's manager and John Mellencamp (who compared the internet to the atomic bomb).

Fifteen Years Ago

This week in 2005, we took a look at how the DMCA was not just a failure but a completely avoidable one, with flaws that were obvious from the start, while we were pleased to see one person finally ready to fight back against the RIAA's lawsuits. The mobile music market was on the rise with Japan blazing the trail (and trying to debunk claims that this was due to a lack of wired connections), but we wondered if the market might be killed by aggressive use of DRM. Mobile games were also on the rise, but the biggest and most important development was one we (like many people) underestimated when it happened: Google bought Android, leading to some speculation that they might be building a mobile OS, which we said "seems unlikely".
Apple Goes In Even Harder Against Prepear Over Non-Apple Logo
A couple of weeks ago, we wrote about Apple opposing the trademark for Prepear, a recipe sharing phone app, over its pear logo. The whole thing was completely absurd. The logos don't look anything alike, the color schemes and artistic styles are different, and also a pear is not an apple. I likened the whole thing to those absurd CNN commercials, which should give you some idea of just how dumb this whole thing was. So, thanks to this idiocy being exposed and the public backlash, Apple finally realized the error of its ways and backed off the opposition.

Just kidding. Apple, in fact, has decided to double down in opposing Prepear's trademarks, now going after the Canadian trademark registration for the logo as well.
Content Moderation Case Study: Nextdoor Faces Criticism From Volunteer Moderators Over Its Support Of Black Lives Matter (June 2020)
Summary: Nextdoor is the local "neighborhood-focused" social network, which allows for hyper-local communication within a neighborhood. The system works by having volunteer moderators from each neighborhood, known as "leads." For many years, Nextdoor has faced accusations of perpetuating racial stereotyping, as people use the platform to report sightings of Black men and women in their neighborhoods as somehow "suspicious."
Content Moderation Knowledge Sharing Shouldn't Be A Backdoor To Cross-Platform Censorship
Ten thousand moderators at YouTube. Fifteen thousand moderators at Facebook. Billions of users, millions of decisions a day. These are the kinds of numbers that dominate most discussions of content moderation today. But we should also be talking about 10, 5, or even 1: the numbers of moderators at sites like Automattic (Wordpress), Pinterest, Medium, and JustPasteIt—sites that host millions of user-generated posts but have far fewer resources than the social media giants.

There are a plethora of smaller services on the web that host videos, images, blogs, discussion fora, product reviews, comments sections, and private file storage. And they face many of the same difficult decisions about the user-generated content (UGC) they host, be it removing child sexual abuse material (CSAM), fighting terrorist abuse of their services, addressing hate speech and harassment, or responding to allegations of copyright infringement. While they may not see the same scale of abuse that Facebook or YouTube does, they also have vastly smaller teams. Even Twitter, often spoken of in the same breath as a "social media giant," has an order of magnitude fewer moderators, at around 1,500.

One response to this resource disparity has been to focus on knowledge and technology sharing across different sites. Smaller sites, the theory goes, can benefit from the lessons learned (and the R&D dollars spent) by the biggest companies as they've tried to tackle the practical challenges of content moderation. These challenges include both responding to illegal material and enforcing content policies that govern lawful-but-awful (and mere lawful-but-off-topic) posts.

Some of the earliest efforts at cross-platform information-sharing tackled spam and malware, such as the Mail Abuse Prevention System (MAPS), which maintains blacklists of IP addresses associated with sending spam. Employees at different companies have also informally shared information about emerging trends and threats, and the recently launched Trust & Safety Professional Association is intended to provide people working in content moderation with access to "best practices" and "knowledge sharing" across the field.

There have also been organized efforts to share specific technical approaches to blocking content across different services, namely hash-matching tools that enable an operator to compare uploaded files to a pre-existing list of content. Microsoft, for example, made its PhotoDNA tool freely available to other sites to use in detecting previously reported images of CSAM. Facebook adopted the tool in May 2011, and by 2016 it was being used by over 50 companies.

Hash-sharing also sits at the center of the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that includes knowledge-sharing and capacity-building across the industry as one of its 4 main goals. GIFCT works with Tech Against Terrorism, a public-private partnership launched by the UN Counter-Terrorism Executive Directorate, to "shar[e] best practices and tools between the GIFCT companies and small tech companies and startups." Thirteen companies (including GIFCT founding companies Facebook, Google, Microsoft, and Twitter) now participate in the hash-sharing consortium.

There are many potential upsides to sharing tools, techniques, and information about threats across different sites. Content moderation is still a relatively new field, and it requires content hosts to consider an enormous range of issues, from the unimaginably atrocious to the benignly absurd.
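To make the hash-matching idea described above concrete, here is a minimal sketch of the general workflow. It is illustrative only: production tools like PhotoDNA use proprietary perceptual hashes designed to survive resizing and re-encoding, whereas this toy version uses an exact cryptographic hash, and the blocklist entry shown is simply the SHA-256 digest of the string "test".

```python
import hashlib

# Hypothetical shared blocklist: hex digests of previously reported files.
# (Real consortium lists hold perceptual hashes, not SHA-256 digests.)
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test")
}

def file_digest(data: bytes) -> str:
    """Return a hex digest identifying the uploaded file's exact bytes."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Check an upload against the shared blocklist before hosting it."""
    return file_digest(upload) in BLOCKED_HASHES

print(should_block(b"test"))   # True: the digest is on the list
print(should_block(b"other"))  # False: unknown content passes through
```

The exact-match version also shows why perceptual hashing matters: changing a single byte of a file produces a completely different SHA-256 digest and evades the list, while a perceptual hash changes little under small edits. And it makes the governance concern discussed next tangible: whoever curates the shared list effectively decides what every participating site blocks.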
Smaller sites face resource constraints in the number of staff they can devote to moderation, and thus in the range of language fluency, subject matter expertise, and cultural backgrounds that they can apply to the task. They may not have access to — or the resources to develop — technology that can facilitate moderation.

When people who work in moderation share their best practices, and especially their failures, it can help small moderation teams avoid pitfalls and prevent abuse on their sites. And cross-site information-sharing is likely essential to combating cross-site abuse. As scholar evelyn douek discusses (with a strong note of caution) in her Content Cartels paper, there's currently a focus among major services on sharing information about "coordinated inauthentic behavior" and election interference.

There are also potential downsides to sites coordinating their approaches to content moderation. If sites are sharing their practices for defining prohibited content, it risks creating a de facto standard of acceptable speech across the Internet. This undermines site operators' ability to set the specific content standards that best enable their communities to thrive — one of the key ways that the Internet can support people's freedom of expression. And company-to-company technology transfer can give smaller players a leg up, but if that technology comes with a specific definition of "acceptable speech" baked in, it can end up homogenizing the speech available online.

Cross-site knowledge-sharing could also suppress the diversity of approaches to content moderation, especially if knowledge-sharing is viewed as a one-way street, from giant companies to small ones. Smaller services can and do experiment with different ways of grappling with UGC that don't necessarily rely on a centralized content moderation team, such as Reddit's moderation powers for subreddits, Wikipedia's extensive community-run moderation system, or Periscope's use of "juries" of users to help moderate comments on live video streams. And differences in the business model and core functionality of a site can significantly affect the kind of moderation that actually works for it.

There's also the risk that policymakers will take nascent "industry best practices" and convert them into new legal mandates. That risk is especially high in the current legislative environment, as policymakers on both sides of the Atlantic are actively debating all sorts of revisions and additions to intermediary liability frameworks.

Early versions of the EU's Terrorist Content Regulation, for example, would have required intermediaries to adopt "proactive measures" to detect and remove terrorist propaganda, and pointed to the GIFCT's hash database as an example of what that could look like (CDT recently joined a coalition of 16 human rights organizations in highlighting a number of concerns about the structure of GIFCT and the opacity of the hash database). And the EARN IT Act in the US is aimed at effectively requiring intermediaries to use tools like PhotoDNA—and not to implement end-to-end encryption.

Potential policymaker overreach is not a reason for content moderators to stop talking to and learning from each other. But it does mean that knowledge-sharing initiatives, especially formalized ones like the GIFCT, need to be attuned to the risks of cross-site censorship and eliminating diversity among online fora.
These initiatives should proceed with a clear articulation of what they are able to accomplish (useful exchange of problem-solving strategies, issue-spotting, and instructive failures) and also what they aren't (creating one standard for prohibited — much less illegal — speech that can be operationalized across the entire Internet).

Crucially, this information exchange needs to be a two-way street. The resource constraints faced by smaller platforms can also lead to innovative ways to tackle abuse and specific techniques that work well for specific communities and use-cases. Different approaches should be explored and examined for their merit, not viewed with suspicion as deviations from the "standard" way of moderating. Any recommendations and best practices should be flexible enough to be incorporated into different services' unique approaches to content moderation, rather than act as a forcing function to standardize towards one top-down, centralized model. As much as there is to be gained from sharing knowledge, insights, and technology across different services, there is no one-size-fits-all approach to content moderation.

Emma Llansó is the Director of CDT's Free Expression Project, which works to promote law and policy that support Internet users' free expression rights in the United States and around the world. Emma also serves on the Board of the Global Network Initiative, a multistakeholder organization that works to advance individuals' privacy and free expression rights in the ICT sector around the world. She is also a member of the multistakeholder Freedom Online Coalition Advisory Network, which provides advice to FOC member governments aimed at advancing human rights online.
Judge Recommends Copyright Troll Richard Liebowitz Be Removed From Roll Of The Court For Misconduct In Default Judgment Case
Would you believe it? Copyright troll Richard Liebowitz is in trouble yet again. And yes, we just had a different article about him yesterday, but it's tough to keep up with all of young Liebowitz's court troubles. The latest is that a judge has sanctioned Liebowitz and recommended he be removed from the roll of the court in the Northern District of New York.

But here's the amazing thing: this is all happening in a case where they're trying to get damages in a default judgment case. As we noted just last week, it's quite rare for a court to do anything other than rubber stamp a default judgment request (what usually happens when the defendant doesn't show up in court and ignores a lawsuit). Yet, last week we saw a judge deny a default judgment in a different copyright trolling case, involving Malibu Media. And here, Richard Liebowitz has managed not only to lose a case in which the court clerk had already entered a default, but to get sanctioned and possibly kicked off the rolls of the court. That's... astounding.

The judge, Lawrence Kahn, is clearly having none of Liebowitz's usual bullshit. The ruling cites many of Liebowitz's other bad cases. Ostensibly, at this point the issue is that Liebowitz took the default and wanted the court to order statutory damages against the defendant (Buckingham Brothers LLC), but instead the court just slams Liebowitz for a wide variety of issues. First, the court points out that despite the default, the original legal pleading was insufficient for statutory damages (and for attorney's fees), in part because, in typical Liebowitz fashion, he tried to hide stuff from the court. In particular, Liebowitz didn't allege the date of infringement or the date of the copyright registration. This is important, because you can't get statutory damages if the infringement began before the registration. This is an issue Liebowitz has been known to fudge in the past. And here, the failure to plead those key points dooms the request for statutory damages and attorney's fees:
Daily Deal: Naztech Ultimate Power Station
Featuring a sophisticated wireless charger, a 5-port USB charging hub, and an ultra-compact portable battery, the Naztech Ultimate Power Station is your all-in-one charging solution. Charge up to 6 power-hungry devices at the same time from a single AC wall outlet. With 50 watts of rapid charging power, the Ultimate is the perfect and practical solution for homes and offices with limited outlets and multiple devices that need high-speed charging. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Supreme Court's Failure To Protect The Right To Assemble Has Led Directly To Violence Against Protesters
It appears the Supreme Court is unwilling to address another problem it created.

The first major problem created by the Court has been discussed here quite frequently. Qualified immunity was created by the Supreme Court in 1967 as a way to excuse rash decisions by law enforcement if undertaken in "good faith." Since then, it has only gotten worse. Fifteen years later, the Supreme Court added another factor: a violation of rights must be "clearly established" as a violation before a public servant can be held accountable for violating the right. Further decisions moved courts away from determining whether or not a rights violation took place, relying instead on steadily-decreasing precedent showing the violation was "clearly established."

The Supreme Court continues to dodge qualified immunity cases that might make it rethink the leeway it has granted to abusive cops. Plenty of people have taken note of this, including federal court judges.

But that's not the only way the general public is being screwed by SCOTUS. As Kia Rahnama points out for Politico, the right to freely assemble -- long considered an integral part of the First Amendment -- continues to be narrowed by the nation's top court. As violence against demonstrators increases in response to ongoing protests over abusive policing (enabled by qualified immunity's mission creep), those participating in the violence feel pretty secure in the fact that they'll never have to answer for the rights violations.
Bizarre Court Ruling Helps Cable Broadband Monopoly Charter Tap Dance Around Merger Conditions
Eager to impose higher rates on its mostly captive customers, Charter Communications (Spectrum) has been lobbying the FCC to kill merger conditions affixed to its 2015 merger with Time Warner Cable. The conditions, among other things, prohibited Charter from imposing nonsensical broadband caps and overage fees, or engaging in the kind of interconnection shenanigans you might recall caused Verizon customers' Netflix streams to slow to a crawl back in 2015. The conditions also involved some fairly modest broadband expansion requirements Charter initially tried to lie its way out of.

But with the GOP having neutered FCC authority over broadband providers (including the axing of net neutrality rules), Charter obviously is eager to take full advantage. So on one hand, it has been engaged in some fairly dodgy lobbying of the FCC to scuttle the conditions, which already had a seven-year sunset provision (they expire in two years anyway). On the other hand, the telecom-backed Competitive Enterprise Institute (CEI) took a different tack and filed suit against the conditions, somehow convincing four Charter customers to sue under the argument that the conditions (not the merger) raised consumer prices.

This being America, the telecom-backed think tank last week scored a favorable ruling from the US Court of Appeals for the District of Columbia Circuit. In its ruling (pdf), the court completely bought into the CEI's arguments that conditions crafted by consumer advocates, aimed at protecting consumers, somehow hurt consumers. As such, the court vacated two of the conditions -- one that had effectively forced Charter to offer lower-cost broadband plans, and one prohibiting the ISP from engaging in dodgy behavior out at the edge of the network (interconnection).

In its ruling, the court proclaims that the restrictions on interconnection drove up consumer prices:
Epic Games Sued By Company That Manages 'Coral Castle' In Florida Over New Fortnite Map
Of all the trademark insanity we cover here, there are still little nuggets of niche gold when it comes to the truly insane trademark disputes. There are plenty of these categories, but one of my personal favorites is when real life brands get their knickers twisted over totally unrelated items in fiction. If you cannot conceptualize what I'm talking about, see the lawsuit brought by a software company that creates something called Clean Slate against Warner Bros. because... The Dark Knight Rises had a piece of software in it that was referred to as "clean slate."

Which brings us, as most stories about insanity do, to Florida. Epic Games recently released a new map for its hit game Fortnite, entitled Coral Castle. The map includes motifs of water and structures made from coral. CCI, based out of Florida, holds trademarks for a real life landmark called Coral Castle. There too, you can catch real life motifs of water mixed with structures made to look like coral. It is not, however, a video game setting. It is real life. And, yet, CCI has decided to sue Epic Games over the name of its map.
New Jersey Supreme Court Says 'Foregone Conclusion' Trumps Fifth Amendment In Crooked Cop Case
The New Jersey Supreme Court has made the Fifth Amendment discussion surrounding compelled production of passwords/passcodes more interesting. And by interesting, I mean frustrating. (h/t Orin Kerr)

The issue is far from settled and the nation's top court hasn't felt like settling it yet. Precedent continues to accumulate, but it's contradictory and tends to hinge on each court's interpretation of the "foregone conclusion" concept.

If the only conclusion that needs to be reached by investigators is that the suspect owns the device and knows the password, it often results in a ruling that says compelled decryption doesn't violate the Fifth Amendment, even if it forces the suspect to produce evidence that could be used against them. Less charitable readings of this concept recognize that "admitting" to ownership of a device is admitting to ownership of everything in it, and view the demand for passcodes as violating the Fifth Amendment's protection against self-incrimination. The stronger the link between the suspect and the phone, the less Fifth Amendment there is to go around.

This decision [PDF] deals with a crooked cop. Sheriff's officer Robert Andrews apparently tipped off a drug dealer who was being investigated. The dealer then tipped off law enforcement about Andrews' assistance with avoiding police surveillance -- something that involved Officer Andrews telling the drug suspect to ditch phones he knew were being tapped and giving him information about vehicles being used by undercover officers.

Two iPhones were seized from Andrews, who refused to unlock them for investigators. Investigators claimed they had no other option but to force Andrews to unlock them. According to the decision, there was no workaround available at the time (at some point in late 2015 or early 2016).
Has The Pandemic Shown That The Techlash Was Nonsense?
There's an excellent piece over at RealClearPolitics arguing that COVID-19 killed the techlash. It makes a fairly compelling argument, coming at it from multiple angles. First, there's the question of how real the "techlash" ever was. It has long appeared to be more of a media- and politician-driven narrative than real anger coming from the people who make use of technology every day:
California Fusion Center Tracked Anti-Police Protests, Sent Info To 14,000 Police Officers
As anti-police brutality protests have spread across the country in the wake of yet another killing of an unarmed Black man by a white police officer, so has surveillance. Another set of documents found in the "Blue Leaks" stash shows a California-based "fusion center" spreading information about First Amendment-protected activities to hundreds of local law enforcement agencies. Pulling in information from all over -- including apparent keyword searches of social media accounts -- the Northern California Regional Intelligence Center (NCRIC) distributed info on protests and protesters to officers across the state.
Tone Deaf Facebook To Cripple VR Headsets Unless You Link It To Your Facebook Account
Back in 2014 when Facebook bought Oculus, there were the usual pre-merger promises that nothing would really change, and that Facebook wouldn't erode everything folks liked about the independent, Kickstarted product. Oculus founder Palmer Luckey, who has since moved on to selling border surveillance tech to the Trump administration, made oodles of promises to that effect before taking his money and running toward the sunset. Among those promises: users would never be forced to use a Facebook login just to use their VR headsets and games, and the company wouldn't track user behavior for advertising.

Like every major merger, those promises didn't mean much. This week, Facebook and Oculus announced that users will soon be forced to use a Facebook account if they want to be able to keep using Oculus hardware, so the company can track its users for advertising purposes. The official Oculus announcement tries to pretend that this is some innate gift to the end user, instead of just an obvious way for Facebook to expand its behavioral advertising empire:
Daily Deal: The Build a Strategy Game Development Bundle
The Build a Strategy Game Development Bundle has 10 courses to help you learn how to build your own game with the Unity Real-Time Development Platform. You'll learn strategy game fundamentals and mechanics, camera control, resource gathering, unit spawning mechanics, 3D isometric city-building, and more. Other courses cover Godot Game Engine, Photon, Azure, and more. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Copyright Troll Richard Liebowitz Reveals His Retainer Agreement: He Gets Most Of The Money
We noted last week that Judge Lewis Kaplan (like so many other judges who have copyright troll Richard Liebowitz in their courts) was fed up with Liebowitz's unwillingness to follow fairly straightforward orders, including that he produce the retainer agreement with his clients and present evidence that the client knew of and approved the specific lawsuits at hand. Judge Kaplan did this in at least two (and possibly more?) cases. In the case we mentioned last week -- the Chosen Figure LLC v. Smiley Miley case -- despite already receiving a benchslap from the judge for not providing the retainer agreement, Liebowitz has filed some random emails between his own staff and... his client's girlfriend? That does include an email from his client saying he doesn't check email much, so to have his girlfriend on email chains instead, though it's not clear that this will be enough to satisfy the judge's request for authorization for "this case specifically," but we'll see.

However, much more interesting is that, for what appears to be the first time, Liebowitz has revealed his retainer agreement with clients. And, man, do his clients get a raw deal. Liebowitz gets 50% of any proceeds left after costs, which come out of any settlement received. In other words, more than half (potentially a lot more than half) of the money from any settlement goes to Liebowitz. (As a purely hypothetical illustration: on a $10,000 settlement with $2,000 in costs, the client would be left with just $4,000 -- 40% of the gross -- once the costs and the 50% fee come off the top.) That means Richard Liebowitz has a larger financial stake in the outcome of these cases than his own clients do.

Also, in typical bad lawyering fashion, Liebowitz tells his clients there's a possibility that they might recover some fees from the other side's lawyers, but leaves out that his own clients may be on the hook for the other side's legal fees. And this is not theoretical: Liebowitz's track record includes costing his clients money in legal fees. Yet his retainer agreement seems to suggest the only reason his clients should think about legal fees is how they might get them from the other side:
Tim Wu Joins The Ban TikTok Parade, Doesn't Clarify What The Ban Actually Accomplishes
I've mentioned a few times that I don't think the TikTok ban is coherent policy.

One, the majority of the politicians pearl clutching over the teen dancing app have been utterly absent from other privacy and security debates (say, U.S. network security flaws or the abuse of location data). In fact, many of them have actively undermined efforts to shore up U.S. privacy and security, whether we're talking about the outright refusal to fund election security improvements, or repeated opposition to even the most basic of privacy laws for the modern era. Let's be clear: a huge swath of these folks are simply engaged in performative, xenophobic politics and couldn't care less about U.S. privacy and security.

Two, banning TikTok doesn't actually accomplish much of anything. It doesn't really thwart Chinese intelligence, which could just as easily buy this data from an absolute ocean of barely regulated international adtech middlemen, obtain it from any one of a million hacked datasets available on the dark net, or steal it from the, you know, millions upon millions of "smart" and IOT devices we attach to our home and business networks with no security and reckless abandon. In the full context of the U.S., where privacy and security standards are hot garbage, the idea that banning a Chinese teen dancing app does all that much is just silly.

That said, I remain surprised by the big names in tech policy who continue to believe the Trump administration's sloppy and bizarre TikTok ban accomplishes much of anything. Case in point: Columbia law professor Tim Wu, whose pioneering work on net neutrality and open platforms I greatly admire, penned a new piece for the New York Times arguing that a "ban on Tiktok is overdue." Effectively, Wu argues that because China routinely bans U.S. services via its great firewall, turnabout is fair play:
UK Says South Wales Police's Facial Recognition Program Is Unlawful
The South Wales Police has been deploying a pretty awful facial recognition program for a few years now. Back in 2018, documents obtained by Wired showed its test deployment at multiple events attended by thousands was mostly a mistake. The system did ring up 173 hits, but it also delivered nearly 2,300 false positives. In other words, it was wrong about 92% of the time.

Civil liberties activist Ed Bridges sued the South Wales Police after his image was captured by its camera system, which is capable of capturing up to 50 faces per second. Bridges lost at the lower level. His case was rejected by the UK High Court, which ruled capturing 50 faces per second was "necessary and proportionate" to achieve its law enforcement ends.

Fortunately, Bridges has prevailed at the next level. The Court of Appeal has ruled in favor of Bridges and against the SWP's mini-panopticon.

The decision [PDF] opens with a discussion of the automated facial recognition technology (AFR) used by the SWP, which runs on software developed by NEC called "NeoFace Watch." Watchlists are compiled, and faces that pass SWP's many cameras are captured and compared to this list. On the list are criminal suspects, those wanted on warrants (or who have escaped from custody), missing persons, persons of interest for "intelligence purposes," vulnerable persons, and whatever this thing is: "individuals whose presence at a particular event causes particular concern."

Here's how it works:
Paulding County School District Now Trying To Duck FOIA Requests
You will recall the brief clusterfuck that occurred earlier this month in Georgia's Paulding County. The school district there, which opened back up for in-person classes while making mask-wearing completely optional, also decided to suspend two students who took and posted pictures of crowded hallways filled with maskless students. While the district dressed these suspensions up as consequences for using a smartphone on school grounds, the school's administration gave the game away by informing all students that they would be disciplined for any criticism posted on social media in general. That, as we pointed out, is a blatant First Amendment violation.

Once the blow-back really got going, the school district rescinded the suspensions. In the days following, students and teachers at the school began falling ill and testing positive for COVID-19. It got bad enough that the school decided to shut down. With so much media attention, it was a matter of who would be first to get FOIA requests in for documents on what led to the suspensions.

Vice put a request in. However, because this district can't seem to stop punching itself in the gut, it is attempting to duck the FOIA requests entirely. Not through redactions. It just isn't going to give up any internal documents at all, even as it acknowledges it has documents in hand.
Content Moderation Case Study: Amazon's Attempt To Remove 'Sock Puppet' Reviews Results In The Deletion Of Legitimate Reviews (November 2012)
Summary: As is the case on any site where consumer products are sold, there's always the chance review scores will be artificially inflated by bogus reviews using fake accounts, often described as "sock puppets."

Legitimate reviews are organic, prompted by a buyer's experience with a product. "Sock puppets," on the other hand, are bogus accounts created for the purpose of inflating the number of positive (or -- in the case of a competitor -- negative) reviews for a seller's product. Often, they're created by the seller themself. Sometimes these faux reviews are purchased from third parties. "Sock puppet" activity isn't limited to product reviews. The same behavior has been detected in comment threads and on social media platforms.

In 2012 -- apparently in response to "sock puppet" activity, some of it linked to a prominent author -- Amazon engaged in a mass deletion of suspected bogus activity. Unfortunately, this moderation effort also removed hundreds of legitimate book reviews written by authors and book readers.

In response to authors' complaints that their legitimate reviews had been removed (along with apparently legitimate reviews of their own books), Amazon pointed to its review guidelines, claiming they forbade authors from reviewing other authors' books.
Not A Good Look: Facebook's Public Policy Director In India Files A Criminal Complaint Against A Journalist For A Social Media Post
In today's insanity, Facebook's top lobbyist in India, Ankhi Das, has filed a criminal complaint against journalist Awesh Tiwari. Tiwari put up a post on Facebook over the weekend criticizing Das, citing a giant Wall Street Journal article focused on how Facebook's enforcement of its rules against hate speech has run into challenges regarding India's ruling BJP party. Basically, the article said that Facebook was not enforcing its hate speech rules when BJP leaders violated them (not unlike similar stories about Facebook relaxing the rules for Trump supporters in the US).

Das is named in the original article, which claims that she pushed for Facebook not to enforce its rules against BJP leaders because doing so could hurt Facebook's overall interests in India. Tiwari called out Das' role in his Facebook post, and it appears Das took offense to that:
Ricky Byrdsong And The Cost Of Speech
On July 2nd, 1999, Ricky Byrdsong was out for a jog near his home in Skokie, Illinois, with two of his young children, Sabrina and Ricky Jr. The family outing would end in tragedy. His children watched helplessly as their father was gunned down. He was the victim of a Neo-Nazi on a murderous rampage targeting Jewish, Asian and Black communities. Ten other people were left wounded. Won-Joon Yoon, a 26-year-old graduate student at Indiana University, would also be killed.

When you distill someone's life down to their final minutes, it does a disservice to their humanity and how they lived. Though I didn't know Won-Joon Yoon, I met Coach Byrdsong — one of the few Black men's head basketball coaches in the NCAA — through my father, who is also part of this small fraternity. As head coaches in Illinois in the late 90s, their names were inevitably linked to each other. They occasionally played one another. Beyond his passion for basketball, Coach Byrdsong's love of God and his commitment to community and family shone bright.

Coach Byrdsong was the first Black head basketball coach at Northwestern University in Evanston, Illinois. His appointment was a big deal: Northwestern is a private university in an NCAA "power conference," with a Black undergraduate population of less than 6%. I visited Northwestern's arena when my dad was an assistant coach at the University of Illinois. At 11 years old, I remember being surrounded by belligerent college students making ape noises. When I hear jangling keys at sporting events, I'm transported back to the visceral feeling of being surrounded by thousands of (white) college students, alumni and locals, shaking their car keys while smugly chanting "that's alright, that's ok, you will work for me one day."

Their ditty, directed towards a basketball court overwhelmingly composed of Black, working-class student athletes, seemed to say: you don't belong here, and you never will — a sentiment that still saturates the campus. This is the world that Neo-Nazi Benjamin Smith came from. Smith was raised in Wilmette, Illinois, one of the richest and whitest suburbs in the country, less than five miles from where he killed Coach Byrdsong.

The digital boundaries that exist online, much like the neighborhood ones, carve up communities, often by ethnicity, class, and subculture. In these nooks a shared story and ideology is formed that reinforces an "us against the world" mentality. It's debatable whether that's intrinsically bad — but in this filter bubble, it is hard to see our own reflection accurately, let alone others. This leaves both our digital and physical bodies vulnerable.

Matthew Hale, Smith's mentor and founder of the World Church of the Creator, was an early adopter of Internet technology. He was part of a 90s subculture of white nationalists that flocked to the web, stitching a digital hood anonymizing those who walk and work amongst us. Hale's organization linked to white power music and computer games, and developed a website, "Creativity for Kids," with downloadable white separatist coloring books. They used closed chat rooms and internet forums to rile up thirst for a race war. They understood the importance of e-commerce as a vehicle for trafficking hate, and they experimented with email bombing and infiltrating chat rooms.

Beyond being tech savvy, Hale was also a lawyer, who in 1999 was being defended by the ACLU. The Illinois Bar Association had denied Hale's law license based on his incitement of racial hatred and violence against ethnic and religious groups.
The ACLU has had a long run of defending white nationalists, including Charlottesville "Unite the Right" organizer Jason Kessler. In 1978, they defended the organizers of a Nazi march in Skokie, the same community where Coach Byrdsong was murdered. At the time, 1 in every 6 Jewish residents there was either a Holocaust survivor or directly related to one.
Hale's license denial rested on three main points:
Court Says First Amendment Protects Ex-Wife's Right To Publicly Discuss Her Ex-Husband On Her Personal Blog
What appears to be a very combative divorce between two very combative people in Marin County, California has reached the point of criminal charges. Not justifiable criminal charges, but criminal charges all the same.
Melissanne Velyvis has been very publicly documenting everything about her divorce proceedings and her ex-husband's (Dr. John Velyvis) alleged domestic abuse. In an apparent attempt to silence her from discussing her personal life (which necessarily involved discussing his personal life), John approached a judge and secured a restraining order forbidding his ex-wife from publishing "disparaging comments." Here's Judge Beverly Wood making her feelings clear about Melissanne's divorce-focused blogging:
Daily Deal: The 2020 Ultimate Web Developer And Design Bootcamp Bundle
The 2020 Ultimate Web Developer and Design Bootcamp Bundle has 11 courses designed to help you kick-start your career as a web developer and designer. You'll learn about JavaScript, HTML, CSS3, APIs, and more. By the end of the courses, you will be able to confidently design, code, validate, and launch websites online. The bundle is on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
If Oracle Buys TikTok, Would It Suddenly Change Its Tune On Section 230?
Late Monday, it came out that Oracle is one of the potential American acquirers of TikTok from the Chinese company ByteDance, after President Trump ordered ByteDance to sell TikTok out of spite. Microsoft has been the most talked-about potential purchaser, though there were also rumors of a potential bid by Twitter.
The Oracle rumor strikes many as particularly bizarre, for good reason. Oracle is pretty much an enterprise-only focused company. However, if it has one strength, it is in buying up companies and integrating them into its cashflow generation machine. I'm still not sure I see the synergies here, but perhaps Larry Ellison is finally realizing that Oracle is the opposite of cool in Silicon Valley.
However, the thing that struck me most about all of this is that Oracle is one of the main companies behind the plot to undermine Section 230. Oracle has been a funder of a weird group of anti-Section 230 activists, and has been involved in multiple anti-Section 230 crusades. And, as we've pointed out in the past, it seems pretty clear why: Oracle has always been incredibly (to a petty level) jealous of Google and Facebook's success -- and seems to see Section 230 reform as a weapon it can use to attack those companies without harming itself, since Oracle doesn't really host much user-generated content.
Of course, that would change if Oracle actually ended up buying TikTok. Suddenly, it would have a massive platform full of user-generated content, and it would be fascinating to watch whether Oracle changes its tune on 230 (or calls off its attack dogs who keep misrepresenting 230). Of course, the general rumor is that Oracle is really just doing this to drive up the price for Microsoft (which Oracle is losing to in the fight for "cloud" supremacy), but President Trump has given his blessing for an Oracle/TikTok deal, which isn't too surprising, given that Oracle's top execs have been sucking up to Trump and praising him since he was elected.
Indiana Cities File Doomed Lawsuit Against Disney, Netflix, Demand 5% of Gross Revenues
A coalition of cities has filed a desperate, and likely doomed, lawsuit (pdf) against streaming providers like Netflix and Disney. In it, the cities proclaim that they are somehow owed 5 percent of gross annual revenue. Why? Apparently they believe that because these streaming services are delivered over telecom networks that utilize the public right of way, they're owed a cut:
Google Warns Australians That The Government's Plan To Tax Google To Give Money To Newspapers Will Harm Search & YouTube
Earlier this year we noted that the Australian government was setting up a you're-too-successful tax on Google and Facebook which it planned to hand over to media organizations. We should perhaps call it the "Welfare for Rupert Murdoch" tax, because that's what it is. Murdoch, of course, owns a huge share of media operations in Australia and has been demanding handouts from Google for years (showing that his claimed belief in the free market was always hogwash).
In response, Google has now released an open letter to Australians pointing out that this plan to tax Google to funnel money to Murdoch will have massive unintended consequences. In particular, Google argues, under the law, Google would be required to give an unfair advantage to big media companies:
Costco Gets Trademark Judgment Overturned, Defeating Tiffany And Co.
Readers here will be sick of this, but we're going to have to keep beating it into the general populace's head: trademark law is about preventing confusion as to the source of a good or service. The idea is to keep buyers from being fooled into buying stuff from one company or person while thinking they were buying it from another. That's basically it.
It's a lesson still to be learned, and one which a federal judge has imparted to famed jewelry maker Tiffany & Co. The backstory here is that back in 2013, on Valentine's Day of all days, Tiffany & Co. sued Costco over the latter's advertisement of "Tiffany" style rings.
DC Police Union Sues To Block The Release Of Names Of Officers Involved In Shootings
Washington DC responded to widespread protests following the killing of George Floyd with a set of police reforms that tried to address some systemic problems in the district's police department, starting with its lack of transparency and accountability.
The reform bill -- passed two weeks after George Floyd's killing -- placed new limits on deadly force deployment, banned the Metropolitan PD from acquiring military equipment through the Defense Department's 1033 program, and mandated release of body-camera footage within 72 hours of any shooting by police officers. The names of the officers involved are covered by the same mandate, ensuring it won't take a lawsuit to get the PD to disclose info about officers deploying deadly force.
But there's a lawsuit already in the mix -- one that hopes to keep the public separated from camera footage and officers' names. Unsurprisingly, it's been filed by a longtime opponent of police accountability.
Techdirt Podcast Episode 252: The Key To Encryption
This week we've got another cross-post, with the latest episode of The Neoliberal Podcast from the Progressive Policy Institute. Host Jeremiah Johnson invited Mike, along with PPI's Alec Stapp, to discuss everything about encryption: the concept itself, the attempts at laws and regulations, and more.
Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Shitbirds Of A Feather Flock Together: ICE Signs $274,000 Contract With Clearview
ICE continues to not care what anyone thinks of it. Its tactics over the past few years have turned it into one of the federal government's most infamous monsters, thanks to its separation of families, caging of children, unfettered surveillance of undocumented immigrants, its fake university sting created to punish students trying to remain in the country legally, its sudden rescinding of COVID-related distance learning guidelines solely for the purpose of punishing students trying to remain in the country legally… well, you get the picture.
Perhaps it's fitting ICE is buying tech from a company that appears unconcerned that most of the public hates it. Clearview -- the facial recognition software that matches uploaded facial images with billions of images scraped from the open web -- is one of the latest additions to ICE's surveillance tech arsenal.
Why Keep Section 230? Because People Need To Be Able To Complain About The Police
The storm has passed and the charges have been dropped. But the fact that someone who tweeted about police behavior, and, worse, people who retweeted that tweet, were ever charged over it is an outrage, and to make sure that it never happens again, we need to talk about it. Because it stands as a cautionary tale about why First Amendment protections are so important – and, as we'll explain here, why Section 230 is as well.
To recap, protester Kevin Alfaro became upset by a police officer's behavior at a recent Black Lives Matter protest in Nutley, NJ. The officer had obscured his identifying information, so Alfaro tweeted a photo asking if anyone could identify the officer "to hold him accountable."
Several people, including Georgana Sziszak, retweeted that tweet. The next thing they knew, Alfaro, Sziszak, and several other retweeters found themselves on the receiving end of a felony summons pressing charges of "cyber harassment" of the police officer.
As we've already pointed out, the charges were as pointless as they were spurious, because the charging documents themselves unmasked the officer's identity, the very information the charges claimed it was somehow a crime to ask for. Over at the Volokh Conspiracy, Eugene Volokh took further issue with the prosecution, and in particular its application of the New Jersey cyber harassment statute against the tweet. Particularly in light of an earlier case, State v. Carroll (N.J. Super. Ct. App. Div. 2018), he took a dim view:
Daily Deal: Calmind Mental Fitness App
Calmind Mental Fitness App helps you improve your quality of life by focusing on what's important and getting rid of distractions. It provides soothing and sensory stories to reduce stress and help you fall asleep faster, as well as ASMR triggers and calming tones. Calmind is on sale for $70.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Fortnite App Store Battle: A Real Antitrust Conundrum, Or Just A Carefully Planned Out Contract Negotiation?
Last week there was quite a lot of attention paid to Apple kicking Fortnite out of the iOS app store for violating the rules by avoiding Apple's in-app payment setup (out of which Apple takes 30%). Epic, which had been hinting at this for a while, introduced a direct payment offering that effectively avoided the 30% charge that Apple (and Google) require from developers.
There have been arguments over the last decade or so since Apple implemented its policy requiring subscription revenue to go through Apple's system -- but this is probably the biggest fight yet. Epic was clearly expecting Apple to do this, because almost immediately after Fortnite was removed from the app store, Epic first released a Nineteen Eighty-Fortnite parody ad, mocking Apple's infamous 1984 Super Bowl ad.
Almost immediately, Epic also sued Apple over the removal in a legal complaint that was clearly prepared well in advance. Represented by some of the top antitrust lawyers in the country, and with a complaint weighing in at 65 pages, Epic had spent some time preparing for this fight. To drive this point home, the lawsuit itself references 1984 in the opening paragraph, tying into Epic's marketing campaign:
Verizon Forced To Back Off Charging Extra For 5G
While fifth-generation (5G) wireless will result in faster, more resilient networks (once it's finally deployed at scale years from now), the technology has been over-hyped to an almost comical degree. Yes, faster, lower-latency networks are a good thing, but 5G is not as paradigm-rattling as most wireless carriers and hardware vendors have led many in the press to believe. 5G is more of a useful evolution than a revolution, but it has become the equivalent of magic pixie dust in tech policy circles, wherein if you simply say "it will lead to faster deployment of 5G!" you'll immediately add gravitas to your otherwise underwhelming K Street policy pitch.
Here on planet Earth, most consumers couldn't care less about 5G. In most surveys, U.S. consumers -- who pay some of the highest prices in the world for mobile data -- say their top priority is usually lower prices. That's increasingly true during a pandemic and economic crisis, where every dollar counts.
Enter Verizon, which, instead of reading the market, has been repeatedly trying to charge $10 extra for 5G despite consumers not seeing the value. Verizon executives had fooled themselves into thinking a "premium" upgrade warranted a premium price tag. But consumers quickly realized the extra money simply wasn't worth it. For one, Verizon's 5G network is barely available (one study found a full 5G signal was available about 0.4% of the time). First-generation 5G devices are also expensive and tend to suffer from crappier battery life. All for admittedly faster speeds most users don't think they need yet.
With consumers not really that interested, and no other wireless carriers attempting to charge extra anyway, Verizon has been forced to finally back away from the $10 monthly surcharge after flirting with it since last year:
Judge Forbids Facebook Users Being Sued By A Cop From Publishing The Cop's Name On Social Media
Eugene Volokh reports that an Ohio court has hit a number of defendants in a libel lawsuit with an unconstitutional order forbidding them from posting the name of the man suing them. It's no ordinary man, though. It's a police officer whom several attendees of a Cincinnati city council meeting have identified and accused of using a racist hand sign while interacting with them.
England's Exam Fiasco Shows How Not To Apply Algorithms To Complex Problems With Massive Social Impact
The disruption caused by COVID-19 has touched most aspects of daily life. Education is obviously no exception, as the heated debates about whether students should return to school demonstrate. But another tricky issue is how school exams should be conducted. Back in May, Techdirt wrote about one approach: online testing, which brings its own challenges. Where online testing is not an option, other ways of evaluating students at key points in their educational career need to be found. In the UK, the key test is the GCE Advanced level, or A-level for short, taken in the year students turn 18. Its grades are crucially important because they form the basis on which most university places are awarded in the UK.
Since it was not possible to hold the exams as usual, and online testing was not an option either, the body responsible for running exams in the UK, Ofqual, turned to technology. It came up with an algorithm to predict each student's grades. The results of this high-tech approach have just been announced in England (other parts of the UK run their exams independently). It has not gone well. Large numbers of students have had their expected grades, as predicted by their teachers, downgraded, sometimes substantially. An analysis from one of the main UK educational associations has found that the downgrading is systematic: "the grades awarded to students this year were lower in all 41 subjects than they were for the average of the previous three years."
Even worse, the downgrading turns out to have affected students in poorly performing schools, typically in socially deprived areas, the most, while schools that have historically done well, often in affluent areas or privately funded, saw their students' grades improve over teachers' predictions. In other words, the algorithm perpetuates inequality, making it harder for brilliant students in poor schools or from deprived backgrounds to go to top universities. A detailed mathematical analysis by Tom SF Haines explains how this fiasco came about:
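A rough Python sketch makes the reported mechanism concrete. Assume, as widely reported, that each student's grade is allocated from the school's historical grade distribution according to the teacher's rank ordering of the cohort, rather than from the student's own predicted grade. Everything below is hypothetical -- the function, the school history, and the student names are invented for illustration, and this is a simplified stand-in for the reported approach, not Ofqual's actual model.

    # Minimal sketch: allocate this year's grades from a school's
    # historical grade distribution, ordered by teacher ranking.
    # Hypothetical illustration only -- not Ofqual's actual model.
    def allocate_grades(ranked_students, historical_grades):
        """ranked_students: names, best first (teacher rank order).
        historical_grades: past grades, best first, e.g. ['A', 'B', ...].
        Each student inherits the historical grade slot matching their
        rank, scaled to cohort size; individual achievement never enters
        into it."""
        n = len(ranked_students)
        results = {}
        for i, student in enumerate(ranked_students):
            # Map the student's rank onto the historical distribution.
            slot = int(i * len(historical_grades) / n)
            results[student] = historical_grades[slot]
        return results

    # A historically weak school: no A grades in recent years.
    weak_school_history = ["B", "C", "C", "D", "D", "E"]
    cohort = ["Asha", "Ben", "Chi", "Dan", "Eve", "Fio"]
    print(allocate_grades(cohort, weak_school_history))
    # {'Asha': 'B', 'Ben': 'C', 'Chi': 'C', 'Dan': 'D', 'Eve': 'D', 'Fio': 'E'}

Under a scheme like this, the top-ranked student at the historically weak school is capped at a B no matter how strong her teacher-predicted grade was, while a school whose history is full of As keeps handing them out -- exactly the systematic, inequality-preserving downgrading described above.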
Confused Critic Of Section 230 Now In Charge Of NTIA
Multiple experts on Section 230 have pointed out that the NTIA's bizarre petition to the FCC to reinterpret Section 230 of the Communications Decency Act is complete nonsense. Professor Eric Goldman's analysis is quite thorough in ripping the petition to shreds.
Google Responds To Hong Kong's New National Security Law By Rejecting Its Government's Requests For Data
Google's on-again, off-again relationship with China is off again. A decade ago, Google threatened to pull out of China because the government demanded a censored search engine. Fast forward to 2018, and it was Google offering to build a censored search engine for the China market. A few months later -- following heavy internal and external criticism -- Google abandoned the project.
China is now imposing its will on Hong Kong in violation of the agreement it made when the UK returned control of the region to the Chinese government. Its latest effort to stifle long-running pro-democracy demonstrations took the form of a "national security" law which was ratified by the far-too-obsequious Hong Kong government. The law equates advocating for a more independent Hong Kong with sedition and terrorism, allowing authorities to punish demonstrators and dissidents with life sentences for, apparently, fighting back against a government that agreed it wouldn't impose its will on Hong Kong and its residents.
For years, Google has refused to honor data requests from the Chinese government. Following this latest attack on Hong Kong autonomy, it appears Google now feels the region is indistinguishable from China.