by Karl Bode on (#4KR57)
However bad Facebook's privacy issues are, the telecom sector's have long been as bad, if not worse. That's been most recently exemplified by the industry's headaches surrounding the collection and sale of sensitive customer location data. Scandal after scandal has revealed that, for the better part of the last decade, cellular phone companies have been collecting and selling your location data to a long line of often dubious companies and organizations, which then did the bare minimum to secure this data. Everyone from law enforcement to stalkers has been allowed to abuse this data, and your privacy. The latest case in point: a new investigation by Think Progress found that Steve Bannon also managed to get hold of this data and use it for political targeting purposes. According to the report, Bannon and a group dubbed CatholicVote used the cell-phone location data of people who had visited Roman Catholic churches in Dubuque, Iowa, in 2018 to target them with get-out-the-vote ads:
Techdirt
Link: https://www.techdirt.com/
Feed: https://www.techdirt.com/techdirt_rss.xml
Updated: 2025-08-21 14:46
by Tim Cushing on (#4KQSA)
The UK government already has the cameras -- thousands of them. So, why not add facial recognition to the mix? A number of UK law enforcement agencies already have. UK police forces compiled a legally-questionable database of 18 million face photos and went to work. Nobody did well. Failure after failure followed the rollout, with the London Metro police repeatedly claiming the "worst of the worst" title for itself. Despite this resounding lack of success, the Home Office feels the UK needs more failure, not less.
by Timothy Geigner on (#4KQ5K)
There are many, many ways for big companies' attempts to use social media or smart apps to go horribly wrong. Usually these incidents involve hacked accounts repurposed for lulz, rogue employees having a bit too much to drink on beer Friday and then going off, or companies doing something stupid and then blaming one of the first two for it. And then there's the American Hockey League's mobile app, which for some reason alerted users that Stewart Zimmel apparently both owes someone $6k and threatens to punch people in the throat.
by Mike Masnick on (#4KPWR)
Last month, we wrote a fairly long post about some interesting elements (demonstrating the flimsiness of "copyright" existing for many photographs) in a copyright lawsuit filed against model Gigi Hadid for reposting a cropped paparazzi photo on her Instagram. As we noted in that post, despite all of the interesting arguments made regarding copyright and photos, it seemed clear that this case was going to get tossed on purely procedural grounds -- namely, that the lawsuit, filed by a photo agency called Xclusive-Lee (which may or may not even hold the rights to the photo), was filed prior to the photo receiving a registration from the Copyright Office. Back in March, the Supreme Court said that copyright law is quite clear that you need to wait until the registration is obtained. Here, that was not the case. The suit was filed before the registration was granted, and thus it's no surprise (as first pointed out by the Hollywood Reporter) that the case was thrown out for that reason alone.
by Tim Cushing on (#4KPPE)
For the third time in two months, a US city has banned the use of facial recognition tech by local government agencies. San Francisco started this movement (oh god please let it be a movement) back in May, booting the tech out of the city before local agencies had even gotten a chance to fool around with it. Earlier this month, Somerville, Massachusetts took home the silver in the anti-surveillance-state games, enacting a local ban on facial recognition tech. Oakland, California has become the third city in the nation to move forward with a facial recognition tech ban, as KPIX reports:
by Tim Cushing on (#4KPE5)
Maybe the LAPD doesn't have the experience its counter-coastal counterpart has in inflicting damage to rights and liberties, but it's trying, dammit! The NYPD's brushes with the Constitution are numerous and perpetual. The LAPD may have spent more time working on the Fourth and Fifth Amendments during its Rampart peak, but now it's rolling up on the First Amendment like a repurposed MRAP on a small town lawn.
by Mike Masnick on (#4KP9J)
As you may have heard, a couple weeks ago, President Trump hosted what he called a "social media summit," where he brought in various Trump-supporting social media people, and where they all got to whine about the completely made up concept of anti-conservative censorship on social media sites (and, because I know the same three of you are going to show up in the comments and scream your heads off that I'm being blind to such censorship: you have yet to show any actual evidence to support your claims -- and, no, a few anecdotes of trolls, assholes, revisionists and propagandists being blocked does not actually prove your point). Trump gave a long speech at that event, most of which made literally no sense. However, he seemed pretty damn sure that social media sites are censoring conservatives.
by Daily Deal on (#4KP9K)
The Microsoft PowerShell Certification Bundle has 3 courses to get you up to speed with PowerShell. You'll cover a variety of topics crucial to understanding PowerShell so you can automate small daily tasks and improve your work efficiency. Gradually, you'll scale up to more complex tasks. You'll also discover how to automate your daily work related to Active Directory management, and how to integrate with non-Microsoft products as well. It's on sale for $19. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Karl Bode on (#4KP4T)
You may be shocked to learn this, but nearly all of the promises AT&T made in the lead-up to its $86 billion merger with Time Warner wound up not being true. The company's promise that the deal wouldn't result in price hikes for consumers? False. The company's promise that the deal wouldn't result in higher prices for competitors needing access to essential AT&T content like HBO? False. AT&T's promise that it wouldn't hide Time Warner content behind exclusivity paywalls? False. The idea that the merger would somehow create more jobs at the company? False. Of course, the press and public aren't the only folks AT&T misled. To win the support of the telecom sector's biggest union, the Communications Workers of America, AT&T apparently promised that newly acquired Time Warner (and subsidiary) workers would be able to join the union. But when the time came to actually allow those employees in, guess what? AT&T suddenly declared that wouldn't be happening for the vast majority of them:
by Mike Masnick on (#4KNHS)
It's becoming a tradition. A week ago, we wrote about a Friday evening news "leak" (almost certainly from Facebook) about the FTC approving a settlement with Facebook over privacy violations. And this past Friday evening, there was a similar news dump about a similar settlement with YouTube (though at a much lower dollar amount). In both cases, the Friday evening news dump was almost certainly deliberate -- in the hopes that by Monday, something bigger would have caught the news cycle's attention. Thankfully, we don't work that way.

Let's cut to the chase, though. No one (outside of, perhaps, YouTube/Google/Alphabet execs) is "happy" with this. Pretty much everyone will point out, accurately, that a "multi-million dollar" fine is effectively meaningless to YouTube. No one believes that this will magically lead to a world in which internet companies take privacy more seriously. No one believes this will lead to a world in which anyone's privacy is better protected. And while I'm sure some people will complain about the amount (pocket change for Google), I'm not sure the amount really makes much of a difference. Remember last week's angry response to the $5 billion that the FTC is allegedly getting from Facebook: that's a much higher amount -- by a massive margin -- than the largest the FTC has ever gotten from a company before.

Perhaps there's a larger issue here: this system of expecting private companies and overworked, understaffed federal (or state) agencies to somehow manage our privacy for us does not work -- no matter what your viewpoint on all of this is. Perhaps we should be looking for solutions where users themselves get better direct control over their data, and aren't reliant on giant fines or government bureaucrats "protecting" it for them. Because if we're just going to go through this charade over and over and over again, it's not clear what the benefit is for anyone. If you don't trust Google/Facebook, then no fine is going to be enough. If you do trust them to hold onto the data they collect, then this whole thing feels like a bit of privacy theater. No one ends up happy about it, and nothing is actually done to protect privacy.

I've been pointing out for a while now that we're bad at regulating privacy because most people don't understand privacy, and I think these kinds of things are a symptom of that. There's this amorphous concept out there of "privacy," and people -- egged on by media stories that aren't always accurate -- have a concern that the companies don't do a very good job protecting our privacy. And they're right about that. But there's no agreement on what privacy means or how you actually "protect" it. And the only tools in the toolbox right now are fines or crazy, confusing, misguided regulations that seem to only lock in large players and hand them an even more dominant position (allowing them to do more things that people are uncomfortable with). There needs to be a better approach -- and it has to be one that starts from first principles about what it is that we're actually trying to accomplish here, and what will actually get us there. What we have now is not that.
by Leigh Beadon on (#4KM9N)
This week, our first place winner is Gary with a simple and important take on the idea of the government seizing pharma patents:
by Leigh Beadon on (#4KJV7)
Five Years Ago

This week in 2014, new revelations from Edward Snowden painted a bad picture of the culture at GCHQ while, in an interview, he also described the NSA practice of "routinely" passing around intercepted nude photos -- something the agency quickly insisted it would stop if it knew about it. The NSA was also saying it had more emails from Snowden from when he still worked for the agency, but would not release them. Also this week in 2014: Google finally dumped its ill-fated real-names policy, the MPAA was going after Popcorn Time, and the Supreme Court refused the Arthur Conan Doyle estate's last-gasp attempt to stop Sherlock Holmes from becoming public domain.

Ten Years Ago

This week in 2009, we saw the ninth misguided lawsuit over trademarks in Google AdWords, the Guinness Book of World Records used a bogus takedown to try to hide the record of a very embarrassing website fail, New Zealand was considering copyright reform but not really anything meaningful, and the newly-hugely-popular So You Think You Can Dance was blocked from doing a Michael Jackson tribute. A Norwegian ISP was fighting back against the Pirate Bay ban, the National Portrait Gallery was threatening Wikimedia over downloading public domain images, and Stephen Fry stepped up as an ally against corporate copyright abuse.

Fifteen Years Ago

This week in 2004, the CEO of Streamcast was presenting evidence of collusion among record labels to blacklist file-sharing companies, while a somewhat unclear study was suggesting BitTorrent usage was way up. The RIAA was predictably defending the INDUCE Act (which it basically wrote) in a letter full of misleading and untrue statements, while at the same time some people were asking if its new anti-filesharing system Audible Magic was in violation of wiretapping laws, and its counterpart in Canada was fighting against a court ruling that said ISPs don't have to turn customer names over to the industry.
by Timothy Geigner on (#4KHSS)
As I mentioned when we recently discussed Dean Guitars' pushback and counter-suit against Gibson Guitar's trademark lawsuit, Gibson CEO James Curleigh's vague declaration of a relaxed position on IP enforcement has calcified into something of an official corporate program. It's not all bad, but it's not all good either. We'll start with the good. Gibson has decided to recognize that there are fans inspired by its designs who want to create their own guitars and even sell them on occasion. In recognition of this, Gibson is starting an "authorized partnership" program to allow those creators to build guitars without fear of legal threat.
by Karl Bode on (#4KHJW)
By now, the half-baked security in most internet-of-things (IoT) devices has become a bit of a running joke, leading to amusing Twitter accounts like Internet of Shit that highlight the sordid depth of this particular apathy rabbit hole. And while refrigerators leaking your Gmail credentials and tea kettles that expose your home network are entertaining in their own way, it's easy to lose sight of the fact that the same half-assed security in the IoT space also exists in most home routers, your car, your pacemaker, and countless other essential devices and services your life may depend on. Case in point: just about two years ago, security researchers discovered some major vulnerabilities in Medtronic's popular MiniMed and MiniMed Paradigm insulin pumps. At a talk last year, they highlighted how a hacker could remotely trigger the pumps either to withhold insulin doses or to deliver a lethal dose of insulin. But while Medtronic and the FDA warned customers about the vulnerability and eventually issued a recall, security researchers Billy Rios and Jonathan Butts found that, initially, nobody was doing much to actually fix or replace the existing devices. So Rios and Butts got creative in attempting to convey the scope and simplicity of the threat: they built an app that could use the pumps to kill a theoretical patient:
by Glyn Moody on (#4KHCS)
Carl Malamud is one of Techdirt's heroes. We've been writing about his campaign to liberate US government documents and information for over ten years now. The journal Nature has a report on a new project of his, which is in quite a different field: academic knowledge. The idea will be familiar to readers of this site: to carry out text and data mining (TDM) on millions of academic articles, in order to discover new knowledge. It's a proven technique with huge potential to produce important discoveries. That raises the obvious question: if large-scale TDM of academic papers is so powerful, why hasn't it been done before? The answer, as is so often the case, is that copyright gets in the way. Academic publishers use it to control and impede how researchers can help humanity:
by Mike Masnick on (#4KH5G)
In the past, law professor Eric Goldman has suggested that when it comes to infringing content, courts have an uncanny ability to ignore the actual law and make up their own rules in response to the belief that "infringement bad!" An ongoing lawsuit against Cloudflare seems to be a case in point. As covered by TorrentFreak, a judge has allowed a case against Cloudflare to move forward. However, in doing so, it seems clear that the judge is literally ignoring what the law says. The case itself is... odd. In the complaint, two makers of bridal dresses are upset about the sale of counterfeits. Now, if we're talking about counterfeits, you'd probably expect this to be a trademark lawsuit. But no: Mon Cheri Bridals and Maggie Sottero Designs are trying to make a copyright case out of this, because they're arguing that sites selling counterfeits are using their copyright-protected photos to do so. And Cloudflare is, apparently, providing CDN services to these sites that are selling counterfeit dresses using allegedly infringing photographs. It is odd to go after Cloudflare. It is not the company selling counterfeit dresses. It is not the company hosting the websites of those selling counterfeit dresses. It is merely providing CDN services to them. This is like suing AT&T for providing phone service to a counterfeit mail-order operation. But that's what's happening. From the complaint:
by Tim Cushing on (#4KH1F)
Palantir is the 800-pound gorilla of data analytics. It has created a massive surveillance apparatus that pulls info from multiple sources to give law enforcement convenient places to dip into the data stream. Law enforcement databases may focus on criminals, but Palantir's efforts focus on everyone. Whatever can be collected is collected. Palantir provides both the data and the front end, making it easy for government agencies to track not only criminal suspects, but everyone they've ever associated with.

Palantir is big. But being the biggest player in the market doesn't exactly encourage quality work or accountability. Multiple problems have already been noticed by the company's numerous law enforcement customers -- including the company's apparent inability to responsibly handle data -- but complaints from agencies tied into multi-year contracts are pretty easy to ignore. Palantir says it provides "actionable data." Sounds pretty cool, but in practice this means things like cops firing guns at innocent people because the software spat out faulty suspect/vehicle descriptions.

Agencies must see the value in Palantir's products, because few seem willing to ditch these data analytics packages. The company does a fairly good job of dropping a usable interface on top of its data haystacks. It sells well. And it's proprietary, which means Palantir can get into the policing business without actually having to engage in the accountability and openness expected of government agencies.

Fortunately for the public, government agencies still have to respond to public records requests -- even if the documents sought detail private vendors' offerings. Vice has obtained part of a user's manual for Palantir Gotham, which is used by a number of state and federal agencies. This software appears to be used by "fusion centers," the DHS-created abominations that do serious damage to civil liberties but produce very little usable intelligence. The manual [PDF] seems to be written for the California law enforcement agencies that work with local fusion centers. The amount of data Palantir's software provides access to is stunning:
by Daily Deal on (#4KH1G)
Cloud computing has revolutionized industry and changed the way businesses manage their digital infrastructure. The Cloud Computing Architect Certification Bundle has nine courses geared to help you get familiar with one of technology's fastest-growing fields. There are 3 introductory courses to introduce you to the basic concepts of cloud computing. After those, the other courses cover Microsoft Azure, AWS, and Google Cloud Platform. It's on sale for $39. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Stan Adams on (#4KGR5)
We've written a few times in the past about the serious problems with the CASE Act, a bill that would create a thriving industry of copyright trolling and shakedowns. On Thursday, the Senate Judiciary Committee passed the CASE Act out of committee, meaning that it could go to the floor for a full vote. Stan Adams, from CDT, has written a detailed and thoughtful critique, noting that even if there are good intentions behind the CASE Act, it has many, many problems. We're reposting it here, under CDT's CC-BY license.

Sometimes ideas based in good intentions are so poorly thought out that they would actually make things worse. This seems to be especially prevalent in the copyright world of late (I'm looking at you, Articles 15 and 17 of the EU Copyright Directive), but the most recent example is the Copyright Alternative in Small-Claims Enforcement Act of 2019 (CASE Act). This bill intends to give photographers and small businesses a more streamlined way to enforce their rights with respect to online infringements by reducing the costs and formalities associated with bringing infringement claims in federal court. Pursuing infringement claims can be expensive and time-consuming, so this may sound like a good thing, especially for rightsholders with limited resources. It is not.

The CASE Act would establish a quasi-judicial body within the Copyright Office (part of the legislative branch) empowered to hear a limited set of claims, make "determinations" about whether those claims are valid, and assign "limited" damages. The bill structures the process so that it is "voluntary" and lowers the barriers to filing claims so that plaintiffs can more easily defend their rights. Without the scare quotes, this description might sound like a reasonable approach, but that's because we haven't talked about the details. Let's start at the top.

The bill would establish a Copyright Claims Board (CCB) in the Copyright Office. This would not be a court and would be entirely separate from the court system. The only option to appeal any of the CCB's determinations, based on the CCB's legal interpretation, would be to ask the Register of Copyrights to review the decision. It would be theoretically possible to ask a federal court to review the determination, but only on the grounds that the CCB's determination was "issued as a result of fraud, corruption, misrepresentation, or other misconduct" or that the CCB exceeded its authority. So if you disagree with the CCB's legal interpretation, or even its competence to make a decision, you are out of luck. This raises red flags about potential due process and separation-of-powers problems under the Constitution.

The "small claims" part of the bill is also troubling, in that the CCB can award damages of up to $30,000 per proceeding. This amount is only considered small in the context of copyright statutory damages, which range from $750 to $30,000 per work infringed -- unless the infringement was willful, in which case damages can reach $150,000 per work. The $30K cap is a 2x-10x multiple of the maximum awards for small claims courts in 49 of 50 states. (Side note: what's going on, Tennessee?) So losing a single small-claims action before the CCB could be a financial disaster for many people, potentially for nothing more than uploading a few pictures to your blog.

You may be thinking, "I won't infringe copyright; I'll just make sure not to use any protected works." Here's why that will not be as easy as you might think. First, copyright is automatic. This means that when someone snaps a new photo, they immediately hold the rights to it. If you found a photo or other work that you wanted to use, you would need to get permission from the rightsholder. In some cases, determining who to ask is relatively easy. You may know the photographer, or there may be clues indicating who likely owns the rights, such as watermarks or attribution information (photo courtesy of x). However, the only sure way to identify the rightsholder for any given work is to check with the Copyright Office to see who registered the work.

Even though the Supreme Court recently ruled that the registration process must be completed (the Copyright Office must have either granted or denied the application for registration) before infringement claims can be filed, registration is not required to bring an action under the CASE Act. This leaves everyone (other than the original author/photographer) with no guaranteed way to determine who holds the rights to unregistered works. Even if you identified someone as a potential rightsholder, it could be difficult or impossible to verify their claim of ownership without official recognition by the Copyright Office. So even if you are acting in good faith and attempt to obtain permission before using a work, you may not be able to do so, and there is no guarantee that you will have obtained permission from the correct party -- leaving you exposed to claims via the CASE Act.

For example, you see an image (perhaps a vacation photo) on a friend's social media page and ask their permission to share it with your network. They agree and you share, not realizing that your friend copied that image from somewhere else, perhaps a travel company's website. Your friend did not have the rights to that photo, and you made and distributed an unauthorized copy, exposing you to the possibility of an infringement claim from the actual photographer. Sharing that single photo could cost you $7,500.

So, to recap: it may be impossible to obtain the correct permissions to use a work, and using a work with or without permission (relying on the fair use doctrine) may leave you exposed to claims of up to $30,000, which will be decided by a panel of non-judges whose decision you will have almost no way to appeal. Once their decision is final, you are also barred from relitigating your loss in federal court (unless you can prove fraud, etc.). You may remember that this process is "voluntary." Let's talk about what that means in reality.

The process created in the CASE Act allows defendants to opt out. Specifically, defendants are given 60 days from when they are notified of the claim to tell the CCB that they do not wish to be subject to the procedure. (This is how the bill's drafters hope to skirt around all the constitutional issues -- by getting people to voluntarily give up their due process rights and willingly accept the legal determinations of a non-judicial body.) So it's easy, right? Simply opt out.

Yes, for many would-be defendants, especially the more legally sophisticated ones like large internet companies, opting out of each claim brought against them is not likely to be difficult, even if it is time- and resource-intensive. However, think about what you might do if you received an envelope claiming to be from a governmental body you have never heard of, asserting that you are potentially liable for infringing copyright. Many would simply ignore it, or not understand the significance or the potential consequences. Others might perceive the notification as a form of phishing or a potential scam. Sixty days elapse, and you are now subject to the determinations of the CCB. The next letter you receive may be correspondence from a law firm (on behalf of the claimant) offering you a settlement deal that lets you buy your way out of the legal fight and the possibility of $30,000 in liability. Now what should you do: settle, or try to defend yourself at the risk of a higher liability amount?

This litigation model is often called "trolling," and the CASE Act sets up a process that serves that model well. Sure, the process is voluntary -- which means that only the least legally savvy people will be defendants. Yes, the statutory damages are reduced (compared to those available through federal courts), but they are still plenty high enough to push defendants toward settlement, especially given the limited options for appeal.

Despite its good intentions, the CASE Act is a legal disaster waiting to happen.
by Karl Bode on (#4KGA1)
Netflix has certainly enjoyed its flight to the top of the streaming heap, now serving video to 60.1 million US subscribers. That's more than pay TV giants like AT&T or even Comcast, who've done their best (via usage caps and lobbying shenanigans) to hamper Netflix's meteoric rise, without success. But there's some indication that the company may have started to reach its high-water mark. Netflix this week revealed it lost 130,000 subscribers last quarter, the company's first quarterly subscriber loss in history. The losses come despite Netflix having spent $3 billion on programming last quarter, and another $600 million to market its wares. The loss was quick to rekindle memories of Netflix's bungled Qwikster/price hike debacle from back in 2011:
by Tim Cushing on (#4KFZG)
The London Metropolitan Police's spectacular run of failure continues. Sky News reports that the latest data shows the Met's facial recognition tech is still better at fucking up than doing what it says on the tin.
by Glyn Moody on (#4KFET)
Moore's Law is well known, but many people think it's about how chip processing power keeps increasing. It's actually about the number and/or density of components on silicon. As such, it applies just as much to memory storage products as to processor chips. It's why you can now buy a one-terabyte microSD card for $449.99. Never mind the price: although it's steep, it will inevitably tumble over the next few years, just as happened with lower-capacity microSD cards. What's much more important is what you can store with one terabyte on a tiny, tiny card. Mashable has done the calculations:
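The back-of-the-envelope math behind calculations like Mashable's is easy to reproduce. Here's a minimal sketch in Python; the per-file averages (3 MB photos, 4 MB songs, 4 GB HD movies) are illustrative assumptions on my part, not figures from Mashable's article:

```python
# Rough capacity math for a 1 TB microSD card.
# Storage vendors use decimal units: 1 TB = 10^12 bytes, 1 MB = 10^6 bytes.
TB = 10**12
MB = 10**6

card_bytes = 1 * TB

# Assumed average file sizes, in MB (illustrative, not measured).
avg_sizes_mb = {
    "smartphone photo": 3,     # typical compressed JPEG
    "mp3 song": 4,             # ~4 minutes at 128 kbps
    "hd movie": 4000,          # ~4 GB per film
}

for item, size_mb in avg_sizes_mb.items():
    count = card_bytes // (size_mb * MB)
    print(f"{item}: ~{count:,} fit on the card")
```

Swapping in different size assumptions (RAW photos, lossless audio, 4K video) shrinks the counts accordingly, but the point stands: the card holds hundreds of thousands of typical files.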
by Timothy Geigner on (#4KF68)
Earlier this month, we discussed how Gibson Guitar CEO James Curleigh had recently announced a shift in the company's IP enforcement strategy to try to be more permissive. That has since calcified into an actual formal plan, but we'll get into that more in a separate post, because there is enough good and bad in it to be worth discussing. What kicked off Curleigh's reveal, however, was backlash from a recent lawsuit filed by Gibson against Armadillo Distribution Enterprises, the parent company of Dean Guitars. Dean sells several guitars that Gibson claims are trademark violations of its famed "Flying V" and "Explorer" body shapes. There are differences in the designs, to be clear, but there are also similarities. Even as Curleigh's plans for a more permissive IP attitude at Gibson go into effect, this lawsuit continues. But not without Armadillo punching back, it seems. In response to the suit, Armadillo has decided to counter-sue, claiming not only that Gibson's designs are too generic to be worthy of trademark protection, but also that Gibson's actions constitute interference with its legitimate business. We'll start with the trademarks.
by Karl Bode on (#4KEZW)
We've noted a few times now how the protectionist assault against Huawei hasn't been supported by much in the way of public evidence. As in: despite widespread allegations that Huawei helps China spy on Americans wholesale, nobody has actually been able to provide any hard public evidence proving that claim. That's a bit of a problem when you're talking about a global blackballing effort -- especially when previous investigations lasting as long as 18 months couldn't find evidence of said spying, and many US companies have a history of ginning up security fears simply because they don't want to compete with cheaper Chinese kit. That said, a new report (you can find the full thing here) dug through the CVs of many Huawei executives and employees, and found that a small number of "key mid-level technical personnel employed by Huawei have strong backgrounds in work closely associated with intelligence gathering and military activities." The full Twitter thread by the study's author is also worth a read:
by Tim Cushing on (#4KEQ0)
When the City of Baltimore agreed to settle with a victim of police brutality, it inserted the usual clauses that come with every settlement. There was the standard non-admission of wrongdoing, along with a "non-disparagement" clause the city's attorney told courts was used "in 95% of settlements" to prevent those being settled with from badmouthing the entity they sued. Ashley Overbey received a $63,000 settlement from the city over allegations she was beaten, tased, verbally abused, and arrested after calling officers to her home to report a burglary. When a local newspaper published a story about the settlement, the City Solicitor chose to disparage Overbey by saying she was "hostile" when the police arrived at her home. As the comments filled up with invective against Overbey, she showed up in person to fire back at her detractors, claiming the police had been in the wrong and detailing some of the injuries she suffered. The City -- which had chosen to skew public perception against Overbey by commenting on the settlement -- decided Overbey's defense of herself violated the non-disparagement clause. So it clawed back half of her settlement -- $31,500 -- for violating its STFU clause. Overbey sued again, claiming the clause violated her First Amendment rights. Now, seven years after police showed up at her home and treated her like the perpetrator -- rather than the victim -- of a crime, the Fourth Circuit Court of Appeals has ruled [PDF] these non-disparagement clauses are unconstitutional bullshit. The City argued Overbey's acceptance of the clause was actually an act of free expression. By opting for a payout, she was (and I am quoting the City here) "exercising her right not to speak in exchange for payment." Alternatively, it argued that even if the clause was an unconstitutional waiver of rights, the court had no reason to intercede and nullify it. The court agrees that it's a waiver of rights, but disagrees about what it's allowed to do about it:
by Tim Cushing on (#4KEK8)
Our nation's immigration agencies wield a considerable amount of power. So much power, in fact, that they're free to dump incoming immigrants off the space-time continuum at will. If a CBP officer decides a person isn't the age they say they are, they can alter the person's age so it matches the officer's beliefs.

How does the CBP accomplish this neat little trick? Well, oddly, it involves X-rays. A recent episode of This American Life details the surreal nature of this CBP-induced time warp -- one it inflicted (repeatedly!) on a 19-year-old Hmong woman coming to the United States to reunite with her fiance.

Yong Xiong was questioned by Customs officers at the Chicago airport. The CBP officer thought she was being trafficked and didn't believe the birth date on her passport. After a round of questioning meant to determine whether or not Yong was being trafficked, the officer arrived at the conclusion she was, despite marking "No" on ten of the eleven trafficking indicators.

So, how does the CBP try to determine someone's age when officers don't believe the person or the documents in front of them? They call in a dentist. Yong's teeth were x-rayed to determine her age. This may involve science on the front end, but the back end is mainly educated guesswork.

From This American Life's Nadia Reiman:
by Daily Deal on (#4KEK9)
Pay what you want for the Lean Six Sigma Certification Training Bundle and you get access to the Design of Experiments (DOE) course and the Measurement Systems Analysis course. If you beat the average price on the site, you'll unlock 6 more courses, including the Lean Six Sigma Green, Yellow, and Black Belt courses, the Statistical Process Control course, and more.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#4KEEM)
Drug prices are sky high. This is not news. Decades of incredibly dumb policy decisions have stacked up and brought us to this place, where drug prices -- especially for life-saving drugs -- would bankrupt most people. A huge part of the problem is our patent system and how we literally grant monopolies to companies over these drugs. Combine "life saving" with "monopoly" and, uh, you don't have to have a PhD in economics to know what happens to the price. Add in our fucked up and convoluted hospital and insurance healthcare system, in which prices are hidden from patients, and you have a recipe for the most insanely exploitative "marketplace" ever.

The NY Times has taken notice of this, and its editorial board recently put forth some partial solutions that could be implemented right away to ease the burden. This includes having the federal government flat-out seize patents:
by Karl Bode on (#4KDZK)
If you've spent any time on Twitter, you've probably seen a rising tide of folks expressing worry about the health impact of 5G.
by Tim Cushing on (#4KDKV)
Biometric databases have a hunger for data. And they're getting fed. Government agencies are shoving every face they can find into facial recognition databases. Expanding the dataset means adding people who've never committed a crime and, importantly, who've never given their explicit consent to have their personal details handed over to federal agencies.

Thanks to unprecedented levels of cooperation across all levels of government, the FBI and ICE are matching faces using data collected from millions of non-criminals. The agencies are apparently hoping this will all work out OK, rather than create a new national nightmare of shattered privacy and violated rights. Or maybe they just don't care.
by Timothy Geigner on (#4KD0S)
Between crowdsourcing and the explosion of indie video game developers, many of which are far more permissive on IP matters and far better at actually connecting with their fans, we are perhaps entering a golden age for fan involvement in the video games they love. And it's not just the indie developers getting into this game, either; the AAA publishers are, too. One example of this came up last year, when Ubisoft worked with HitRECord to allow fans of the Beyond Good and Evil franchise to submit potential in-game music creations. On HitRECord, other fans would be able to vote on and even remix those works. At the end of it all, any music Ubisoft used for Beyond Good and Evil 2 would be paid for out of a pool of money the company had set aside. Cool, right?

Not for some in the gaming industry itself. Many who work in the industry decried Ubisoft's program as denying income to those who make game music professionally. Others called Ubisoft's potential payment to fans for their creations "on-spec" solicitation, in which companies only pay for work that actually makes it into the game -- a practice generally seen as unethical in the industry. Except neither of those criticisms was accurate. Ubisoft specifically carved out a few places for fans to put music into the game, not the entire soundtrack. And the "on-spec" accusation would only make sense if these fans were in the game music industry, which they weren't. Instead, Ubisoft was simply trying to connect with its own fans and create a cool program in which those fans could contribute artistically to the game they love, and even make a little money doing so.

Fortunately, Ubisoft has apparently not let the criticism keep it from continuing these experiments, as the company has put out the call for the same sort of program for its next Watch Dogs game.
by Tim Cushing on (#4KCR4)
Another small victory for constitutional rights comes via the same federal magistrate who previously rejected another law enforcement request to compel production of fingerprints to unlock a phone.

In May, federal magistrate judge Ronald E. Bush said compelled production of fingerprints violates both the Fourth and Fifth Amendments. He declared the fingerprint application itself to be a search, one performed with the assistance of the suspect. There's the Fourth Amendment issue.

And since the government hadn't provided evidence tying the suspect to the phone, producing fingerprints would provide the government with testimonial evidence it didn't have. The government wanted to search the phone for "indicia of ownership" -- something it hoped to do after it had already compelled production of fingerprints. The government had no "foregone conclusion" to work with, so forcing a suspect to give up information only they know (namely, possibly verifying ownership by unlocking the phone) implicated his Fifth Amendment protection against being forced to testify against himself.

In this case, Judge Bush has handed down another denial [PDF]. Once again, the government wants to compel the unlocking of a device but doesn't have everything it needs. What the government does have isn't much. The evidence tying the suspect to child porn possession is mostly ephemeral: IP addresses, email addresses, and online accounts. Using this as probable cause, the government is asking to search electronics seized from a searched residence. (The government also wants to search the suspect's car, presumably in case any electronics are stashed there.)

As the court points out, the government wants to do things to a phone it hasn't shown will actually need to have this stuff done to it. It's working off an assumption, and that assumption isn't enough for the judge to agree to the government's proposed rights violations.
by Mike Masnick on (#4KCGH)
Daisy Soderberg-Rivkin, who used to work at Google as an in-house content moderator, has written a fascinating piece for the Washington Times, explaining just what a disaster Josh Hawley's anti-Section 230 bill would be for the internet. As we've discussed, Hawley's bill would require large internet companies to beg the FTC every two years for a "certificate" granting them Section 230 protections -- and they'd only get it if they could convince 4 out of 5 FTC Commissioners that their content moderation efforts were "politically neutral."

Soderberg-Rivkin points out how that will stifle the kind of "clean up" efforts that most everyone -- especially folks like Senator Josh Hawley -- often claims to want when complaining about all the "bad stuff" on social media. Remember, just before introducing this bill, Hawley was whining about all the bad and dangerous content on social media. Except, under his own damn bill, social media sites would be forced to keep that content up:
by Tim Cushing on (#4KC92)
The CIA is pushing for an expansion of a 37-year-old law in a way that would deter journalists from covering national security issues or reporting on leaked documents. Thanks to a disillusioned CIA case officer's actions in 1975, there are currently a few limits on what can or can't be reported about covert operatives working overseas.

In 1975, Philip Agee published a memoir about his years with the CIA. Attached to his memoir -- which detailed his growing discontent with the CIA's clandestine support of overseas dictators -- was a list of 250 CIA agents and informants. In response to this disclosure, Congress passed the Intelligence Identities Protection Act (IIPA), which criminalized disclosing the identity of covert intelligence agents.

The IIPA did what it could to protect journalists by limiting the definition of "covert agent" to agents serving overseas -- and then only those who were currently working overseas when the disclosure occurred. It also required the government to show proof the person making the disclosure was "engaged in a pattern of activities intended to identify and expose" covert agents. The law was amended in 1999 to expand coverage to include covert agents who had worked overseas within five years of the disclosure.

Now, the CIA is seeking to strip these protections from the IIPA. The agency wants the "overseas" requirement removed, allowing it (and other intelligence agencies) to designate whomever they want as "protected" by the IIPA in perpetuity. The removal of the overseas requirement eliminates the five-year period. Disclosing identities years after the fact would now be a criminal act.

The CIA has its reasons, as Trevor Timm reports. But they're the worst reasons.
by Mike Masnick on (#4KC53)
Every few years this kind of thing pops up. Some ignorant organization or policymaker thinks, "oh, hey, the easy way to 'solve' piracy is just to create a giant blacklist." This sounds like a simple solution... if you have no idea how any of this works. Remember, advertising giant GroupM tried just such an approach a decade ago, working with Universal Music to put together a list of "pirate sites" for which it would block all advertising. Of course, who ended up on that list? A bunch of hip hop news sites and blogs. Even the personal site of one of Universal Music's own stars was suddenly deemed an "infringing site."

These kinds of mistakes highlight just how fraught such a process is -- especially when it's done behind the scenes by organizations that face no penalty for overblocking. In such cases you always get widespread overblocking based on innuendo, speculation, and rumor, rather than any legitimate due process or court adjudication concerning infringement. Even worse, if there were actual infringement going on, one possible legal remedy would be getting the site to take down that content. Under a "list" approach, it's basically a death penalty for the entire site.

That's why it's especially ridiculous that WIPO, the World Intellectual Property Organization, a part of the UN, has decided to leap gleefully into the space with one of these "blacklists" of evil piratey sites.
by Daily Deal on (#4KC54)
Rather than bogging down your pockets with a separate power bank and charging cable, the Nomad 1.5M Battery Lightning Cable streamlines both into a single sleek and compact design. It features a durable, nylon-wrapped, MFi-certified Lightning cable that's designed to shrug off normal wear and tear, plus a high-capacity 2,800mAh portable battery that packs enough juice to bring a dead iPhone 8 back to full charge. It's on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#4KC04)
To some extent we've had this discussion before, as part of other discussions about the regulation of content online, but it's worth calling it out explicitly: regulating internet infrastructure services the same as internet edge service providers is a really bad idea. And yet, here we are. So few people seem to even care enough to make a distinction.

So, let's start with the basics: "edge providers" are the companies that provide the internet services you, as an end user, interact with directly. Google, YouTube, Facebook, Twitter, Twitch, Reddit, Wikipedia, Amazon's e-commerce site -- these are all edge providers as currently built. Infrastructure providers, however, sit a layer (or more) down from those edge providers. They're the services that make the edge services possible. This can include domain registrars and registries, CDNs, internet security companies, and more. Companies like Cloudflare, GoDaddy, and Amazon's AWS are examples there.

While tons of people interact with infrastructure players all the time, your average person will never even realize they're doing so -- as the interactions tend to be mediated entirely by the edge providers. For a few years now we've been seeing attempts to move liability questions up (or, depending on your viewpoint, down) the stack from edge providers to infrastructure players. This raises a lot of significant concerns.

At the simplest level, a big part of the concern is that the only real "remedy" available to an infrastructure provider is to cease providing service to the edge provider altogether. This is an incredibly blunt instrument -- a single accusation of a legal violation could bring an entire service crashing down if the infrastructure provider is sufficiently spooked about the potential liability. In short: imagine what happens when a copyright holder sends a DMCA notice not to the site where an allegedly infringing image was uploaded, but to that site's domain registrar. If the registrar fears liability, it might revoke the domain entirely, pulling down an entire website (or, at least, the way most people access that website).

There may be good arguments for involving infrastructure providers in some cases -- perhaps the edge provider cannot be found or is deliberately ignoring actual legal notices. Then you might understand moving to a different level in the stack. But it should be justified. Instead, it seems like many are targeting infrastructure either because they don't understand the difference between infrastructure and edge... or because they know its remedy (complete removal of service) is so drastic that it'll have more impact.

Case in point: an Italian court has ordered Cloudflare to terminate the accounts of a few sites the court has determined to be pirate sites. Assuming these sites truly are engaged in infringing activities, it seems fine to hold the sites themselves accountable. But that's not what's happening here. Instead, the legal liability is being placed on Cloudflare, a company that provides CDN services but isn't the actual host of any of the content.
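The registrar scenario above can be sketched as a toy model (all domain and path names here are hypothetical, purely for illustration): an edge-level takedown can target one item, while an infrastructure-level takedown only has the whole domain as its lever.

```python
# Toy model of the "blunt instrument" problem: contrast an edge-level
# takedown (one piece of content) with a registrar-level one (whole domain).

# An edge provider hosts many pieces of content under one domain.
site_content = {
    "example-blog.com": {
        "/post/1": "original article",
        "/post/2": "allegedly infringing image",
        "/post/3": "another original article",
    }
}

def edge_takedown(content, domain, path):
    """Edge-level remedy: remove only the specific allegedly infringing item."""
    content[domain].pop(path, None)

def registrar_takedown(content, domain):
    """Infrastructure-level remedy: the registrar's only lever is the whole
    domain, so every page disappears at once."""
    content.pop(domain, None)

edge_takedown(site_content, "example-blog.com", "/post/2")
assert "/post/1" in site_content["example-blog.com"]  # rest of the site survives

registrar_takedown(site_content, "example-blog.com")
assert "example-blog.com" not in site_content  # the entire site is gone
```

The asymmetry between the two functions is the whole point: the registrar has no path-level remedy, so any liability pressure aimed at it translates into total removal.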
by Karl Bode on (#4KBH1)
Buried underneath the blistering hype surrounding fifth-generation (5G) wireless is a quiet but growing consensus: the technology is being over-hyped, and early incarnations were rushed to market in a way that prioritized marketing over substance. That's not to say that 5G won't be a good thing when it arrives at scale several years from now, but early offerings have been almost comical in their shortcomings. AT&T has repeatedly lied about 5G availability by pretending its 4G network is 5G. Verizon has falsely told everyone 5G will help cure cancer, but its actual deployments have been spotty and expensive.

5G device support barely exists. Apple is in no rush to get its first 5G phones to market. The promise of 5G as a competitive and rural coverage panacea has been vastly overstated. And most surveys suggest US consumers (who already pay some of the highest data prices in the developed world) are more interested in lower bills than faster speeds. All of which is to say that 5G isn't quite the Earth-shattering revolution it has been heralded as by carriers and network vendors eager to sell more cell phones and network hardware.

There's another wrinkle being noticed by some of the folks putting these networks through their paces. Qualcomm's first-generation 5G modem chipsets appear prone to overheating in summer temperatures, something oddly missing from the industry's marketing hype. It's a problem plaguing numerous carriers, according to Sascha Segan at PCMag:
by Glyn Moody on (#4KB5G)
One of the reasons that Techdirt and many others fought so hard against the worst ideas of the EU Copyright Directive is that it was clearly the thin end of the wedge. If things like upload filters and the imposition of intermediary liability become widely implemented as the result of legal requirements in the field of copyright, it would only be a matter of time before they were extended to other domains. Netzpolitik has obtained a seven-page European Commission paper sketching ideas for a new EU Digital Services Act (pdf) that suggests doing exactly that. The Act's reach is extremely wide:
by Timothy Geigner on (#4KAJC)
When it comes to cable cord-cutting and the set-top box vs. streaming revolution, I have always argued that professional and college sports play an outsized role. In fact, sports programming is one of the few threads by which the cable television industry is currently hanging. Some leagues have made better use of these trends than others, with Major League Baseball still representing the gold standard in sports streaming and the other major sports leagues riding along in its wake. And, yet, one of the most common complaints about streaming copyright infringement one can find out there is about the live-streaming of professional sports. While much of this comes from the broadcast partners of these leagues, the leagues themselves still make a significant amount of noise about pirated sports streams.

It's never made sense to me. Sports league revenues are generally dominated by two categories: merchandise and advertising revenue. The former gets boosted by the maximum number of eyeballs on the product, while the latter is something of a complicated mess, given that ad revenues have traditionally gone to broadcast partners, which translates into large contracts with revenue flowing from the broadcasters to the leagues. Despite that complication, the interest here is in advertising revenue. I wrote the following paragraph way back in 2012:
by Mike Masnick on (#4KA8N)
Two years ago, we wrote about a stunning (and horrifying) study that explained how patents deeply contributed to the opioid crisis. It described the lengths that drug companies -- including OxyContin maker Purdue Pharma -- went to in order to block any and all generic competition. It was quite a story.

However, on a recent episode of Terry Gross's "Fresh Air," she interviewed medical bioethicist Travis Rieder about his new book, In Pain. It tells the story of how, even as a "medical bioethicist," Rieder himself got addicted to opioids after a severe motorcycle accident -- and was then shocked to find that none of his doctors either knew how, or cared enough, to help him get off the painkillers. The story is fascinating -- and harrowing.

Deep into the discussion, however, one part caught my attention. Rieder tells a story about how, rather than putting him on opioids, his doctors could have just given him acetaminophen:
by Karl Bode on (#4KA2A)
We've noted a few times now that while Facebook gets a lot of justified heat for its privacy scandals, the stuff going on in the cellular data and app markets in regards to location data makes many of Facebook's privacy issues seem like a grade-school picnic. That was pretty well highlighted by a wave of massive scandals showing how your daily location data has long been collected by cellular companies, then sold to a laundry list of dubious individuals and organizations -- outfits that have repeatedly failed to prevent this data from being abused by everyone from law enforcement to stalkers.

The Ajit Pai FCC has yet to lift a finger or so much as scold the companies for their cavalier treatment of private user data. And while cellular giants like AT&T, Verizon, Sprint, and T-Mobile say they've stopped the practice in light of these scandals, nobody has actually bothered to confirm this. Given the billions to be made, it's certainly possible these companies have just made a few modest changes to what's collected, who they sell this data to, and what they call this collection, but are still monetizing your daily location habits in some fashion.

Enter the EFF, which this week filed a new class action lawsuit against AT&T (pdf). The suit seeks an injunction to ensure that AT&T can no longer collect and sell this data. The class action represents several California AT&T users who say they were never informed of, nor gave consent for, their location data being used in this fashion:
by Cathy Gellis on (#4K9SA)
Yesterday we wrote about a bad Section 230 decision against Amazon from the Third Circuit. But shortly before it came out, the Sixth Circuit had issued its own decision determining that Section 230 could not protect Amazon from another products liability case -- though not for the same reason.

First, the bad facts, which may be even worse: the plaintiffs had bought a hoverboard via Amazon, and it burned their house down (while two of their kids were inside). So they sued Amazon, as well as the vendor who had sold the product.

From a Section 230 perspective, this case isn't quite as bad as the Third Circuit's Oberdorf decision. Significantly, unlike the Third Circuit, which found Amazon to be a "seller" under Pennsylvania law, here the Sixth Circuit did not find that Amazon qualified as a "seller" under the applicable Tennessee state law. [p. 12-13] This difference illustrates why the pre-emption provision of Section 230 is so important. Internet platforms offer their services across state lines, but state laws can vary significantly. If their Section 230 protection could end at each state border, it would not be useful protection.

But although this case turned out differently than the Third Circuit case and the Ninth Circuit's decision in HomeAway v. City of Santa Monica, it channeled another unfortunate Ninth Circuit decision: Barnes v. Yahoo. In Barnes, Yahoo was protected by Section 230 from liability for a wrongful user post. After all, it was not the party that had created the wrongful content. Because it couldn't be held liable for it, it also couldn't be forced to take it down. But Yahoo had offered to take the post down anyway. It was a gratuitous offer, one it didn't have to make. But, per the Ninth Circuit, once Yahoo made it, Section 230 provided no more protection from liability arising from how Yahoo fulfilled that promise.

Which may, on the surface, sound reasonable, except consider the result: now platforms don't offer to take posts down. It just doesn't pay to try to be so user-friendly, because if the platform can't get things exactly right on that front, it can be sued, since, per the Ninth Circuit, Section 230 ceases to provide any protection. (And even if the platform might not ultimately face liability, it would still have to endure an expensive lawsuit to get there.) So with this case the Ninth Circuit ended up chilling platform behavior we would have been better off encouraging. It may have won the battle for this plaintiff (her lawsuit could proceed), but it lost the war for the rest of the public.

This case from the Sixth Circuit presents a similar problem. Amazon did not have to do anything with respect to hoverboard sales, but it created liability problems for itself when it did. Eventually it banned them, but more at issue is that it sent an email to purchasers indicating there had been reports of problems with them:
by Mike Masnick on (#4K9MB)
While so many of the discussions and debates about content moderation focus on a few giant platforms -- namely Facebook, YouTube, and Twitter -- it's fascinating to see how they play out in other arenas. Indeed, one of the reasons why we're so concerned about efforts to "regulate" content moderation practices on social media is that focusing on the manner in which those big, centralized platforms work could serve to stifle newer, more innovative platforms, whose very setup may inherently deal with the "problems" in the first place (see my protocols, not platforms discussion for one example).

There are a few interesting platforms out there trying to take a different approach to nearly everything -- and one of the more well known is Mastodon, an open source "federated" system that is somewhat like Twitter. If you somehow have missed the Mastodon boat, I'd recommend the long piece Sarah Jeong wrote about it two years ago, which is a pretty good intro to the topic. The really short version, though, is that anyone can set up their own Mastodon community and, if others so choose, they may "federate" with other Mastodon communities. You could build a Mastodon instance that is totally isolated from others, or you could build one that connects to others and allows "toots" to flow from one instance of Mastodon to others. And, of course, the federating can change over time. It's kind of neat in that it allows for multiple communities, which can set different rules, norms, and standards, and thus you get much more widespread experimentation. And, unlike a fully centralized system like Twitter, the ability for different instances to just "go their own way" if they disagree allows for much greater flexibility, without a centralized content moderation impossibility.

I'm still more interested in fully decentralized, protocol-based systems, but a federated system like Mastodon, which allows for a distributed set of mini-centralized instances that can join together or separate as needed, is still pretty fascinating.

However, it all got more interesting earlier this month when the social network Gab moved to Mastodon. If you haven't followed this space at all, Gab likes to call itself the "free speech alternative" to Twitter, but in practice that has meant it's the place where many trolls, racists, and other general assholes have gathered after being kicked off of Twitter. Gab announced, back in May, that it was planning to shift its platform to Mastodon, setting up its own instance. In theory, this solved some "problems" Gab had been facing -- starting with the fact that Apple and Google had removed Gab's mobile app from their app stores (something Gab sued over, in a strategy that was not very successful). Since there are a bunch of Mastodon apps that allow users to log into any particular Mastodon instance, Gab itself made it clear that this was a key reason for the move.

Of course, building on top of someone else's better-tested open source code probably also helps Gab with the long list of technical issues the site was having. And then there's the pure troll factor. Besides harboring social media trolls, Gab, as a company, has always gleefully taken on a trollish role in the way it operates. And, considering that part of the very reason Mastodon's creator, Eugen "Gargron" Rochko, set up Mastodon in the first place was to build an alternative to Twitter free of Nazis, assholes, and trolls... it was a truly trollish move to jump onto that platform and at least imply to many a plan to "invade" (or, perhaps we should say, brigade) the wider "fediverse" of Mastodon.

The switchover happened earlier this month and it's been fascinating to watch how it's all played out. The shortest summary might be that the federated model has proven somewhat resilient so far. Mastodon itself put out a statement urging various Mastodon instances not to federate with Gab, and also suggesting that the various Mastodon app developers blacklist Gab's domains from their apps (meaning Gab's plan to use this to get back into the app stores might not work as well as planned).

The Verge has a long, in-depth article about how all of this is playing out, and it seems like, as a federated system is designed to do, different parts of the system are experimenting and figuring out what makes sense. Most of the other instances have decided they don't want to federate with Gab.
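The federation mechanics described above can be sketched as a minimal toy model (this is not Mastodon's actual code, and the domain names are hypothetical): each instance keeps its own list of domains it refuses to federate with, so defederating from a problem instance is a purely local decision.

```python
# Toy model of instance-level federation: each instance decides for itself
# which peer domains it will accept "toots" from.

class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.blocked = set()   # domains this instance refuses to federate with
        self.timeline = []     # toots received from federated peers

    def block(self, domain):
        """Defederate: stop accepting anything from this domain."""
        self.blocked.add(domain)

    def deliver(self, from_domain, toot):
        """Incoming federation traffic; drop toots from blocked instances."""
        if from_domain not in self.blocked:
            self.timeline.append((from_domain, toot))

home = Instance("mastodon.example")
home.block("gab.example")  # this instance's local moderation choice

home.deliver("friendly.example", "hello fediverse")
home.deliver("gab.example", "this never arrives")

assert home.timeline == [("friendly.example", "hello fediverse")]
```

Because the blocklist lives on each instance rather than in any central authority, the "resilience" described above emerges instance by instance: no one can force federation on a community that opts out.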
by Daily Deal on (#4K9MC)
The SunFounder Robotic Arm Edge Kit for Arduino is designed for DIY electronics hobbyists learning robot arm control. Built around the open-source Arduino UNO MCU and a servo expansion board, the robot arm is easy to use and fun to play with. You can control its four axes with the 4 potentiometer buttons, as well as make them move from your computer. In addition, it can memorize the movements it has made and repeat them again and again, making it a great tool for repetitive tasks. It's on sale for $55.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#4K9EV)
There are so many myths about Section 230 that seem to need debunking. There's the myth that it requires platforms to be neutral. There's the myth that if you moderate too much you "lose" your status as a "platform." There's the myth that Section 230 of the CDA was "a gift" to big tech. None of those are true, and we've gone into great detail over the past few years about how Section 230 is designed to encourage the most "good" content and discourage the most "bad" content. It's designed as a pretty straightforward balance, and it actually does a pretty good job of striking it.

However, along with the claims that 230 is a "gift" to tech companies comes the similarly unfortunate myth that 230 is somehow "exceptional" or that it treats internet companies "different than any other company." This has never been true. Instead, it's really about properly applying liability to the party actually violating the law, rather than putting the blame on the tools and services they use to violate the law. Brent Skorup and Jennifer Huddleston at the Mercatus Center have now put out an interesting paper highlighting how -- far from being a "unique gift" to internet companies -- Section 230 was merely the codification of basic common law principles regarding liability.

The paper carefully traces the history of liability in common law, finding that for decades preceding Section 230, general common law principles had converged on a concept of "conduit liability," which is more or less what we see in Section 230: you don't blame the "conduit" for merely passing along the message.
by Karl Bode on (#4K8Y7)
When the FCC recently released its "new" robocall plan, consumer groups quickly noted that it wasn't really new and didn't actually accomplish much of anything. Outside of making it clear that carriers could implement robocall-blocking tech by default, most of the plan was just a rehash of past (inadequate) FCC policies. Worse, the plan fixates almost exclusively on "scam" calls (when "legit" telemarketers and debt collectors are the biggest sources of unwanted calls), and does absolutely nothing to punish carriers that fail to implement either call-blocking tech or call authentication tech (to thwart number "spoofing").

Another criticism of the plan was that it opened the door to letting carriers use the robocall scourge as an excuse to charge consumers even more money for protection most think should be included free by default. For example, Harold Feld, a lawyer for consumer group Public Knowledge, predicted just this when I spoke to him about the FCC's (not really) new plan back in May:
|
![]() |
by Glyn Moody on (#4K8E4)
Smartphones are not just amazing pieces of technology that pack a range of advanced capabilities into a pocket-sized device. They are also the best tracking devices invented so far. They reveal where we are, and what we are doing, every minute we have them with us. And the most amazing aspect is that we carry them not because we are forced to do so by authoritarian governments, but willingly.

A permanent state of surveillance is something most people just accept as the price of using mobile phones. But for one class of users, the built-in tracking capabilities of smartphones are far worse than just annoying. For spies -- especially more senior ones -- the information revealed by their mobile phones is not just embarrassing, but poses a serious threat to their future operational usefulness.

That's evident from a new investigation carried out by the Bellingcat team in partnership with various media organizations. Techdirt was one of the first to write about Bellingcat's use of "open source information" -- material that is publicly available -- to piece together the facts about what are typically dramatic events. The latest report from the group is slightly different, in that it draws on mobile phone data leaked by a whistleblower in Russia. According to Bellingcat's research, the account seems to be that of the mid-ranking Russian military intelligence (GRU) officer Denis Sergeev:
|
![]() |
by Timothy Geigner on (#4K7XZ)
A short while ago, we discussed a rather concerning lawsuit brought by AM General LLC, the company that makes Humvees, against Activision, the game publisher that occasionally publishes Call of Duty games that include depictions of Humvees. AM General's claims are pretty silly, suggesting that players of the games will think those games were somehow created by or endorsed by AM General. I can't imagine that's the case; instead, most people are likely to think that Activision is attempting realism in its warfare games, since you basically cannot make an American warfare game accurately without including Humvees. Activision's response was on First Amendment grounds, arguing that its games are partly a historically accurate work of art, for which including Humvees is accurate and fair use.

As we pointed out in our original post, this case has great implications for the wider video game industry. Because of that, perhaps it's not hugely surprising to see that the Entertainment Software Association has jumped into the case with an amicus brief arguing for the granting of Activision's summary judgment motion. The whole thing is worth reading, but you can tell that the ESA's viewpoints on this are framed by the wider gaming industry.
|
![]() |
by Tim Cushing on (#4K7MR)
The EFF has published a primer on IMSI catchers. Harris Corporation's success in this market has led to near-genericide, as almost every one of these cell tower spoofers is usually referred to as a "stingray."

The white paper [PDF], titled "Gotta Catch 'Em All," runs down what's known about cell-site simulators used by a number of government agencies. Most of this has been gleaned from secondhand info -- the stuff that leaks out during prosecutions or as the result of FOIA requests.

The technical capabilities of CSSs have been kept under wraps for years. The reasoning behind this opacity is that if criminals know how these devices work, they'll be able to avoid being tracked by them. There may be a few technical details that might prove useful in this fashion, but what is known about Stingray devices is that the best way to avoid being tracked by them is to simply not use a cellphone. But who doesn't use a cellphone?

The report is definitely worth reading, even if you've stayed on top of these developments over the past several years. It breaks down the technical subject matter in a way that makes clear what CSSs can and can't do -- and how they're capable of disrupting cellphone networks while in use.

While CSSs can intercept communications, it's hardly worth the effort. Unless the CSS can talk the phone into accepting a 2G connection (which eliminates encryption and severely limits the type of communications originating from the dumbed-down phone), it just doesn't work. This doesn't mean the devices are never used this way. But it does mean it's not a very attractive option.

On the other hand, CSSs impersonate cell towers, so they're able to pull all sorts of info from every device forced to connect with the faux cell tower. These devices are used most often to locate criminal suspects, meaning precise GPS location is a must-have. Operating on their own, cell-site simulators can't generate pinpoint accuracy. Working in conjunction with nearby towers, they can triangulate signals to provide better location info. But there's another option -- one rarely discussed in courtroom proceedings. CSSs can also force phones to give up precise location info.

First, the Stingray extracts info from nearby cell towers. Using this info (which the EFF points out anyone can access), the CSS alters its signal to become the highest priority connection in the area of operation. Once it's done this, GPS info can be coaxed from phones now connected to the fake cell tower.
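The reselection trick described above can be reduced to a toy model: a handset that simply camps on the advertised highest-priority, strongest cell will happily pick a simulator that reads its neighbors' broadcasts and then advertises parameters just above them. This is only a sketch of the selection logic, not any real baseband behavior; the tower names and numbers are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Tower:
    name: str
    priority: int    # broadcast reselection priority (higher wins)
    signal_dbm: int  # received signal strength (closer to 0 is stronger)

def select_tower(towers):
    """Pick the cell a naive handset would camp on:
    highest advertised priority, then strongest signal."""
    return max(towers, key=lambda t: (t.priority, t.signal_dbm))

# Legitimate towers observed in the area (illustrative values).
legit = [
    Tower("macro-A", priority=5, signal_dbm=-85),
    Tower("macro-B", priority=4, signal_dbm=-70),
]
print(select_tower(legit).name)

# A cell-site simulator reads those broadcasts, then advertises a
# slightly higher priority so nearby phones prefer it.
css = Tower("simulator", priority=6, signal_dbm=-60)
print(select_tower(legit + [css]).name)
```

In the toy model the handset camps on "macro-A" until the simulator appears, at which point it wins selection outright -- which is why the EFF notes that the broadcast parameters a CSS needs are information anyone can read off the air.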
|