Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-21 19:46
EU Commissioner Gunther Oettinger Admits: Sites Need Filters To Comply With Article 13
EU Commissioner Gunther Oettinger -- well known for being a bit of a bigoted Luddite -- basically crafted the plan that became the EU Copyright Directive when he was the Commissioner for the "Digital Economy" (despite not knowing anything about it). As you may recall, for many months now, supporters of Article 13 (now Article 17) have insisted that it doesn't require filters. They would shout down anyone who pointed out that it clearly does require filters.

But now that it's passed the EU Parliament, the truth is coming out. Last week, we noted that France's culture minister admitted it required filters to comply (and that he wanted sites to start installing them as soon as possible). And now, Oettinger himself is admitting that filters are required under the Directive (translated via Google):
FamilyTreeDNA Deputizes Itself, Starts Pitching DNA Matching Services To Law Enforcement
One DNA-matching company has decided it's going to corner an under-served market: US law enforcement. FamilyTreeDNA -- last seen here opening up its database to the FBI without informing its users first -- is actively pitching its services to law enforcement.
Office Depot And Partner Ordered To Pay $35 Million For Tricking Consumers Into Thinking They Had Malware
I have worked in the B2B IT services industry for well over a decade. Much of that time was spent on the sales side of the business. As such, I have become very familiar with the tools and tactics used to convince someone that they are in need of the type of IT support you can provide. One common tactic is to use software to do an assessment of a machine to determine whether it's being properly maintained and secured. If it is not, a simple report showing the risks tends to be quite persuasive in convincing a prospective client to sign up for additional support.

Done the right way, these reports are factual and convincing. Done the Office Depot way, it seems only the latter is a requirement. The FTC announced on its site that Office Depot and its support partner, Support.com, Inc., have agreed to pay $35 million to settle a complaint in which the FTC alleged that consumers were tricked, using a computer health application, into thinking their machines were infected with malware when they often were not.
Techdirt Podcast Episode 206: Charter Cities & Innovative Governance
We're pretty optimistic about innovation here at Techdirt, but it isn't an automatic good thing all by itself: implementation and intention matter, and that means political entanglements and complicated questions about governance, and that calls for big, new ideas. This week, we're joined by Tamara Winter from the Center for Innovative Governance Research to discuss new experiments in government like charter cities and special autonomous zones.

Also, learn more about Shenzhen, the Val D'Europe, and the original vision for Disney World.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Don't Repeat FOSTA's Mistakes
Some of the most fruitful conversations we can have are about nuanced, sensitive, and political topics, and no matter who or where we are, the Internet has given us the space to do that. Across the world, an unrestricted Internet connection allows us to gather in online communities to talk about everything from the mundane to the most important and controversial, and together, to confront and consider our societies' pressing problems. But a growing chorus of U.S. politicians is considering dangerous new policies that would limit our ability to have those complex conversations online.

The Chair of the U.S. House Homeland Security Committee, Bennie Thompson, is urging tech companies to prioritize the removal of “sensitive, violent content” from their online platforms. But as we were worried might happen, the Chair didn’t stop there—he’s also threatening new legislation if the companies don’t move quickly.

In a letter written shortly after the heartbreaking shooting in New Zealand, which the shooter had livestreamed on multiple platforms, Rep. Thompson told Google, Facebook, Microsoft, and Twitter that if they don’t act, “Congress must consider policies to ensure that terrorist content is not distributed on your platforms, including by studying the examples being set by other countries." Calling for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly when the major online platforms have failed to address how they can be exploited by individuals who broadcast or amplify hate and violence to unsuspecting users. Some might even argue that more aggressive moderation is a lamentable but needed shift in the online landscape.

But the desire to hold platforms legally accountable for the content that users post often backfires, expanding to silence legitimate voices, especially those that have long sought to overcome marginalization.
These policies reward platforms for their censorship rather than for their ability to distinguish bad speech from good, or for meaningfully updating their business models to address how they’re feeding into this behavior. This is not to mention how the high technical bar required to implement the policies reinforces the dominance of the major platforms, which have the resources to comply with the new regulation, while new, innovative competitors do not. And if those policies are enacted into law—as has happened in other countries—the results are magnified, as platforms move to censor normal, everyday speech to protect themselves from liability.

FOSTA Provides Clear Evidence Of How These Regulations Fail

Congress doesn’t need to look at other countries for examples of how these sorts of policies might play out. Less than a year ago, it passed FOSTA, ostensibly to fight sex trafficking. Digital rights advocates, including EFF, fought against FOSTA in Congress because they feared its passage would threaten free expression online by criminalizing large portions of online speech and targeting sex workers and their allies. Groups that work closely with sex workers and sex trafficking victims warned Congress that the bill could put both consensual sex workers and sex trafficking victims in even more danger. Horribly, these warnings appear to have come true, as sex workers have reported being subject to violence while also being shut out of online platforms that they relied on to obtain health and safety resources, build communities, and advocate for their human rights.

FOSTA sent a wider shock wave through cyberspace, resulting in takedowns of content and censorship that many wouldn’t expect to result from such a law. Although a wide range of plaintiffs are fighting the bill in court, some of the damage is already done.
Some websites made changes explicitly as a result: Craigslist, for example, shut down its entire personals section, citing the risk the law created for them. Other small, community-based platforms shut down entirely rather than deal with FOSTA’s crippling criminal and civil liability. And although we cannot be certain that recent policy changes at online platforms such as Tumblr and Facebook were the direct result of the law, they certainly appear to be. Tumblr banned all sexual content; Facebook created a new “sexual solicitation” policy that makes discussion of consensual, adult sex taboo.

Regardless of a direct link to FOSTA, however, it’s readily apparent that digital rights advocates’ worst fears are coming true: when platforms face immense liability for hosting certain types of user speech, they are so cautious that they over-correct and ban a vast range of discussions about sex, sexuality, and other important topics, because they need to steer far clear of content that might lead to legal liability. Given the incredible chilling effect that FOSTA has had on the Internet and on the community of sex workers and their allies who relied on online platforms, Internet users need to ensure that Congress knows the damage any law aimed at shifting liability for “terrorist” content to platforms would cause.

A bill that makes platforms legally responsible for “terrorist content”—even one that seems like it would only impact a small range of speech—would force platforms to over-censor, and could affect a range of people, from activists discussing strategies and journalists discussing newsworthy events to individuals simply voicing their opinions about the real and terrible things that happen in our world. Banishing topics from the Internet stunts our ability to grow and solve issues that are real and worthy of our full attention. These types of regulations would not just limit the conversation—they would prevent us from engaging with the world's difficulties and tragedies.
Just as an automated filter is not able to determine the nuanced difference between actual online sex trafficking and a discussion about sex trafficking, requiring platforms to determine whether a discussion of terrorist content is the same as terrorist content—or face severe liability—would inevitably lead to an over-reliance on filters that silence the wrong people, and as with FOSTA, would likely harm those who are affected by terrorist acts the most.

Online platforms have the right to set their own policies, and to remove content that violates their community standards. Facebook, for example, has made clear that it will take down even segments of the horrendous video that are shared as part of a news report, or posts in which users “actually intended to highlight and denounce the violence.” It’s also updated its policy on removing content that refers to white nationalism and white separatism. But formally criminalizing the online publication of even a narrow definition of “terrorist content” essentially forces platforms to shift the balance in one direction, resulting in them heavily policing user content or barring certain topics from being discussed at all—and potentially silencing journalists, researchers, advocates, and other important voices in the process.

Remember: without careful—and expensive—scrutiny from moderators, platforms can’t tell the difference between hyperbole and hate speech, sarcasm and serious discussion, or pointing out violence versus inciting it. As we’ve seen across the globe, users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules.
Facebook has deactivated the personal accounts of Palestinian journalists, Chechen independence activists, and even a journalist from the United Arab Emirates who posted a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s filters and content moderators failed to grasp.

Creating Liability for Violent Content Would Be Unconstitutional

Assuming members of Congress make good on their promise to impose legal liability on platforms that host “sensitive, violent content,” it would be plainly unconstitutional. The First Amendment sharply limits the government’s ability to punish or prohibit speech based on its content, especially when the regulation targets an undefined and amorphous category of “sensitive, violent content.” Put simply: there isn’t an exception to the First Amendment for that category of content, much less one for extremist or terrorist content, even though the public and members of Congress may believe such speech has little social value or that its dissemination may be harmful. As the Supreme Court has recognized, the “guarantee of free speech does not extend only to categories of speech that survive an ad hoc balancing of relative social costs and benefits.” Yet this is precisely what Chairman Thompson purports to do.

Moreover, although certain types of violent speech may be unprotected by the First Amendment, such as true threats and speech directly inciting imminent lawless activities, the vast majority of the speech Chairman Thompson objects to is fully protected.
And even if online platforms hosted unprotected speech such as direct incitement of violent acts, the First Amendment would bar imposing liability on the platforms unless they intended to encourage the violent acts and provided specific direction to commit them.

The First Amendment also protects the public’s ability to listen to or otherwise access others’ speech, because the ability to receive that information is often the first step before exercising one’s own free speech. Because platforms will likely react to the threat of legal liability by simply not publishing any speech about terrorism—not merely speech directly inciting imminent terrorist attacks or expressing true threats, for example—this would deprive platform users of their ability to decide for themselves whether to receive speech on certain topics. This runs directly counter to the First Amendment, and imposing liability on platforms for hosting “sensitive, violent content” would also violate Internet users’ First Amendment rights.

Around the World, Laws Aimed At Curbing Extremist Speech Do More Harm Than Good

If Congress truly wants to look to other countries as an example of how policy may be enacted, it should also look at whether or not that country’s policy has been successful. By and large, requiring platforms to limit speech through similar regulations has failed, much as FOSTA has.

In France, an anti-terrorism law passed after the Charlie Hebdo shooting “leaves too much room for interpretation and could be used to censor a wider range of content, including news sites,” according to the Committee to Protect Journalists. Germany’s NetzDG, which requires companies to respond to reports of illegal speech within 24 hours, has resulted in the removal of lawful speech. And when democratic countries enact such regulations, more authoritarian governments are often inspired to do the same.
For example, cybercrime laws implemented throughout the Middle East and North Africa often contain anti-terrorism provisions that have enabled governments to silence their critics.

The EU’s recently proposed regulation—which would require companies to take down “terrorist content” within one hour—might sound politically popular, but would be poisonous to online speech. Along with dozens of other organizations, we’ve asked that MEPs consider the serious consequences that passing this regulation could have on human rights defenders and on freedom of expression. Asking companies to remove content within an hour of its being posted essentially forces them to bypass due process and implement filters that censor first and ask questions later.

If anyone should think that our government would somehow overcome the tendency to abuse these sorts of regulations, take note: just this month, the Center for Media Justice and the ACLU sued the FBI for refusing to hand over documents related to its surveillance of “Black Identity Extremists,” a “new domestic terror threat” that, for all intents and purposes, it seems to have made up. Government agencies have a history of defining threats without offering transparency about how they determine those definitions, giving them the ability to determine whom to surveil with impunity. We should not give them the ability to decide whom to censor on online platforms as well. While allowing Internet companies to self-moderate may not be a perfect solution, the government should be extremely careful in considering any new regulations that would limit speech—or else it will be wading into ineffective, dangerous, and unconstitutional territory.

Reposted from the EFF's Deeplinks blog.
Ariana Grande Demands All Photographers At Her Concerts Transfer Copyright To Her, NPPA Revolts
We've seen plenty of ridiculous demands from performing artists over the years as to what photographers can and cannot do while attending their performances. This sort of thing typically amounts to a desire for some kind of control over which images get released and which don't. That kind of attempt at control is silly, of course, and runs counter to the journalistic principles that many of these photographers employ.

But if you want to see the really batshit crazy extreme of all this, we need apparently only look to the rules that Ariana Grande's tour puts on photographers.
Daily Deal: VAVA Voom 23 IPX6 Rugged Portable Speaker
Bring your soundtrack on the go, wherever you go, with the VAVA Voom 23. With a 5,200mAh rechargeable battery, this Bluetooth speaker streams audio for up to 24 consecutive hours at an 80% volume level. With that battery life and its rugged engineering, you can take it on hikes, climbs, and all types of outdoor adventures. This little 8" speaker was built for the outdoors, with an integrated carabiner to attach to your bag, and the ability to withstand sand, dust, water, drops, knocks, impacts, and the wear of daily usage. It's on sale for $25.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Welcome To The Prude Internet: No More Sex Talk Allowed
While we talk about Section 230 of the Communications Decency Act, we almost never talk about any other section of the law. And there's a good reason for that: a few years after it was put into law, every other part of the CDA was ruled unconstitutional. The original portion of the CDA that is no longer law criminalized the knowing transmission of "obscene or indecent" messages to anyone under 18, or anything "that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs." The Supreme Court, rightly, judged that this was a clear 1st Amendment violation.

However, with last year's passage of FOSTA beginning to eat away at CDA 230, we're actually moving back to a world described in the original CDA -- one where plenty of "sexual" content is being barred, in part out of a fear of getting sued under FOSTA. Take, for example, the writer Violet Blue, whom we've linked to many times in the past. Last week, she revealed that Amazon has now cut off her Associates account, which she had been using to support herself for years.
The FTC Says It's Totally Cool With Anti-Competitive Internet Fast Lanes
As we've noted for a while, the FCC's attack on net neutrality did much more than just kill net neutrality. It also gutted much of the FCC's authority over broadband providers entirely, making it harder than ever for the agency to police the behavior of historically anti-competitive giants like Comcast NBC Universal and AT&T Time Warner. What authority the government now has to oversee one of the more broken sectors in American industry got shoveled instead to the FTC, an agency critics say lacks the authority or resources to police broadband. That's the entire reason ISP lobbyists pushed for the plan.

Yet throughout the repeal, broadband providers and FCC head Ajit Pai stated that people didn't need to worry because if ISPs did anything wrong, the FTC and antitrust enforcement would stand as a last line of defense. But any expectations that modern, eroded antitrust authority would protect consumers and competitors were quickly ruined by the recent AT&T and Time Warner legal face plant, widely mocked as one of the more clueless rulings in tech policy history.

And last week, Trump FTC boss Joseph Simons made it abundantly clear that the FTC isn't likely going to be helping much either. Again, throughout the repeal efforts, folks like FCC Commissioner Brendan Carr penned editorials like this one, insisting that post net neutrality, agencies like the FTC would be quick to crack down on anti-competitive ISP actions like "paid prioritization," which lets a company buy a competitive advantage from an ISP:
The EU's Catastrophic Copyright Directive Can Still Be Stopped, If Governments Of Sweden And Germany Do The Right Thing
Last week, the EU's Copyright Directive was passed by the European Parliament. Its supporters have wasted no time in dropping the mask, and revealing their true intent: installing upload filters on the Internet. First, France's Minister of Culture announced a "mission to promote and supervise content recognition technologies". More recently, EU Commissioner Günther Oettinger has confirmed that upload filters will be unavoidable. It's cold comfort that those who said that Article 13 (now officially Article 17) would inevitably bring in upload filters have now been proved right.

However, it turns out that the situation is not completely hopeless. Even though the vote in the European Parliament was the main hurdle the new copyright law needed to clear, there is one more stamp of approval required before it goes into effect. The little-known EU Council of Ministers must also agree, and it seems that is not a foregone conclusion.

Everything hinges on Sweden. As an article on the Bahnhof site (original in Swedish) explains, Sweden has previously voted in favor of the EU Copyright Directive, but can still change its mind. One way of achieving that is through a special parliamentary committee that helps to formulate Sweden's EU policy. The Swedish government's Web page about the committee says:
Facial Recognition Tech Now Capable Of Getting You Kicked Out Of The Mall
Facial recognition tech continues its kudzu-like growth. It's not just government contractors providing tech for law enforcement and security agencies. It's also making inroads in the private sector -- a place where there's even less oversight of its use.

Cameras everywhere have long been part of retailers' operations. But retailers are now adding third-party facial recognition software to the mix, further increasing the chance innocent people will be punished for software screw-ups.
Court Documents Show Canadian Law Enforcement Operated Stingrays Indiscriminately, Sweeping Up Thousands Of Innocent Phone Owners
A wide-ranging criminal investigation involving eleven suspects has resulted in the reluctant disclosure of Stingray data by Canadian law enforcement. The Toronto PD and the Royal Canadian Mounted Police joined forces to deploy a surveillance dragnet that swept up thousands of innocent Canadians, as Kate Allen reports for the Toronto Star.
Another California City Allowed Police To Destroy Misconduct Records Ahead Of New Transparency Law
California law enforcement agencies knew the reckoning was coming. A new law took effect at the beginning of this year, opening up records of police misconduct and use of force to the public for the first time. Some decided to engage in preemptive legal challenges. Some quietly complied. Some decided to ignore the law's author and pretend it didn't apply to any record created before 2019.

A couple of law enforcement agencies got really proactive and just started destroying records before the public could get its hands on them. The Inglewood PD got the green light from the city government to destroy hundreds of records subject to the new transparency law. The city and the PD claimed this was just regular, periodic housecleaning. But the timing seemed ultra-suspicious, given that it happened only days before the law took effect. Not that it matters. The records are gone and all the bad press in the world isn't going to bring them back.

KQED brings us some more bad press targeting a police department. And, again, it's not going to unshred the destroyed records. But it is important to call out the hugely disingenuous actions of the Fremont Police Department, which chose to greet the impending transparency with a final blast of opacity.
Mark Zuckerberg To Congress: Okay, Fine, Please Regulate Me And Lock In My Dominant Market Position
Let's get a few things out of the way: Facebook deserves much of the crap it's gotten over the past few years. Indifference to serious problems, bad management, and worse practices have put it at the receiving end of a ton of bad press.

At the same time, however, the company's near-total inability to do the right things means that when it actually does try to do the right things (like increasing accountability, transparency, and due process in its content moderation practices), people freak out and attack the company. Sometimes people will automatically suspect the worst possible motives. Other times, they'll completely twist what is being proposed into something else. And sometimes their expectations just aren't reasonable. (Of course, sometimes people will be correct that Facebook is just fucking things up again.)

It makes sense that the company may be frustrated by the impossible position it finds itself in, where any solution it trots out gets the company attacked, even when it might actually be a good idea. But that does not mean it should just throw in the towel. Yet it's difficult not to read Mark Zuckerberg's new op-ed in the Washington Post (possible paywall) as an exasperated throwing up of his hands, as if to say, "fine, fuck it, no one likes what we're doing, so here, government, you take over."
Telecom Lobby Suddenly Pretends To Care About Accurate Broadband Maps
For a country that likes to talk about "being number one" a lot, that's sure not reflected in the United States' broadband networks, or the broadband maps we use to determine which areas lack adequate broadband or competition (resulting in high prices and poor service). Our terrible broadband maps are, of course, a feature, not a bug; ISPs have routinely lobbied to kill any efforts to improve data collection and analysis, lest somebody actually realize the telecom market is a broken mono/duopoly whose dysfunction reaches into every aspect of tech.

If you want to see our terrible broadband maps at work, you need only visit the FCC's $350+ million broadband availability map, which is based on the Form 477 data collected from ISPs. If you plug in your address, you'll find that not only does the FCC not include prices (at industry behest), the map hallucinates speed and ISP availability at most U.S. addresses. Part of the problem is that the FCC declares an entire region "served" with broadband if just one home in a census block has service. Again, ISPs fight efforts to reform this in a bid to protect the status quo.

Curiously, USTelecom (which is a policy and lobbying vessel for AT&T and Verizon) last week launched a new PR campaign in which it professes to be immensely troubled by the country's terrible broadband maps. As such, in an editorial over at CNET, the group stated it's now "leading the charge" in better broadband mapping via several new trials it's conducting in Missouri and Virginia:
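To see why the census-block rule overstates coverage so badly, here's a minimal toy calculation (the function name and data are hypothetical illustrations, not actual Form 477 filings): under a Form 477-style rule, every location in a block counts as "served" if any single location in that block has service.

```python
# Toy sketch of how block-level "served" flags inflate coverage.
# All names and data here are hypothetical, for illustration only.
from collections import defaultdict

def block_level_coverage(locations):
    """locations: list of (census_block_id, has_broadband) tuples.
    Returns (form477_rate, actual_rate): the share of locations counted
    as served under a block-level rule vs. the share actually served."""
    blocks = defaultdict(list)
    for block_id, served in locations:
        blocks[block_id].append(served)

    total = len(locations)
    # Block-level rule: if any location in a block is served,
    # every location in that block counts as served.
    form477_served = sum(len(flags) for flags in blocks.values() if any(flags))
    actual_served = sum(served for _, served in locations)
    return form477_served / total, actual_served / total

# Example: one served home in a 10-home block marks all 10 as "served".
data = [("B1", True)] + [("B1", False)] * 9 + [("B2", False)] * 10
form477, actual = block_level_coverage(data)
print(form477, actual)  # → 0.5 0.05
```

In this sketch, a single connected home makes the reported coverage rate ten times the real one for its block, which is the kind of gap the article describes.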
Daily Deal: Pay What You Want: The Complete Learn to Design Bundle
Pay what you want for The Complete Learn to Design Bundle and you'll learn about Canva, a cloud-based design alternative for those overwhelmed by Photoshop and Illustrator. It's a simple tool you can use to create logos, posters, business cards, and more. If you beat the average price ($7.48 as of the time of writing), you'll unlock 9 more courses covering Photoshop, InDesign, and Illustrator. There are also courses focused on learning graphic design, logo design, and typography.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Getty Images Sued Yet Again For Trying To License Public Domain Images
Back in 2016, we wrote about two separate lawsuits involving claims that Getty Images was selling "licenses" to images it had no rights to sell licenses to. The first one was brought by photographer Carol Highsmith, who sued Getty after Getty had sent a demand letter to her over her own images, which she had donated to the Library of Congress to be put into the public domain. That lawsuit mostly flopped when Getty pointed out (correctly) that Highsmith had no standing, seeing as she had given up the copyright in the photos. The second lawsuit was even more bizarre, involving questions about Getty's rights to various collections it licensed, and whether it had changed the metadata on photos from photo agency Zuma Press. At the time, we noted that little in that lawsuit seemed to make sense, but it still went on for over two years before Getty prevailed, with the court basically saying the only mistakes were made by Zuma.

Well, now we've got another lawsuit against Getty over allegedly licensing public domain images. This one was brought by CixxFive Concepts, and... also seems to be a stretch. How much of a stretch? Well, it starts out by alleging RICO violations, and as Ken "Popehat" White always likes to remind everyone: IT'S NOT RICO, DAMMIT. This lawsuit is also not RICO, and it's not likely to get very far.
7th Circuit Punts On Border Smartphone Searches; Says Riley Decision Doesn't Affect Anything
The "border search" exception again trumps the Constitution. The Seventh Circuit Court of Appeals has determined [PDF] that the Supreme Court's Riley decision, which implemented a warrant requirement for phone searches, does not apply at our hypersensitive border areas. In this case, the border area affected is [squints at ruling] Chicago's O'Hare Airport.

Anyway, CBP and DHS investigators had their eyes on a man returning from a suspicious trip to the Philippines. Suspecting the man was engaged in sex tourism, CBP officers stopped him upon his return. Lots of things didn't add up, so the CBP asked for permission to search his phone. The officers made it clear this request was simply them being polite. They were going to search his devices anyway.
Here Comes The Splinternet: How The EU Is Helping Break Apart The Internet
In the wake of last week's unfortunate decision by the EU Parliament to vote for the terrible EU Copyright Directive, Casey Newton over at the Verge has a thoughtful piece about how this could lead to the internet splitting into three.
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is Stephen T. Stone with a short and simple response to the EU's approval of Article 13:
Game Jam Winner Spotlight: God Of Vengeance
We're down to our second-to-last winner from our public domain game jam, Gaming Like It's 1923! This week, we're looking at the winner of Best Adaptation, for the game that most faithfully and meaningfully adapted its source material: God of Vengeance by JR Goldberg.

One of the great things about remix culture, and one of the reasons the public domain is so important, is that creators can turn old work into something completely new with a different meaning, or something that subverts or critiques the original's purpose — but there's also a lot to be said for faithful adaptations that carry an old work's meaning forward into a new era and a new medium. And that's what God of Vengeance does with the Yiddish language play of the same name — which was first translated into English and performed in America in 1923, and led to an obscenity charge, conviction, and eventual successful appeal. Based on that, you can probably figure out that this dramatic, improvisational roleplaying game is not for everyone and certainly not for children — but for those prepared to explore its subject matter, including domestic violence, sex work, and a Jewish crime family in Poland, it promises to be an engaging and challenging exercise.

God of Vengeance needs four players who take on the roles in the play, with three playing the main characters and one playing the ensemble of other smaller characters. Each receives a brief description of their character and motivations, such as:
Police Misconduct Records Show California Police Officer Busting Sober Drivers For DUI
Not every law enforcement agency is refusing to comply with California's new transparency law. Effective January 1st, the law makes police misconduct and use of force records accessible to the public for the first time in the state's history.

The state's attorney general isn't happy. Neither are many of the state's law enforcement agencies. And the state's law enforcement unions are definitely opposed to the new transparency, not to mention the law's apparently retroactive reach. But while the unions are busy trying to keep the law from exposing historical misconduct records, some law enforcement agencies are quietly complying with both the letter and the intent of the law.

The Modesto Bee is one of the first beneficiaries of the new law. It has obtained misconduct records dating back to 2003 from the Modesto Police Department. The details contained in these are exactly the reason law enforcement unions are fighting so hard to keep these records out of the public's hands.
Journalist Maria Ressa Arrested Yet Again As Philippines Keeps Finding Bogus Reasons To Arrest Vocal Critic
As we've discussed before, reporter Maria Ressa is a powerhouse journalist, who started an important Filipino news site, Rappler.com. Rappler has been (quite reasonably) highly critical of the Filipino government under President Duterte, and over the past few years, the Duterte government has responded with a bunch of highly questionable criminal complaints against Ressa, which all appear to be in direct violation of the country's 4th Amendment, which is a near carbon copy of the American 1st Amendment. It forbids any law that abridges the freedom of the press (among other things).

And yet... for over a year now, the government has been trying to claim that Rappler violated the so-called anti-Dummy law in the Philippines. Apparently, the Philippines has a law that says, in certain types of industries, Filipino companies cannot have foreign ownership (this, by itself, already seems silly, but leaving that aside...). Rappler does not have any foreign owners. However, it did receive a grant from the well-known Omidyar Network, and in order to receive the grant, Rappler used a semi-complicated system called a Philippine Depository Receipt (PDR), in which the company sells these assets to Omidyar, and the assets are pegged to the value of shares in the company, but they grant no ownership benefits or rights. The Filipino government has said for a while that these create a "dummy status," pretending Omidyar isn't really taking an ownership stake when it is.

All of that is nonsense, though. This is entirely about intimidating Ressa and Rappler. Last month she was arrested on bogus "cyber libel" charges (over violating a law that wasn't even a law when the supposed "libel" happened). And now, on arriving back in the country from a journalism conference abroad, Ressa was immediately arrested yet again.
As Rappler notes, this is actually the 11th case filed against Rappler, its directors, and its staff since the government first claimed that the Omidyar grant violated the law.

This is shameful, if not surprising, coming from the Duterte government. Of course, it also demonstrates just how scared they are of a tiny independent news organization. If that's the case, it makes you wonder just what it is they're afraid Rappler will be reporting going forward...
Alabama Court Decides Publicity Rights Trump First Amendment In S-Town Lawsuit
We've written for some time about the scourge that is publicity rights law and the fairly blatant way it tends to butt up against the First Amendment. While famous folk certainly do have the right to reserve the use of their likenesses and names from those who would use either for commercial purposes, too often these laws are instead used to silence non-commercial speech, or speech that is part of journalistic efforts. A famous person, for instance, cannot use publicity rights laws to keep a newspaper from printing factual information about them, or a moviemaker from producing a documentary about them. This is First Amendment 101.

But it seems some in the legal field skipped that class. One judge in Alabama has decided to shoulder the First Amendment to one side and instead favor the state's publicity rights laws, allowing a lawsuit against the producers of famed podcast S-Town to move forward.
Three Lessons In Content Moderation From New Zealand And Other High-Profile Tragedies
Following the terrorist attacks on two mosques in Christchurch, New Zealand, social media companies and internet platforms have faced renewed scrutiny and criticism for how they police the sharing of content. Much of that criticism has been directed at Facebook and YouTube, both platforms where video of the shooter's rampage found a home in the hours after the attacks. The footage was filmed with a body camera and depicts the perpetrator's attacks over 17 minutes. The video first appeared on Facebook Live, the social network's real-time video streaming service. From there, Facebook says, it was uploaded to a file-sharing site, the link posted to 8Chan, and began to spread.

While the world struggles to make sense of these horrific terrorist attacks, details about how tech companies handled the shooter's video footage and written manifesto have been shared, often by the companies themselves. Collectively, these details, in combination with the public discourse on and reaction to what the New York Times referred to as "a mass murder of, and for, the internet," have made clear three fundamental facts about content moderation, especially when it comes to live and viral content:

1. Automated Content Analysis is Not a Magic Wand

If you remember nothing else about content moderation, remember this: there is no magic wand. There is no magic wand that can be waved to instantly remove all of the terrorist propaganda, hate speech, graphically violent, or otherwise objectionable content. There are some things that automation and machine learning are really good at: functioning within a specific and particular environment (rather than on a massive scale) and identifying repeat occurrences of the exact same (completely unaltered) content, for example. And there are some things they are really bad at: interpreting nuance, understanding slang, and minimizing discrimination and social bias, among many others.
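The "exact same, completely unaltered content" limitation is easy to see with ordinary cryptographic hashes, which the most naive matching systems rely on. A minimal sketch (the byte strings below are hypothetical stand-ins for uploaded files, not any real platform's data):

```python
import hashlib

# Two "uploads" that differ by a single appended byte -- roughly what a
# watermark, re-encode, or crop does to a file at the byte level.
original = b"...video bytes..."
altered = b"...video bytes...X"

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(altered).hexdigest()

# An exact-hash blocklist built from h1 will never match the altered copy.
print(h1 == h2)  # False
```

Perceptual systems like PhotoDNA tolerate small alterations by hashing visual features rather than raw bytes, which is why they fare better against re-encodes — but they too can be defeated by larger edits, which is part of why no filter is a magic wand.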
But perfect enforcement of a complex rule against a dynamic body of content is not something that automated tools can achieve. For example, the simple change of adding a watermark was enough to defeat automated tools aimed at removing video of the New Zealand shooter.

Some, then, have suggested banning all live video. However, that overlooks activists' use of live streams to hold government accountable and report on corruption as it is happening, among other uses. Further, the challenges of automated content analysis are by no means limited to video. As a leaked email from Google to its content moderators reportedly warned: "The manifesto will be particularly challenging to enforce against given the length of the document and that you may see various segments of various lengths within the content you are reviewing."

All of this is to reiterate: there is no magic wand, and there never will be. There is absolutely a role for automated content analysis when it comes to keeping certain content off the web. Use of PhotoDNA and similar systems, for example, has reportedly been effective at ensuring that child pornography stays off platforms. However, the nuance, news value, and intricacies of most speech should give pause to those calling for mass implementation of automated content removal and filtering.

2. The Scale, Speed, and Iterative Nature of Online Content – Particularly in This Case – is Enormous

It is a long-standing fact of the internet that it enables communication on a vast scale. Reports from YouTube and Facebook about the New Zealand attack seem to indicate that this particular incident was unprecedented in its volume, speed, and variety. Both of these companies have dedicated content moderation staff, and it would be easy to fall into the trap of thinking that this staff could handily keep up with what seems to be multiple copies of a single live video. But that overlooks a couple of realities:
What If Google And Facebook Admitted That All This Ad Targeting Really Doesn't Work That Well?
You may have heard the famous line from early department store magnate John Wanamaker that "half the money I spend on advertising is wasted; the trouble is I don't know which half." Over the past decade or so, various companies have argued that their ability to provide a ton of data, combined with whatever algorithmic magic they could throw at their platforms, could lead to a magical mythical world in which there were perfectly targeted advertisements. And, of course, in the past few years there have been literally just two places where advertisers believe they can get perfectly targeted advertisements that don't waste half (or more) of their ad spend: Google and Facebook.

The end result of this thinking is that Google and Facebook need to engage in what people refer to as "surveillance capitalism": collecting a ton of data on everyone, building a huge profile about every user, and snooping on basically everything everyone does all day. This is why people have been getting more and more annoyed about the privacy trade-offs over the past few years (though not so annoyed that they've stopped using these platforms in any significant way -- though that could happen). It has also resulted in advertisers assuming that they must put the bulk of their ad dollars into those two platforms on the assumption that the money is better spent there. Indeed, the most recent IAB report noted that while the internet ad market continues to grow, 90% of the growth went to Facebook and Google (together the two companies represent about 58% of the total market share for online ads, but 90% of the growth in 2017).

Advertisers have been completely sucked into the belief that if you want to get results for your ads, you simply have to throw money at those two giants, and they'll mix some magic pixie dust with all the data they've collected, and voila: perfectly targeted advertising.
Everyone gets so focused on magic words like "big data," "artificial intelligence," and "machine learning" that they rarely ask the larger question: does any of it actually matter?

As more and more questions are raised about the data practices of Facebook and Google, it seems worth questioning whether they actually need to be collecting all this data, and how much of a loss it actually is if they don't. Just recently, Facebook announced that -- as part of a settlement with the ACLU -- it was drastically changing how it handles certain ads: specifically, it would no longer allow such granular targeting for housing, employment, or credit ads -- all three of which were seen in the past as leading to discriminatory outcomes. If such targeting really was important and useful, you'd think this would have caused Facebook's stock price to crater. Instead, it went up.

The little secret behind all of this that very few people want to admit is that, in most cases, super-targeted ads are crap. They don't perform well. That's because even if you're putting the ad in front of the right demographic, most of the time they don't care about or don't want whatever it is that you're pushing. Or it shows an ad for something you already have (or the ever-popular laugher: something you just bought and don't need to buy again).

Unfortunately, most advertisers don't quite realize this yet, and Google and Facebook are in no rush to tell anyone (though, frankly, they should be more upfront about all of this). Some are realizing it through other means. It didn't get that much attention, but back in January it was reported that, because of the GDPR, the NY Times stopped using behavioral targeting for ads... and found its revenue went up. The Times is doing much more basic targeting now: just contextual and geographical.

And, if anyone should know this, it should be Google.
For much of Google's existence, its big secret sauce was not deep knowledge about the people seeing the ads: it was just matching ads against their search terms. That is, just a bit of simple contextual information, rather than tying ads to a giant portfolio of data about you. It's really only over the last decade that Google has focused hard on building data profiles on everyone and "customizing" everything. There may be some advantages to some of those customizations -- and there are certain useful things that come with the data -- but better-targeted advertisements... don't really seem to be among them.

Frankly, if Facebook and Google want to get regulators off their backs, they might start by coming to terms with this basic fact themselves and choosing to stop collecting so much data on everyone. Recognizing that they can still build incredibly powerful ad-driven businesses without so much data would be a big step forward. Right now, unfortunately, it seems that everyone remains bought into the myth that they need this data, that their business models depend on this data, and that this data is actually useful in the advertising context. Bursting that myth might mean that advertisers aren't quite as enamored with Google and Facebook over the long haul (though they'd still spend a ton of money with them), but it might lead to a better overall experience for users, and a hell of a lot less regulatory pressure.
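Contextual targeting of the kind described above needs no user profile at all: the ad is picked from the words in the query or on the page. A toy sketch of the idea (the ad inventory and keywords here are made up for illustration, not any real ad system's API):

```python
from typing import Optional

# Hypothetical inventory: ad copy keyed by the keywords it should match.
ADS = {
    ("running", "shoes"): "SpeedFeet trainers -- 20% off",
    ("coffee", "maker"): "BrewPro drip machines",
}

def contextual_ad(query: str) -> Optional[str]:
    """Pick an ad from the query words alone -- no user data involved."""
    tokens = set(query.lower().split())
    for keywords, ad in ADS.items():
        # Serve the ad only if every keyword appears in the query.
        if set(keywords) <= tokens:
            return ad
    return None

print(contextual_ad("best running shoes for a marathon"))  # SpeedFeet trainers -- 20% off
```

Behavioral targeting layers a per-user profile on top of this; the article's point is that, for many advertisers, the contextual signal alone appears to capture most of the value.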
Daily Deal: The Complete Microsoft Data Analysis Expert Bundle
The Complete Microsoft Data Analysis Expert Bundle features 6 courses to help you master Microsoft data tools like Excel and Access. Start off with master classes in Excel and Access, then learn how to link the two. You'll learn about VBA and Power BI as well. The bundle is on sale for $20. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Section 230 Holds On As Grindr Gets To Use It As A Defense
It's not really possible to predict the outcome of a court case. No matter how convinced you are that things look to be heading one way, there are still a zillion ways things can turn out otherwise.

That said, however, I'm glad to discover that my cautious optimism about the Herrick v. Grindr case was not misplaced. This was a case where a terrible ex-boyfriend set up a phony Grindr profile for Herrick, which led to him being harassed by would-be suitors thinking it was genuine. It was an awful situation, and no one can fault Herrick for wanting to hold someone responsible. The problem was, if he were to succeed in holding the dating app liable, it would represent a serious weakening of Section 230's platform protection, which, as we've discussed many times, would lead to fewer online services and more censorship.

Grindr has now prevailed, however, and, perhaps more importantly, so has Section 230 as a defense in the Second Circuit (albeit in a non-precedential decision).
FTC Launches Probe Into Telecom Privacy Issues. But Whether They'll Act Is Another Matter Entirely
This week the FTC announced that it would be launching a broad privacy investigation into a sector that's somehow been forgotten during our collective, justified obsession with Facebook: telecom. According to the full FTC announcement, the agency will be collecting data from all manner of broadband providers and wireless carriers to take a look at how these companies "collect, retain, use, and disclose information about consumers and their devices." From the announcement:
Free Software Foundation Comes To Its Senses After Calling For EU To Fund Open Source Upload Filters
Most EU digital rights groups are still reeling from the approval of the EU Copyright Directive and its deeply-flawed idea of upload filters, which will seriously harm the way the Internet operates in the region and beyond. Matters are made even worse by the fact that some MEPs claim they blundered when they voted -- enough of them that Article 13 might have been removed from the legislation had they voted as they intended.

But one organization quick off the mark in its response was the Free Software Foundation Europe (FSFE), the local offshoot of the main Free Software Foundation. Shortly after the EU vote, it issued a press release entitled "Copyright Directive -- EU safeguards Free Software at the last minute". This refers to a campaign spearheaded by the FSFE and Open Forum Europe called "Save Code Share" that successfully sought to exclude open source software sharing from Article 13. As the press release said:
LAPD Watchdog Says Department's Data-Based Policing Is Producing Nothing But Wasted Time And Rights Violations
The Los Angeles Police Department has just received some bad news from its oversight body. It's probably good news for the policed -- many of whom are being disproportionately targeted thanks to biased input data -- but the LAPD can't be pleased that its reliance on expensive, mostly-automated tools hasn't produced worthwhile results.

The department relies on a handful of tech tools to aid in its policing, but they don't appear to be helping. It has CompStat -- a holdover from the early 2000s, when Bill Bratton still ran the department. To that framework, it has added LASER -- a nifty acronym that stands for "Los Angeles' Strategic Extraction and Restoration." The program with the reverse-engineered nickname actually relies on input from human analysts to determine where officers should be deployed. But this reliance on data-driven policing isn't making the city any safer, despite LASER's focus on violent crime.

Here's what the LAPD's human analysts put together for the department's patrol officers.
Australian Prosecutors Trying To Throw Reporters In Jail For Accurately Reporting On Cardinal George Pell's Conviction
As we've covered over the past few months, Australian courts put an absolutely ridiculous gag order on anyone trying to report on the conviction of Cardinal George Pell, the former CFO of the Vatican (often described as the 3rd most powerful person in the Vatican). Pell was convicted of sexually molesting choir boys in Australia in the 1990s. This is obviously quite newsworthy, but the courts used what's known as a "suppression order" in Australia to bar anyone from revealing the information. The reasoning was that there was still another trial pending for Pell over different accusations, and knowing he was convicted in one might somehow unfairly influence a jury. Of course, in the US we've long dealt with this through a process of vetting potential jurors on their familiarity with a case, and then simply barring just that juror pool from doing any further research on the issue -- and that system works mostly fine, without keeping the public in the dark about important news, and without stifling a free press.

Eventually the suppression order was lifted, after prosecutors decided to drop the second trial (which, at the very least, suggests that all this fuss to protect the sanctity of said second trial was silly all along). And yet, prosecutors then sent out a bunch of threatening letters to journalists -- most of whom did not report publicly on the case, but who did complain about the suppression order.

And now, to show just how far Australian prosecutors will go to spit on free speech and a free press, they are seeking jail time for members of the media over this whole mess:
Cohen Payment Kerfuffle Forces AT&T To Be Slightly More Transparent About Lobbying
Though it kind of flew under the radar given countless other scandals, you might recall how Trump fixer and former lawyer Michael Cohen was also busted selling access to the President. One of the companies involved in this particular aspect of Cohen's grift was AT&T, which was found to have doled out $600,000 to Cohen, presumably under the belief that it would gain additional access and influence.

AT&T has received more than a few favors from the Trump administration since, including an FCC willing to self-immolate on lobbyist demand, and the death of both broadband privacy and net neutrality rules at the agency. Not to mention the Trump tax cuts, which netted AT&T more than $20 billion up front, and at least $3 billion in savings annually in perpetuity. And while the Trump DOJ did sue to thwart the AT&T Time Warner merger, that may have had more to do with Trump's close ties to Rupert Murdoch -- and Trump's disdain for CNN -- than any animosity toward AT&T.

Aside from AT&T throwing top policy man Bob Quinn under the bus for behavior AT&T has engaged in for years, the company saw little to nothing in the way of accountability. Amusingly, the little accountability it did see came courtesy of AT&T's own investors. After the Cohen fiasco highlighted the secretive costs of AT&T's influence machine, some investors pushed AT&T for more transparency. The company recently responded by providing marginally more insight into the vast network of groups and organizations AT&T routinely pays to support its (usually anticompetitive and anti-consumer) policies:
9th Circuit's Bad AirBnB Decision Threatens Basic Internet Business Models
I'm not done excoriating the Ninth Circuit's recent decision dismissing Homeaway and Airbnb's challenge to the Santa Monica ordinance that holds them liable if their users illegally list their properties for rent. As I wrote before, that's what the ordinance in fact does, even though Section 230 is supposed to prevent local jurisdictions from enforcing laws on platforms that have this effect. Perhaps this decision may not be as obviously lethal to the Internet as the EU's passage of the Copyright Directive with Articles 11 and 13, but only because its consequences may, at the moment, be less obvious – not because they stand to be any less harmful.

Which is not to say that the court intended to herald the end of the Internet. Indeed, there is a somewhat apologetic tone throughout the decision, as if the court felt it had no choice but to reach the conclusion that it did. But there is also a tone of dismissiveness that runs through the decision as well. The court largely minimized the platforms' arguments about how the ordinance will affect them, and by ignoring the inevitable consequences thus opened the door to them, now and in the future, far beyond the facts of this particular case.

Ultimately there are (at least) two big problems with the decision. The earlier post highlighted one of them, noting how chilling it is to speech if a law effectively forces platforms to police their users' expression in order to have any hope of avoiding being held liable for it. The problem with the court's decision in this regard is that it kept incorrectly insisting [see pages 13-14, 17, 20...], over the platforms' protest, that the Santa Monica ordinance does not force them to monitor their users' expression when, in actuality, it most certainly does.

The second major problem with the decision is that the court kept trying to create an artificial distinction between imposing liability on platforms for facilitating user expression, which the court acknowledged would be prohibited by Section 230, and imposing liability on platforms for facilitating online transactions — which, per the court, Section 230 would apparently not prevent.
Salesforce Sued For Sex Trafficking... Because Backpage Used Salesforce's CRM
In the latest insane lawsuit regarding the internet and sex trafficking, a group of women who were tragic victims of sex trafficking have decided not to sue those responsible for trafficking them... but online customer relationship management (CRM) provider Salesforce.com. What? Huh? Why? You might ask. Well, apparently it's because everyone's favorite sex trafficking bogeyman, Backpage.com, used Salesforce.com for its CRM. Yup.

While most of the reports on this don't show the lawsuit, CNBC thankfully posted a copy (though it's locked up in Scribd, so we can't embed our own version, unfortunately). The lawsuit makes a bunch of leaps to argue that Salesforce is somehow magically responsible for people doing illegal things on Backpage. The levels of separation between the criminal actions and the liability here are simply ridiculous. Much of the lawsuit tries to suggest that because Salesforce is good at its job of customizing its offerings to its customers, that's proof that it's magically responsible for sex trafficking:
Daily Deal: Wave.Video
Create sharp, professional videos in minutes with Wave.Video. A streamlined content solution, Wave.Video makes it easy to produce your own marketing and social videos, even if you're not too savvy with a camera. You'll be able to create 720p HD video clips of up to 1 minute long, and enhance your projects with 300,000 royalty-free audio clips. Download to .mp4 or resize your projects to 11 popular formats for easy sharing on any social platform. A one year subscription is on sale for $9, and a lifetime subscription is on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
After Insisting That EU Copyright Directive Didn't Require Filters, France Immediately Starts Promoting Filters
For months now we've all heard the refrain: Article 13 (now Article 17) of the EU Copyright Directive would not require filters. We all knew it was untrue. We pointed out many times that it was untrue, and that there was literally no way to comply without implementing filters (filters that wouldn't work and would block legitimate speech), and were yelled at for pointing this out. Here's the MEP in charge of the Directive flat out insisting, last year, that it won't require filters:

Over and over and over again, this is what they insisted. Of course, we all knew it wasn't true, and the German government quietly admitted a few weeks ago that filters were necessary. That didn't stop the vote from happening, of course, or the Parliament questionably moving forward with this plan. Still, it's rather striking that just a day after the vote, as pointed out to us by Benjamin Henrion, France's Minister for Culture gave a speech in which he admits that the Directive requires filters and hopes that France will implement the law as quickly as possible in order to start locking down the internet. The quotes here are based on Google Translate, so they may not be perfect, but you get the idea. Incredibly, in talking about the Directive, Riester starts off by saying that it passed "despite massive campaigns of misinformation" -- which seems rather ironic, since it's now clear the misinformation came from those who insisted it didn't require filters -- because soon after that he says:
Another Study Finds Verizon's 5G Is Barely Available, Not Scaleable
We've talked a lot about how while fifth-generation (5G) wireless is a good thing (in that faster, more reliable networks are always good), it's been comically over-hyped by cellular carriers and network hardware vendors looking to justify high prices and sell network gear and handsets. It has also been accompanied by what appears to be a race between cellular carriers to broadly misrepresent what 5G is capable of, and where and when it will actually be available.

At the heart of a lot of this hype has been Verizon, which routinely insists 5G is the "fourth industrial revolution," and will almost mystically result in a universe of smart cities and smarter supporting technologies. Ironically, while saying all of this, Verizon executives publicly warn about carriers over-hyping 5G. For example, here's a Verizon blog post from last January:
Utah Senate Passes Bill That Would Lock The Government Out Of Warrantless Access To Third Party Records
Perhaps no state has unrolled and rolled up a welcome mat set out for a federal guest faster than Utah. What was once a shiny new installation with 5-10,000 jobs attached swiftly became a PR black eye after Ed Snowden exited the NSA and sprung a leak.

Suddenly, the sweetheart deal on water given to the NSA seemed like an attempt to curry favor with domestic spies, placing local politicians on the receiving end of reflected wrath from the general public. Utah's government reversed course, setting itself up as a champion of the people. An attempt was made to shut down the spy center's water supply. It never made its way into law, but the anti-panopticon tone was set. But the state is still moving forward with efforts taking on the federal government, engaged in the always-awkward grappling of The Man sticking it to The Man.

Bills forbidding state agencies from participating in domestic surveillance have been introduced elsewhere in the country. Few of these have moved forward. But the Utah legislature -- burned by its close ties with the spy agency non grata -- has proven more tenacious than most. As Molly Davis reports for Wired, the Utah government is one step away from locking the government out of warrantless access to third party records.
Stupid Law Making Assaulting Journalists A Federal Crime Revived By Congress
As an overreaction to President Trump's mostly-hyperbolic verbal attacks on the journalism profession, a few legislators from the other side of the political fence have revived their stupid idea from last year. Here's the law's author in his own words:
Netflix Asks Court To Dismiss Chooseco's Lawsuit For All The Obvious Reasons
You will recall that Chooseco LLC, the company behind the Choose Your Own Adventure books that people my age remember with such fondness, decided quite stupidly to sue Netflix over Black Mirror's audience-influenced production, Bandersnatch. The lawsuit is silly for any number of reasons, including that the whole thing rests on a character in Bandersnatch mentioning a CYOA book as the inspiration behind his fictional video game, coupled with the fact that the film (a third medium) lets viewers choose how the story progresses. How Chooseco thinks any of that legal pixelation resolves into an actual trademark or copyright violation is anyone's guess, because it most certainly does not. Storytelling mechanics are most definitely not protectable as intellectual property. On top of that, Chooseco subsequently announced its own licensed deal with Amazon for Alexa. The timing of it all sure seems to indicate that Chooseco might have wanted to send Netflix a thank-you for revitalizing interest in its products, rather than filing a lawsuit.

But since the lawsuit was filed, it was only a matter of time before Netflix tried to have it tossed.
Federal Prosecutors Recommend Paul Hansmeier Spend The Next 12 Years In Prison
Cue the Ron Paul "It's Happening!!!!" gif. The wheels of justice have been grinding away for years now, but they've finally generated several years for longtime copyright troll/supervillain Paul Hansmeier. After making a career out of extorting settlements from alleged porn-watching infringers, extorting settlements from small businesses with bogus ADA complaints, attempting to hide his wealth from his creditors (some of whom were owed money for sanctions imposed in copyright trolling cases), and otherwise putting on a one-man show entitled "Why We Hate Lawyers," Hansmeier is facing the possibility of spending the next decade in prison.

The sentencing recommendation [PDF] prepared by the prosecutors has nothing good to say about Hansmeier. In fact, the prosecutors make it clear they'd have given him even more than the 12+ years they've recommended. (h/t Virgil Abt)

Here are the numbers:
Comcast's New Rented Streaming Box Is A Flimsy Attempt To Remain Relevant
Like countless other cable giants, Comcast continues to bleed cable TV subscribers at an alarming rate. These users, tired of sky-high prices, continue to flee to more competitive streaming alternatives offering better customer service. That's not great news for Comcast, which has spent decades enjoying a stranglehold over traditional TV, thanks in part to the industry's walled gardens and monopoly over the cable box. And while cable giants could counter the streaming threat by competing on price, they instead continue to double down on ideas that don't make a whole lot of sense. Case in point: in a bid to try and keep users from "cutting the cord," Comcast last week introduced Xfinity Flex. According to the Comcast press release, this new Flex streaming box will be made available to existing Comcast broadband customers for a $5 monthly rental fee, providing access to a limited number of streaming services (sans live streaming services like Playstation Vue, SlingTV, or DirecTV Now that directly compete with Comcast's own offerings):
New York City Apartment Residents Sue Landlord Over New Smart Locks [Updated]
UPDATE: A spokesperson for Latch sends the following message, clarifying that the locks at the center of this lawsuit do not require a smartphone to open (emphasis in the original):
Nothing like rushing home to put your phone on the charger only to realize you can't get into your own apartment without a charged phone. Getting locked out of your own place: there's an app for that. Maybe the app -- and the smart lock it engages with -- works fine 99% of the time. The other 1%, however, will see you locked out, even after performing an interpretive dance with your emotionless partner, the Software Shuffle:
Daily Deal: Zoolz Cloud Storage Subscription Of 1TB Instant Vault And 1TB Of Cold Storage
Let's face it, cloud storage can get pricey no matter how good the bargain. With this Zoolz Cloud Storage Subscription of Cold Storage and Instant Vault, you'll have an extremely affordable place to safely store 1 TB of data that rarely gets revisited, as well as a home for 1 TB of data you need to access regularly. You can quickly and easily select the files you want to store with Smart Selection. The Instant Vault is drag and drop via web browser, and the Cold Storage has swift auto backup. You may download Zoolz on two machines, and it is on sale for $44.95. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Thomas Goolnik Again Convinces Google To Forget Our Story About Thomas Goolnik Getting Google To Forget Our Story About Thomas Goolnik
Remember Thomas Goolnik? Apparently, he doesn't think you should. But let's start this post off with some special notes for two specific parties, and then we'll get into some more details:
Bill To Restore Net Neutrality Moves Forward, And The Public Is Still Pissed
A new bill that would fully restore the FCC's 2015 net neutrality rules took a major step forward this week. Earlier this month Democrats introduced the Save The Internet Act, a three page bill that would do one thing: restore the 2015 net neutrality rules stripped away by Ajit Pai, as well as restore the FCC's authority over broadband providers. As we've long noted, the net neutrality repeal didn't just kill net neutrality, it gutted FCC authority over natural broadband monopolies, shoveling any remaining authority to an FTC that experts have repeatedly warned lacks the authority or resources to adequately police giants like Comcast (the entire point of the lobbyist gambit). This week the bill was marked up and approved by the House Subcommittee on Communications and Technology, though not before the telecom industry tried to shovel in some amendments to water down the bill. Those efforts didn't work, at least according to net neutrality activists, because of the attention ordinary folks kept on what would have otherwise been an ignored process if we were talking about any other tech-related markup effort:
Vigilant And Its Customers Are Lying About ICE's Access To Plate Records
Everyone's hooking up ICE with automatic license plate reader (ALPR) data. And everyone's misleading the public about it, starting with ALPR manufacturer Vigilant. The EFF has been investigating California law enforcement's data sharing claims in relation to its Vigilant ALPRs and finding their public statements are directly contradicted by internal communications obtained via public records requests. Vigilant tries to keep as much information about data sharing under wraps as possible by forcing purchasers to sign restrictive non-disclosure and non-disparagement agreements. Law enforcement agencies are secretive by default, so this allows them to double down on opacity. Vigilant has taken a hardline approach to negative press, threatening journalists with lawsuits for asking too many questions and publishing the answers they've received.
Nevada Judge Says Online News Publications Aren't Protected By The State's Journalist Shield Law
The internet has upended journalism. It's no longer limited to long-established press outlets known for printing physical newspapers and periodicals. It can be performed by anyone, using a vast amount of resources, including search engines, public records requests, and the occasional application of shoe leather. The First Amendment provides protection to these endeavors. Except when it doesn't. Well-meaning legislators seeking to protect journalists use older definitions of journalism to exclude bloggers and freelancers. Some judges make the same mistake as well, deciding the word "journalist" only covers people trafficking in ink and paper, rather than bits and pixels. This older definition was in play in a recent decision handed down by a Nevada judge. Rather than recognize that the intent of Nevada's shield law is to protect journalists, Judge James Wilson decided the law only protects a narrow subset of those practicing the art of journalism.