Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2026-01-16 08:17
China Exporting Its Surveillance Tech And Philosophy To Other Countries, Helped By Equipment Donations
It will probably come as zero surprise to Techdirt readers to learn the following:
Portland Surrenders To Old Town Brewing Over Stag Sign Trademark
For some time, we've been following an odd trademark dispute between the city of Portland and a small brewery, Old Town Brewing, all over a famous city sign featuring a leaping stag. Old Town has a trademark for the image of the sign and uses that imagery for its business and beer labels. Portland, strangely, has pursued a trademark for that very same imagery and has attempted to invalidate Old Town's mark for the purpose of licensing the image to macro-breweries to fill the municipal coffers. What I'm sure city officials thought would be the quiet bullying of a local company without the breadth of legal resources Portland has at its disposal has instead ballooned into national coverage of that very same fuckery, with local industry groups rushing to the brewery's aid.

The end result of all of this has been several months of Portland officials looking comically bad in the eyes of the public. Of all places, the people of Portland were never going to sit by and let their city run roughshod over a local microbrewery just so that the Budweisers of the world could plaster local iconography over thin, metal cans of pilsner. And now, despite sticking their chins out in response to all of this backlash over these past few months, it seems that the city has finally decided to cave in.
James Woods Saved By A Question Mark, But Still A Total Hypocrite
Karma works in funny ways sometimes. Over the past few years, we covered how actor James Woods filed a totally ridiculous defamation lawsuit against an anonymous internet troll who made some hyperbolic statements about Woods -- statements that were little different than what Woods had said about others. The case never went anywhere... because the defendant died. But Woods gloated over the guy's death, which just confirmed what a horrible, horrible person Woods appears to be.

So, while we found the karmic retribution of someone else then suing Woods for defamation on similarly flimsy claims noteworthy, we still pointed out just how weak the case was and noted that, as much of an asshole as Woods was in his case against his internet troll, he still deserved to prevail in the case against him. And prevail he has. The case has been tossed out on summary judgment. The opinion also details Woods continuing to pull the assholish move of trying to avoid being served (his lawyers refused to give an address where he could be served and Woods refused to have his lawyer waive service requirements -- which is usually a formality in these kinds of things). Not surprisingly, the judge is not impressed by Woods hiding out from the process server:
We Need To Shine A Light On Private Online Censorship
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.

In the wake of ongoing concerns about online harassment and harmful content, continued terrorist threats, changing hate speech laws, and the ever-growing user bases of major social media platforms, tech companies are under more pressure than ever before with respect to how they treat content on their platforms—and often that pressure is coming from different directions. Companies are being pushed hard by governments and many users to be more aggressive in their moderation of content, to remove more content and to remove it faster, yet are also consistently coming under fire for taking down too much content or lacking adequate transparency and accountability around their censorship measures. Some on the right, like Steve Bannon and FCC Chairman Ajit Pai, have complained that social media platforms are pushing a liberal agenda via their content moderation efforts, while others on the left are calling for those same platforms to take down more extremist speech, and free expression advocates are deeply concerned that companies' content rules are so broad as to impact legitimate, valuable speech, or that overzealous attempts to enforce those rules are accidentally causing collateral damage to wholly unobjectionable speech.

Meanwhile, there is a lot of confusion about what exactly the companies are doing with respect to content moderation. The few publicly available insights into these processes, mostly from leaked internal documents, reveal bizarrely idiosyncratic rule sets that could benefit from greater transparency and scrutiny, especially to guard against discriminatory impacts on oft-marginalized communities. The question of how to address that need for transparency, however, is difficult. There is a clear need for hard data about specific company practices and policies on content moderation, but what does that look like? What qualitative and quantitative data would be most valuable? What numbers should be reported? And what is the most accessible and meaningful way to report this information?

Part of the answer to these questions can be found by looking to the growing field of transparency reporting by internet companies. The most common kind of transparency report that companies voluntarily publish gives detailed numbers about government demands for information about the companies' users—showing, for example, how many requests were received, from what countries or jurisdictions, what kind of data was requested, and whether they were complied with or not. As reflected in this history of the practice published by our organization, New America's Open Technology Institute (OTI), transparency reporting about government demands for data has exploded over the past few years, so much so that projects like the Transparency Reporting Toolkit by OTI and Harvard's Berkman-Klein Center for Internet & Society have emerged to try to define consistent standards and best practices for such reporting.
Meanwhile, a decent number of companies have also started publishing reports about the legal demands they receive for the takedown of content, whether copyright-based or otherwise.

However, almost no one is publishing data about what we're talking about here: voluntary takedowns of content by companies based on their own terms of service (TOS). Yet especially now, as private censorship gets even more aggressive, the need for transparency also increases. This need has led to calls from a variety of corners for companies to report on content moderation. For example, a working group of the Freedom Online Coalition, composed of representatives from industry, civil society, academia, and government, called for meaningful transparency about companies' content takedown efforts, complaining that "there is very little transparency" around TOS enforcement mechanisms. The 2015 Ranking Digital Rights Corporate Accountability Index found that every company surveyed received a failing grade with respect to reporting on TOS-based takedowns; the 2017 Index findings fared only slightly better. Finally, David Kaye, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, called for companies to "disclose their policies and actions that implicate freedom of expression." Specifically, he observed that "there are … gaps in corporate disclosure of statistics concerning volume, frequency and types of request for content removals and user data, whether because of State-imposed restrictions or internal policy decisions."

The benefits to companies issuing such transparency reports around their content moderation activities would be significant: For those companies under pressure to "do something" about problematic speech online, this is an opportunity to outline the lengths to which they have gone to do just that; for companies under fire for "not doing enough," a transparency report would help them convey the size and complexity of the problems they are addressing, and explain that there is no magic artificial intelligence wand they can wave to make online extremism and harassment disappear; and finally, public disclosure about content moderation and terms of service practices will go a long way toward building trust with users—a trust that has crumbled in recent years. Putting aside the benefit to companies, though, there is the even more significant need of policymakers and the public. Before we can have an intelligent conversation about hate speech, terrorist propaganda, or other worrisome content online, or formulate fact-based policies about how to address that content, we need hard data about the breadth and depth of those problems, and about the platforms' current efforts to solve them.

While there have been calls for publication of such information, there has been little specificity with respect to what exactly should be published. No doubt this is due, in great part, to the opacity of individual companies' content moderation policies and processes: It is difficult to identify specific data that would be useful without knowing what data is available in the first place. Anecdotes and snippets of information from companies like Automattic and Twitter offer a starting point for considering what information would be most meaningful and valuable. Facebook has said it is entering a new era of transparency for the platform.
Twitter has published some data about content removed for violating its TOS, Google followed suit for some of the content removed from YouTube, and Microsoft has published data on "revenge porn" removals. While each of these examples is a step in the right direction, what we need is a consistent push across the sector for clear and comprehensive reporting on TOS-based takedowns.

Looking to the example of existing reports about legally-mandated takedowns, data that shows the scope and volume of content removals, account removals, and other forms of account or content interference/flagging would be a logical starting point. Information about content that has been flagged for removal by a government actor—such as the U.K.'s Counter Terrorism Internet Referral Unit, which was granted "super flagger" status on YouTube, allowing the agency to flag content in bulk—should also be included, to guard against undue government pressure to censor. More granular information, such as the number of takedowns in particular categories of content (whether sexual content, harassment, extremist speech, etc.), or specification of the particular term of service violated by each piece of taken-down content, would provide even more meaningful transparency. This kind of quantitative data (i.e., numbers and percentages) would be valuable on its own, but would be even more helpful if paired with qualitative data to shed more light on the platforms' opaque content moderation practices and tell users a clear story about how those processes actually work, using compelling anecdotes and examples.

As has often happened already with existing transparency reports, this data will help keep companies accountable. Few companies will want to demonstrably be the most or least aggressive censor, and anomalous data such as huge spikes around particular types of content will be called out and questioned by one stakeholder group or another. It will also help ensure that overreaching government pressure to take down more content is recognized and pushed back on, just as current reporting has helped identify and put pressure on countries making outsized demands for users' information. And most importantly, it will help drive policy proposals that are based on facts and figures rather than on emotional pleas or irrational fears—policies that hopefully will help make the internet a safer space for a range of communities while also better protecting free expression.

Unquestionably, the major platforms have become our biggest online gatekeepers when it comes to what we can and cannot say. Whether we want them to have that power or not, and whether we want them to use more or less of that power in regard to this or that type of speech, are questions we simply cannot answer until we have a complete picture of how they are using that power. Transparency reporting is our first and best tool for gaining that insight.

Kevin Bankston is the Director of the Open Technology Institute at New America. Liz Woolery is Senior Policy Analyst at the Open Technology Institute at New America.
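To make the quantitative piece concrete, here is a toy sketch (in Python) of the kind of TOS-takedown tally the essay calls for: removals counted by policy category, with percentages, and government-flagged items broken out. The record layout and category names are invented for illustration, not any platform's real schema.

# Toy sketch of a quantitative TOS-takedown report: counts and
# percentages by policy category. Field names are hypothetical.
from collections import Counter

takedowns = [
    {"category": "harassment", "flagged_by": "user"},
    {"category": "extremist speech", "flagged_by": "government"},
    {"category": "harassment", "flagged_by": "user"},
    {"category": "sexual content", "flagged_by": "automated"},
]

by_category = Counter(t["category"] for t in takedowns)
total = sum(by_category.values())
for category, count in by_category.most_common():
    print(f"{category}: {count} removals ({100 * count / total:.0f}%)")

# Breaking out flags that came from government actors (e.g., "super
# flagger" programs) would surface undue state pressure to censor:
gov_flags = sum(1 for t in takedowns if t["flagged_by"] == "government")
print(f"government-flagged: {gov_flags} of {total} removals")

Even a tally this simple would answer several of the questions the essay raises; the hard part, as the authors note, is agreeing on consistent categories across the sector.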
ICE Finally Gets The Nationwide License Plate Database It's Spent Years Asking For
ICE is finally getting that nationwide license plate reader database it's been lusting after for several years. The DHS announced plans for a nationwide database in 2014, but decided to rein that idea in after a bit of backlash. The post-Snowden political climate made many domestic mass surveillance plans untenable, if not completely unpalatable.

Times have changed. The new team in the White House doesn't care how much domestic surveillance it engages in as long as it might aid in rooting out foreign immigrants. The first move was the DHS's updated Privacy Impact Assessment on license plate readers -- delivered late last year -- which came to the conclusion that any privacy violations were minimal compared to the national security net benefits. The last step has been finalized, as Russell Brandom reports for The Verge.
Fighting The Future: Teamsters Demand UPS Ban Drones And Autonomous Vehicles
As I've occasionally mentioned in the past, my undergraduate studies were in (of all things) "industrial and labor relations," which involved many, many courses of study on the history of unions, collective bargaining and the economics around such things. I tend to have a fairly nuanced view of unionizing that I won't get into here, other than to note that a big part of the reason unions get a bad name is when they take indefensible positions that they think will "protect" their members, but which are actually suicidal in the long term. This is one of those stories. Reports are coming out that as the Teamsters are entering negotiations on a new contract with shipping giant UPS, their demands include a ban on both drone deliveries and on the use of autonomous vehicles. These are, not surprisingly, both technologies that UPS has been experimenting with lately (as has nearly every other delivery company).

You can understand the short-term thinking here, of course: UPS drivers see both of those options as potential "competition" that would decrease the number of drivers and potentially cause many to lose their jobs. And that might be true (though it also might not be true, as we'll discuss below). But, at the very least, demanding that the company that employs you directly choose not to invest in the technologies of the future is demanding that the company commit suicide -- in which case all those jobs for drivers would likely be eliminated anyway. While there are obviously a lot more variables at work here, it's not hard to see how a competing delivery company -- whether Fedex, the US Postal Service, Amazon or someone else entirely -- could get drone/driverless car delivery right, leaving UPS's service looking slower, more expensive and less efficient in many cases. If that's the case, UPS would likely have to lay off tons of workers anyway.

The other key point: the idea that these technologies are simply going to destroy all the jobs is almost certainly highly overstated. They very likely will change the nature of jobs, but not eliminate them. Professor James Bessen has been doing lots of research on this for years, and has found that in areas of heavy automation, jobs often increase (though they may be changed). That links to an academic paper he wrote, but he also wrote a piece aimed at a general audience for the Atlantic on what he calls the automation paradox. As Bessen explains:
Daily Deal: FresheTech Splash Tunes Bluetooth Shower Speaker
Nothing makes a shower or bath experience complete like your favorite podcast or playlist. With the waterproof FresheTech Splash Tunes Bluetooth Shower Speaker, you can hit play, skip songs, adjust the volume, take phone calls, and more. Just suction-cup it to any surface and you'll always have your tunes within arm's reach. It's on sale for $19.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
My Question To Deputy Attorney General Rod Rosenstein On Encryption Backdoors
Never mind all the other reasons Deputy Attorney General Rod Rosenstein's name has been in the news lately... this post is about his comments at the State of the Net conference in DC on Monday. In particular: his comments on encryption backdoors.

As he and so many other government officials have before, he continued to press for encryption backdoors, as if it were possible to have a backdoor and a functioning encryption system. He allowed that the government would not itself need to have the backdoor key; it could simply be a company holding onto it, he said, as if this qualification would lay all concerns to rest.

But it does not, and so near the end of his talk I asked the question, "What is a company to do if it suffers a data breach and the only thing compromised is the encryption key it was holding onto?"

There were several concerns reflected in this question. One relates to what the poor company is to do. It's bad enough when a company experiences a data breach and user information is compromised. Not only does a data breach undermine a company's relationship with its users, but, recognizing how serious this problem is, authorities are increasingly developing policy instructing companies on how they are to respond to such a situation, and it can expose the company to significant legal liability if it does not comport with these requirements.

But if an encryption key is taken, it is much more than basic user information, financial details, or even the pool of potentially rich and varied data related to the user's interactions with the company that is at risk. Rather, it is every single bit of information the user has ever depended on the encryption system to secure that stands to be compromised. What is the appropriate response of a company whose data breach has now stripped its users of all the protection they depended on for all this data? How can it even begin to try to mitigate the resulting harm? Just what would government officials, who required the company to keep this backdoor key, now propose it do? Particularly if the government is going to force companies to hold onto these keys, companies will need these answers if they are to be able to afford to be in the encryption business at all.

Which leads to the other idea I was hoping the question would capture: that encryption policy and cybersecurity policy are not two distinct subjects. They interrelate. So when government officials worry about what bad actors do, as Rosenstein's comments reflected, it can't lead to the reflexive demand that encryption be weakened simply because, as they reason, bad actors use encryption. Not when the same officials are also worried about bad actors breaching systems, because this sort of weakened encryption so significantly raises the cost of these breaches (as well as potentially making them easier).

Unfortunately Rosenstein had no good answer. There was lots of equivocation punctuated with the assertion that experts had assured him that it was feasible to create backdoors and keep them safe. Time ran out before anyone could ask the follow-up question of exactly who these mysterious experts giving him this assurance were, especially in light of so many other experts agreeing that such a solution is not possible, but perhaps this answer is something Senator Wyden can find out...
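The core of the question is easy to see in miniature. Below is a minimal sketch in Python (using the `cryptography` package's Fernet recipe) of a company escrowing a single symmetric key to satisfy a backdoor mandate. This is an illustrative assumption, not how any real messaging service is built, but it shows why a breach of only that key forfeits everything it ever protected.

# Minimal sketch: one escrowed symmetric key protecting a user's history.
# Illustrative only -- real systems use per-session keys, key rotation,
# asymmetric exchange, etc., which is what escrow mandates tend to undercut.
from cryptography.fernet import Fernet

# The company generates a key and, under a backdoor mandate, keeps a copy.
escrowed_key = Fernet.generate_key()
cipher = Fernet(escrowed_key)

# Every message the user ever sends is secured with that same key.
history = [cipher.encrypt(m) for m in (b"msg 1", b"msg 2", b"msg 3")]

# A breach that exposes nothing but the escrowed key exposes it all:
# every past (and future) ciphertext decrypts at once.
attacker = Fernet(escrowed_key)
print([attacker.decrypt(c) for c in history])

Forward secrecy (rotating or discarding keys so that no single compromise unlocks the past) is precisely the property a held-in-escrow key gives up, which is why the breach question has no comfortable answer.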
The Same FCC That Ignored Science To Kill Net Neutrality Has Created An 'Office Of Economics & Analysis'
You'll recall that the FCC ignored the public, the people who built the internet, and all objective data as it rushed to repeal net neutrality at Verizon, Comcast and AT&T's behest. Things got so absurd during the proceeding that the FCC at one point was directing reporters who had questions regarding the FCC's shaky justifications to telecom industry lobbyists, who were more than happy to molest data until it "proved" FCC assertions on this front (most notably the false claim that net neutrality killed sector investment):
UK Appeals Court Says GCHQ's Mass Collection Of Internet Communications Is Illegal
The UK's mass surveillance programs haven't been treated kindly by the passing years (2013 onward). Ever since Snowden began dumping details on GCHQ surveillance, legal challenges to the lawfulness of UK bulk surveillance have been flying into courtrooms. More amazingly, they've been coming out the other side victorious.

In 2015, a UK tribunal ruled GCHQ had conducted illegal surveillance and ordered it to destroy intercepted communications between detainees and their legal reps. In 2016, the UK tribunal declared GCHQ's bulk collection of communications metadata illegal. However, the tribunal did not order destruction of this collection, meaning GCHQ is likely still making use of illegally-collected metadata.

A second loss in 2016 -- this time at the hands of the EU Court of Justice -- saw GCHQ's collection of European communications declared illegal due to the "indiscriminate" (untargeted) nature of the collection process. The UK government appealed this decision, taking the ball back to its home court. And, again, it has been denied a victory.
EU's Highest Court Says Privacy Activist Can Litigate Against Facebook In Austria, But Not As Part Of A Class Action
Last November we reported on the legal opinion of one of the Advocates General who advise the EU's top court, the Court of Justice of the European Union (CJEU). It concerned yet another case brought by the data protection activist and lawyer Max Schrems against Facebook, which he claims does not follow EU privacy laws properly. There were two issues: whether Schrems could litigate against Facebook in his home country, Austria, and whether he could join with 25,000 people to bring a class action against the company. The Advocate General said "yes" to the first, and "no" to the second, and in its definitive ruling, the CJEU has agreed with both of those views (pdf). Here's what Schrems has to say on the judgment (pdf):
Minnesota Supreme Court Says Unlocking A Phone With A Fingerprint Isn't A Fifth Amendment Issue
When it comes to the Fifth Amendment, you're better off with a password or PIN securing your device, rather than your fingerprint. Cellphone manufacturers introduced fingerprint readers in an effort to protect users from thieves or other unauthorized access. But a fingerprint reader does nothing at all to prevent law enforcement from using a suspect's fingerprints to unlock seized devices.

The US Supreme Court hasn't seen a case involving compelled production of fingerprints land on its desk yet, and there's very little in the way of federal court decisions to provide guidance. What we have to work with is scattered state court decisions and the implicit understanding that, no matter how judges rule, a refusal to turn over a fingerprint or a password is little more than a way to add years to an eventual sentence.

The Minnesota Supreme Court has issued the final word on fingerprints and the Fifth Amendment for state residents. In upholding the appeals court ruling, the Supreme Court says a fingerprint isn't testimonial, even if it results in the production of evidence used against the defendant. (h/t FourthAmendment.com)

From the ruling [PDF]:
Techdirt Podcast Episode 152: Free Speech & The Marketplace Of Ideas
Last week, Mike sparked lots of conversation with his post about rethinking the marketplace of ideas without losing sight of the importance of the fundamental principles of free speech. Naturally, there's plenty more to discuss on that topic, so this week we're joined by Buzzfeed general counsel Nabiha Syed — whose recent article in the Yale Law Journal, Real Talk About Fake News, offered a thorough and insightful look at free speech online — to try to cut through all the simplistic takes on free speech and talk about where things are going.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Why The History Of Content Moderation Matters
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.

The first few years of the 21st century saw the start of a number of companies whose model of making user-generated content easily amplified and distributable continues to resonate today. Facebook was founded in 2004, YouTube began in 2005 and Twitter became an overnight sensation in 2006. In their short history, countless books (and movies and plays) have been devoted to the rapid rise of these companies; their impact on global commerce, politics and culture; and their financial structure and corporate governance. But as Eric Goldman points out in his essay for this conference, surprisingly little has been revealed about how these sites manage and moderate the user-generated content that is the foundation for their success.

Transparency around the mechanics of content moderation is one part of understanding what exactly is happening when sites decide to keep up or take down certain types of content in keeping with their community standards or terms of service. How does material get flagged? What happens to it once it's reported? How is content reviewed and who reviews it? What does takedown look like? Who supervises the moderators?

But more important than understanding the intricacies of the system is understanding the history of how it was developed. This gives us not only important context for the mechanics of content moderation, but a more comprehensive idea of how policy was created in the first place, so as to know how best to change it in the future.

At each company, there were various leaders who were charged with developing the content moderation policies of the site. At YouTube (Google) this was Nicole Wong. At Facebook, this was Jud Hoffman and Dave and Charlotte Willner. Though it seems basic now, the development of content moderation policies was not a foregone conclusion. Early on, many new Internet corporations thought of themselves as software companies—they did not think about "the lingering effects of speech as part of what they were doing."

As Jeff Rosen wrote in one of the first accounts of content moderation's history, while "the Web might seem like a free-speech panacea: it has given anyone with Internet access the potential to reach a global audience. But though technology enthusiasts often celebrate the raucous explosion of Web speech, there is less focus on how the Internet is actually regulated, and by whom. As more and more speech migrates online, to blogs and social-networking sites and the like, the ultimate power to decide who has an opportunity to be heard, and what we may say, lies increasingly with Internet service providers, search engines and other Internet companies like Google, Yahoo, AOL, Facebook and even eBay."

Wong, Hoffman and the Willners all provide histories of the hard questions each corporation dealt with related to speech. For instance, many problems existed simply because flagged content lacked the context necessary to apply a given rule. This was often the case with online bullying. As Hoffman described, "There is a traditional definition of bullying—a difference in social power between two people, a history of contact—there are elements.
But when you get a report of bullying, you just don't know. You have no access to those things. So you have to decide whether you're going to assume the existence of some of those things or assume away the existence of some of those things. Ultimately what we generally decided on was, 'if you tell us that this is about you and you don't like it, and you're a private individual not a public figure, we'll take it down.' Because we can't know whether all these other things happened, and we still have to make those calls. But I'm positive that people were using that function to game the system. . . I just don't know if we made the right call or the wrong call or at what time."

Wong came up against similar problems at Google. In June 2009, a video of a dying Iranian Green Movement protestor shot in the chest and bleeding from the eyes was removed from YouTube as overly graphic and then reposted because of its political significance. YouTube's policies and internal guidelines on violence were altered to allow for the exception. Similarly, in 2007, a YouTube video of a man being brutally beaten by four men in a cell was removed for violence, but restored by Wong and her team after journalists contacted Google to explain that the video was posted by Egyptian human rights activist Wael Abbas to inform the international community of human rights violations by the police in Egypt.

What the stories of Wong and Hoffman reveal is that much of the policy, and the enforcement of that policy, developed in an ad hoc way at each company. Taking down breastfeeding photos was a fine rule, until it wasn't. Removing an historic photo of a young girl running naked in Vietnam following a napalm attack was acceptable for years, until it was a mistake. A rule worked until it didn't.

Much of the frustration that gets expressed towards Facebook, Twitter, and YouTube seems to build itself off a fundamentally flawed premise: that online speech platforms have had one seminal moment in their history where they established a fundamental set of values that would guide their platform. Instead, most of these content moderation policies were developed piecemeal, through long, hard deliberations about the rules to put in place. There was no "Constitutional Convention" moment at these companies; decisions were made reactively in response to signals that reached companies through media pressure, civil society groups, government, or individual users. Without a signal, these platforms couldn't develop, change or "fix" their policy.

Of course, it's necessary to point out that even when these platforms have been made aware of a problematic content moderation policy, they don't always modify their policies, even when they say they will. That's a huge problem -- especially as these sites become an increasingly essential part of our modern public square. But learning the history of these policies, alongside the systems that enforce them, is a crucial part of advocating effectively for change. At least for now, and for the foreseeable future, online speech is in the hands of private corporations. Understanding how to communicate the right signals through the noise will continue to be incredibly useful.

Kate Klonick is a PhD in Law candidate and a Resident Fellow at the Information Society Project at Yale.
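Hoffman's bullying rule, quoted above, is a compact example of how ad hoc policy gets operationalized: the classic elements of bullying (power imbalance, history of contact) are invisible in a flag, so the rule substitutes tests a moderator can actually check. A toy rendering in Python, with invented field names, might look like this:

# Toy rendering of the bullying rule Hoffman describes. The report fields
# are hypothetical; the point is that unknowable context is replaced by
# facts a reviewer can verify from the report itself.
def should_remove_bullying_report(report: dict) -> bool:
    return (
        report["reported_by_subject"]               # "this is about you"
        and report["subject_objects"]               # "and you don't like it"
        and not report["subject_is_public_figure"]  # private individuals only
    )

print(should_remove_bullying_report({
    "reported_by_subject": True,
    "subject_objects": True,
    "subject_is_public_figure": False,
}))  # True -- and, as Hoffman admits, a rule this checkable is also gameable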
Everything That's Wrong With Social Media And Big Internet Companies: Part 2
Late last year I published Part 1 of a project to map out all the complaints we hear about social media in particular and about internet companies generally. Now, here's Part 2.

This Part should have come earlier; Part 1 was published in November. I'd hubristically imagined that this was a project that might take a week or a month. But I didn't take into account the speed with which the landscape of the criticism is changing. For example, just as you're trying to do more research into whether Google really is making us dumber, another pundit (Farhad Manjoo at the New York Times) comes along and argues that Apple -- a tech giant no less driven by commercial motives than Google and its parent company, Alphabet -- ought to redesign its products to make us smarter (by making them less addictive). That is, it's Apple's job to save us from Gmail, Facebook, Twitter, Instagram, and other attention-demanding internet media — which we connect to through Apple's products, as well as many others.

In these same few weeks, Facebook has announced it's retooling the user experience for Facebook users in ways aimed at making the experience more personal and interactive and less passive. Is this an implicit admission that Facebook, up until now, has been bad for us? If so, is it responding to the charges that many observers have leveled at social-media companies — that they're bad for us and that they're bad for democracy?

And only this last week, social-media companies have responded to concerns about political extremists (foreign and domestic) in Senate testimony. Although the senators had broad concerns (ISIS recruitment, bomb-making information on YouTube), there was, of course, some allocation of time to the ever-present question of Russian "misinformation campaigns," which may not have altered the outcome of 2016's elections but still may aim to affect 2018 mid-terms and beyond.

These are recent developments, but coloring them all is a more generalized social anxiety about social media and big internet companies that is nowhere better summarized than in Senator Al Franken's last major public policy address. Whatever you think of Senator Franken's tenure, I think his speech was a useful accumulation of the growing sentiment among commentators that there's something out of control with social media and internet companies that needs to be brought back into control.

Now, let's be clear: even if I'm skeptical here about some claims that social media and internet giants are bad for us, that doesn't mean these criticisms necessarily lack any merit at all. But it's always worth remembering that, historically, every new mass medium (and mass-medium platform) has been declared first to be wonderful for us, and then to be terrible for us. So it's always important to ask whether any particular claim about the harms of social media or internet companies is reactive, reflexive... or whether it's grounded in hard facts.

Here are reasons 4, 5, and 6 to believe social media are bad for us. (Remember, reasons 1, 2, and 3 are here.)

(4) Social media (and maybe some other internet services) are bad for us because they're super-addictive, especially on our sweet, slick handheld devices.

"It's Time for Apple to Build a Less Addictive iPhone," according to New York Times tech columnist Farhad Manjoo, who published a column to that effect recently.
To be sure, although "Addictive" is in the headline, Manjoo is careful to say upfront that, although iPhone use may leave you feeling "enslaved," it's "not Apple's fault" and it "isn't the same as [the addictiveness] of drugs or alcohol." Manjoo's column was inspired by an open letter from an ad-hoc advocacy group that included an investment-management firm and the California State Teachers Retirement System (both of which are Apple shareholders). The letter, available here at ThinkDifferentlyAboutKids.com (behind an irritating agree-to-these-terms dialog), calls for Apple to add more parental-control choices for its iPhones (and other internet-connected devices, one infers). After consulting with experts, the letter's signatories argue, "we note that Apple's current limited set of parental controls in fact dictate a more binary, all or nothing approach, with parental options limited largely to shutting down or allowing full access to various tools and functions." Per the letter's authors: "we have reviewed the evidence and we believe there is a clear need for Apple to offer parents more choices and tools to help them ensure that young consumers are using your products in an optimal manner."

Why Apple in particular? Obviously, the fact that two of the signatories own a couple of billion dollars' worth of Apple stock explains this choice to some extent. But one hard fact is that Apple's share of the smartphone market mostly stays in the 12-to-20-percent range. (Market leader Samsung has held 20-30 percent of the market since 2012.) Still, the implicit argument is that Apple's software and hardware designs for the iPhone will mostly lead the way for other phone-makers going forward, as they mostly have for the first decade of the iPhone era.

Still, why should Apple want to do this? The idea here is that Apple's primarily a hardware-and-devices company — which distinguishes Apple from Google, Facebook, Amazon, and Twitter, all of which primarily deliver an internet-based service. Of course, Apple's an internet company too (iTunes, Apple TV, iCloud, and so on), but the company's not hooked on the advertising revenue streams that are the primary fuel for Google, Facebook, and Twitter, or on the sales of other, non-digital merchandise (like Amazon). The ad revenue for the internet-service companies creates what Manjoo argues are "misaligned incentives" — when ad-driven businesses' economic interests lie in getting more users clicking on advertisements, he reasons, he's "skeptical" that (for example) Facebook is going to offer any real solution to the "addiction" problem. Ultimately, Manjoo agrees with the ThinkDifferentlyAboutKids letter -- Apple's in the best position to fix iPhone "addiction" because of its design leadership and independence from ad revenue.

Even so, Apple has other incentives to make iPhones addictive — notably, pleasing its other investors. Still, investors may ultimately be persuaded that Apple-led fixes will spearhead improvements, rooted in our devices, of our social-media experience. (See, for example, this column: Why Investors May Be the Next to Join the Backlash Against Big Tech's Power.)

It's worth remembering that the idea that technology is addictive is itself an addictive idea — not that long ago, it was widely (although not universally) believed that television was addictive. This New York Times story from 1990 advances that argument, although the reporter does quote a psychiatrist who cautions that "the broad definition" of addiction "is still under debate."
(Manjoo's "less addictive iPhone" column inoculates itself, you'll recall, by saying iPhone addiction is "not the same.")

"Addiction" of course is an attractive metaphor, and certainly those of us who like using our electronics to stay connected can see the appeal of the metaphor. And Apple, which historically has been super-aware of the degree to which its products are attractive to minors, may conclude—or already have concluded, as the ThinkDifferentlyAboutKids folks admit—that more parental controls are a fine idea.

But is it possible that smartphones maybe already incorporate a solution for addictiveness? Just the week before Manjoo's column, another Times writer, Nellie Bowles, asked whether we can make our phones less addictive just by playing with the settings. (The headline? "Is the Answer to Phone Addiction a Worse Phone?") Bowles argues, based on interviews with researchers, that simply setting your phone to use grayscale instead of color inclines users to respond less emotionally and impulsively—in other words, more mindfully—when deciding whether to respond to their phones. Bowles says she's trying the experiment herself: "I've gone gray, and it's great."

At first it seems odd to focus on the device's user interface (parental settings, or color palette) if the real problem of addictiveness is internet content (social media, YouTube and other video, news updates, messages). One can imagine a Times columnist in 1962—in the opening years of widespread color TV—responding to Newt Minow's famous "vast wasteland" speech by arguing that TV-set manufacturers should redesign sets so that they're somewhat more inconvenient—no remote controls, say—and less colorful to watch. (So much for NBC's iconic Peacock opening logo.)

In the interests of science, I'm experimenting with some of these solutions myself. For years already I've configured my iDevices not to bug me with every Facebook and Twitter update or new-email notice. Plus, I was worried about this grayscale thing on my iPhone X—one of the major features of which is a fantastic camera. But it turns out that you can toggle between grayscale and color easily once you've set gray as the default. I kind of like the novelty of all-gray—no addiction-withdrawal syndrome yet, but we'll see how that goes.

(5) Social media are bad for us because they make us feel bad, alienating us from one another and causing us to be upset much of the time.

Manjoo says he's skeptical whether Facebook is going to fix the addictiveness of its content and interactions with users, thanks to those "misaligned incentives." It should be said of course that Facebook's incentives—to use its free services to create an audience for paying advertisers—at least have the benefit of being straightforward. (Apple's not dependent on ads, but it still wants new products to be attractive enough for users to want to upgrade.) Still, Facebook's Mark Zuckerberg has announced that the company is redesigning Facebook's user experience (focusing first on its news feed) to emphasize quality time ("time well spent") over more "passive" consumption of the Facebook ads and video that may generate more hits for some advertisers. Zuckerberg maintains that Facebook, even as it has operated over the last decade-plus of general public access, has been good for many and maybe for most users:
Daily Deal: The Project Management Professional Certification Training Bundle
The Project Management Professional Certification Training Bundle features 10 courses designed to get you up and running as a project manager. You'll prepare for certification exams by learning the fundamental knowledge, terminology, and processes of effective project management. Various methods of project management are covered as well, including Six Sigma, Risk Management, PRINCE2 and more. The bundle is on sale for $49.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Unintended Consequences Of EU's New Internet Privacy Rules: Facebook Won't Use AI To Catch Suicidal Users
We've written a few times about the GDPR -- the EU's General Data Protection Regulation -- which was approved two years ago and is set to go into force on May 25th of this year. There are many things in there that are good to see -- in large part improving transparency around what some companies do with all your data, and giving end users some more control over that data. Indeed, we're curious to see how the inevitable lawsuits play out and if it will lead companies to be more considerate in how they handle data.

However, we've also noted, repeatedly, our concerns about the wider impact of the GDPR, which appears to go way too far in some areas, in which decisions were made that may have made sense in a vacuum, but where they could have massive unintended consequences. We've already discussed how the GDPR's codification of the "Right to be Forgotten" is likely to lead to mass censorship in the EU (and possibly around the globe). That fear remains.

But it's also becoming clear that some potentially useful innovation may not be able to work under the GDPR. A recent NY Times article that details how various big tech companies are preparing for the GDPR has a throwaway paragraph in the middle that highlights an example of this potential overreach. Specifically, Facebook is using AI to try to detect when someone is planning to harm themselves... but it won't launch that feature in the EU out of a fear that it would breach the GDPR as it pertains to "medical" information. Really.
FCC 'Broadband Advisory Panel' Faces Accusations Of Cronyism
Last year we noted how the FCC had been hyping the creation of a new "Broadband Deployment Advisory Panel" purportedly tasked with coming up with solutions to the nation's broadband problem. Unfortunately, reports just as quickly began to circulate that this panel was little more than a who's who of entrenched telecom operators with a vested interest in protecting the status quo. What's more, the panel featured few representatives from the countless towns and cities that have been forced to build their own broadband networks in the wake of telecom sector dysfunction.

One report showed how 28 of the 30 representatives on the panel had some direct financial ties to the telecom sector, though many attempted to obfuscate this connection via their work for industry-funded think tanks.

You'll recall that FCC boss Ajit Pai consistently insists he's breathlessly dedicated to closing the digital divide, despite the fact his policies (like killing net neutrality or protecting business broadband monopolies) will indisputably make the problem worse. Regardless, Pai has spent the last few weeks insisting in speeches like this one (pdf) that his advisory council is the centerpiece of his efforts to close the digital divide:
First Amendment Lawsuit Results In Louisiana Police Department Training Officers To Respect Citizens With Cameras
Another police department has "learned" it has to respect the First Amendment rights of citizens. A settlement obtained by the ACLU as the result of a civil rights lawsuit will result in additional training that surely should be redundant at this point in time.
Salt Lake Comic Con Files For A New Trial And Seeks Round 2
In the wake of San Diego Comic-Con winning its years-long lawsuit against Salt Lake Comic Con over its trademark on the term "comic-con", much of the media coverage was somewhat apocalyptic as to what the consequences would be for cons across the country. Despite the payout for winning the suit being a paltry $20k, more focus was put on just how other cons would react. The early returns are mixed, with some proactively undergoing name-changes to avoid litigation and others staying stalwart. The point we have made all along is that this win for SDCC was not some ultimate final act on the matter.

And, as many predicted, it appears that win wasn't even the final act with regard to its SLCC foe, as the Utah-based con has filed for a new trial.
Senators Demand Investigation Of Intelligence Community's Refusal To Implement Whistleblower Protections
When the Snowden leaks dropped, plenty of people rushed to criticize his actions, saying he should have brought his concerns to officials via the proper channels. Those channels were always assumed to be mostly worthless, and the intervening four years have proven nothing shoots messengers faster than the "proper channels." Despite periodic legislative attempts to institute better whistleblower protections, working within the system rarely produces positive changes. It does, however, subject the whistleblower to plenty of retaliation.

This sad fact is personified by Dan Meyer -- the former official whistleblower channel for the Intelligence Community. Meyer blew the whistle himself, pointing out wrongdoing by top IC officials. Now, he's being forced out of office, clearing the path for the IC's attempt to rebrand whistleblowers as "insider threats." Meyer is facing an ad hoc Star Chamber of IC Inspectors General, all of them apparently gunning for his swift removal.
It's Time to Talk About Internet Companies' Content Moderation Operations
As discussed in the post below, on February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays. This first one comes from Professor Eric Goldman, who put together the conference, explaining the rationale behind the event and this series of essays.

Many user-generated content (UGC) services aspire to build scalable businesses where usage and revenues grow without increasing headcount. Even with advances in automated filtering and artificial intelligence, this goal is not realistic. Large UGC databases require substantial human intervention to moderate anti-social and otherwise unwanted content and activities. Despite the often-misguided assumptions by policymakers, problematic content usually does not have flashing neon signs saying "FILTER ME!" Instead, humans must find and remove that content—especially in borderline cases, where machines can't make sufficiently nuanced judgments.

At the largest UGC services, the number of people working on content moderation is eye-popping. By 2018, YouTube will have 10,000 people on its "trust & safety teams." Facebook's "safety and security team" will grow to 20,000 people in 2018.

Who are these people? What exactly do they do? How are they trained? Who sets the policies about what content the service considers acceptable? We have surprisingly few answers to these questions. Occasionally, companies have discussed these topics in closed-door events, but very little of this information has been made public.

This silence is unfortunate. A UGC service's decision to publish or remove content can have substantial implications for individuals and the community, yet we lack the information to understand how those decisions are made and by whom. Furthermore, the silence has inhibited the development of industry-wide "best practices." UGC services can learn a lot from each other—if they start sharing information publicly.

On Friday, a conference called "Content Moderation and Removal at Scale" will take place at Santa Clara University. (The conference is sold out, but we will post recordings of the proceedings, and we hope to make a live-stream available.) Ten UGC services will present "facts and figures" about their content moderation operations, and five panels will discuss cutting-edge content moderation issues. For some services, this conference will be the first time they've publicly revealed details about their content moderation operations. Ideally, the conference will end the industry's norm of silence.

In anticipation of the conference, we assembled ten essays from conference speakers discussing various aspects of content moderation. These essays provide a sample of the conversation we anticipate at the conference. Expect to hear a lot more about content moderation operational issues in the coming months and years.

Eric Goldman is a Professor of Law, and Co-Director of the High Tech Law Institute, at Santa Clara University School of Law. He has researched and taught Internet Law for over 20 years, and he blogs on the topic at the Technology & Marketing Law Blog.
Fitness Tracker Data Exposes Military Operations, Shows What Damage Can Be Done With 'Just Metadata'
Last November, Strava Labs released its "global heatmap" -- a stockpile of data created by millions of health-conscious people worldwide. Strava Labs is the GPS brain many fitness trackers rely on, allowing devices to log the billions of steps recorded by millions of users. The company pulls data from big players like FitBit and Jawbone, as well as having its own fitness-tracking app. Here's what Strava Labs handed over to the general public:

1 billion activities
3 trillion latitude/longitude points
13 trillion pixels rasterized
10 terabytes of raw input data
A total distance of 27 billion km (17 billion miles)
A total recorded activity duration of 200 thousand years
5% of all land on Earth covered by tiles

Here's what Strava's activity data looks like transposed on a map. All this metadata -- anonymized GPS points -- builds up quite a record of human movement. On top of tracking favorite jogging routes, the data is detailed enough to indicate where frequent exercisers live and work. This has been a problem for a few years now.

Two years before this data was published, Strava announced a new feature which allowed users to turn solo workouts into ad hoc competitions.
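It doesn't take much code to see why "anonymized" GPS points still give the game away. A rough sketch in Python, with made-up coordinates purely for illustration: bin each point into a coarse grid cell and count visits, and the most-visited cell becomes a strong guess at where a runner's routes start and end.

# Back-of-the-envelope sketch of location inference from "just metadata":
# bin GPS points into coarse grid cells and count visits. The coordinates
# below are invented for illustration.
from collections import Counter

points = [
    (40.7411, -73.9897), (40.7409, -73.9901), (40.7412, -73.9899),  # clustered
    (40.7130, -74.0060),                                            # one-off
]

def cell(lat, lon, size=0.001):
    # Snap each coordinate to a grid of `size` degrees (~100m of latitude).
    return (round(lat / size), round(lon / size))

visits = Counter(cell(lat, lon) for lat, lon in points)
# The densest cell is where activities repeatedly start and stop -- around
# a military base, that clustering is exactly what the heatmap exposed.
print(visits.most_common(1))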
Leaked Trump Plan To 'Nationalize' Nation's 5G Networks A Bizarre, Unrealistic Pipe Dream
There's been a lot of hand-wringing and hyperventilation over a new report claiming that the Trump administration wants to nationalize the nation's looming fifth-generation (5G) wireless networks, despite the fact that the proposal has a snowball's chance in hell of ever actually materializing. According to a leaked PowerPoint deck and memo drafted by a "Senior National Security Council official," the Trump administration wants the U.S. government to build and own a centralized, government-controlled 5G network in order to, purportedly, fight Chinese hackers.

More specifically, the memo claims this plan would be akin to the "21st century equivalent of the Eisenhower National Highway System," creating a "new paradigm" for the wireless industry and for national security. Fear of Chinese hackers drives the proposal from stem to stern, suggesting the plan needs to be completed in three years to protect American interests worldwide:
Daily Deal: The Big Data Bundle
Learn how to work with large databases and massive data sets with the Big Data Bundle. The 9 courses will help you learn how to process and manage enormous amounts of data efficiently. You'll learn to use Hadoop, Spark, Pig and more. This bundle is on sale for only $19.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Dutch Approach To Asset Forfeiture Will Literally Take The Clothes Off Pedestrians' Backs
We've long complained about civil asset forfeiture in the United States. Law enforcement agencies, thanks to a series of perverse incentives, have grown to love taking people's property (usually cash) without charging them with crimes. The excuse is that lifting a few thousand dollars from some random person somehow chips away at drug cartels located overseas.

It would seem more crippling to pursue criminal charges and have suspects interrogated, jailed, and flipped. But law enforcement has no time for that, not when a pile of cash is only a few pieces of paperwork away from changing ownership.

They're taking asset forfeiture to a whole new level over in the Netherlands. Dutch cops will now be taking the clothes off people's backs if they "suspect" the clothing might be out of the spending range of the person wearing it. (h/t Charles C.W. Cooke)
Another Day, Another Flimsy Report Claiming TV Cord Cutting Won't Save You Money
Once a month like clockwork, somebody in the tech press proudly decides to inform their readers that you can't save any money by cutting the traditional TV cord and going with cheaper, more flexible streaming alternatives. The logic in these reports almost always goes something like this: "Once I got done signing up for every damn streaming video service under the sun, I found that I wasn't really saving much money over traditional cable."

Writers leaning into this lazy hot take almost always tend to forget a few things. One, the same broadcasters dictating cable TV rates dictate streaming video rates, so in some ways pricing will be lateral. Two, adding a dozen streaming services to exactly match your bloated, 300-channel cable subscription misses the entire point of cord cutting, which is about customization and flexibility. Three, if writers actually stopped and talked to real consumers (like in the cord cutting subreddit), they'd be told (repeatedly) how customers routinely save money each month by breaking free of the traditional, bloated cable TV bundle.

Last week it was Quartz's turn to prop up the flimsy narrative that "streaming's live-TV bundles aren't actually saving cord-cutters money." Their report was at least somewhat more scientific in nature, leaning heavily on data provided by a research firm by the name of M Science, which acknowledged that the average cord cutter saves around $20 per month by going with a streaming alternative. But the firm then tried to claim that these savings disappeared when you factored in cable company "triple play" bundles:
The NFL Pretending Trademark Law Says Something It Doesn't Leads To Hilariously Amateurish Ads For 'The Big Game'
Every year, right about this time, this site is forced to remind everyone that the NFL is completely full of crap when it comes to how it enforces its supposed trademark rights for the Super Bowl. While the NFL does indeed have some rights to the phrase and to controlling how it's used, those rights generally amount to prohibiting companies from falsely implying sponsorship of the game or a relationship with the NFL in commercial speech. What the NFL pretends is the case, on the other hand, is that it can somehow prohibit any company from even mentioning the Super Bowl in any context, up to and including simple factual statements.

All of this leads to the absurdity of every company that has chosen not to sponsor the NFL diving into the euphemism business, gleefully referring to the Super Bowl by any other name. "The Big Game" is the most popular of these, although the NFL has actually gone so far as to look into trademarking that phrase as well. The end result is the Picasso-ing of reality in which companies make references that every member of the public gets but that fall short of calling the NFL's biggest show by its proper name, something you would think the NFL would want everyone everywhere talking about.

With the Super Bowl a week away, we're already seeing this practice ramp back up. In Philadelphia, the home city of one of the competing teams, some small local businesses are getting into the act in hilarious ways.
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side comes in response to our post about law enforcement's use of "parallel construction". That One Guy suggested a tweak to the language:
This Week In Techdirt History: January 21st - 27th
Five Years Ago

This week in 2013, the world continued to react to the death of Aaron Swartz, with more attention being turned towards prosecutorial misconduct, and direct criticism of the handling of Swartz's case — though US Attorney Carmen Ortiz doubled down and said her office wouldn't change anything. Meanwhile, we looked at the many other cases of prosecutors bullying "hackers", while misguided editors at the Globe & Mail were spewing nonsense and hackathons around the world were preparing to carry on Aaron Swartz's work.

Ten Years Ago

This week in 2008, while AT&T was getting ready to filter copyrighted content at the ISP level, Time Warner was rolling out its overage charges for heavy users — and, funny thing, Time Warner-owned HBO was simultaneously putting its shows online for the first time. And Canadian lobbyists were pushing to make ISPs liable for piracy themselves. Meanwhile, we saw two trends in their infancy: adults moving into the young person's world of social media (to the consternation of many young people), and PC game companies experimenting with the freemium model that would later become a staple of mobile gaming (this was before the PC publishers figured out they could charge $60 for the game and have microtransactions).

Fifteen Years Ago

Early this week in 2003, it was the RIAA seeking money from ISPs, with then-head Hilary Rosen calling for a P2P levy — though a journalist who called the RIAA found them denying she said it, and claiming the opposite. But then, midweek, Rosen announced her resignation. Meanwhile, Microsoft was introducing its own DRM technology, while Sony was trying out some DRM that charged people $2 to copy a song from a CD. Amidst this anti-circumvention obsession, tech firms were getting more aggressive in their fight against Hollywood's DRM demands.
'We Shall Overcome' Overcomes Bogus Copyright Claim -- Officially In The Public Domain
The same legal team that helped get the song "Happy Birthday" officially cleared into the public domain has done it again with the song "We Shall Overcome." As we wrote about, the same team filed a similar lawsuit against The Richmond Organization and Ludlow Music, who claimed a highly questionable copyright in the famous song "We Shall Overcome." As the lawsuit showed, the song had a lengthy history long before Ludlow's copyright claim.

Last September, the judge made it clear that the song's claimed copyright was on weak grounds, rejecting arguments that key parts of the song were subject to copyright. Apparently, Ludlow Music tried to salvage something out of the wreck by just promising to offer a "covenant not to sue" against the plaintiffs... which the judge said wasn't good enough earlier this month.

So, now the two sides have come to a settlement clearly admitting that the song is in the public domain:
Pablo Escobar's Brother Gives Up His Quest For A Billion Dollar Extortion Of Netflix Over 'Narcos'
You will likely know that we've been following the absurd threats that Roberto Escobar, brother to and former accountant for noted drug kingpin Pablo Escobar, launched at Netflix and the makers of its hit show Narcos. The threats kicked off as something of a publicity rights challenge, with Roberto Escobar demanding one billion dollars over a show in which he does not appear and is not named. Escobar appears to believe that his knowledge of the inner workings of the Escobar cartel somehow granted him authority over the show, while pretty much everyone else has agreed that the First Amendment would ultimately torpedo any lawsuit that might actually get filed.

But then things got even stranger. Escobar's lawyers began making noises that indicated the show was about to capitulate to the threats and demands. Meanwhile, the legal team on the other side was at the exact same time pointing out just how absurd and fictitious some of Escobar's claims were, such as that he had been using the term "Narcos" in conjunction with operating a website and providing computer gaming services on a computer network since 1986. For those of you who are too young to remember a time without a widespread internet, there basically was no such thing as a publicly facing website in 1986. Meanwhile, a location scout for the show was murdered in Mexico while scouting for the series' fourth season, with Escobar offering cryptic and coy commentary on the matter that bordered on suggesting he was somehow involved.

All of that had just happened in the fall, which might make it slightly less surprising that this whole thing will now go away.
New York Police Union Sues NYPD To Block Public Release Of Body Camera Footage
The route to equipping the NYPD with body cameras ran through a federal courtroom. As part of the remedies handed down in a civil rights lawsuit against the NYPD's stop-and-frisk program, body cameras became required equipment for officers.

NYPD officials seemed to support the plan. Not so the ostensible representative of the NYPD, the Patrolmen's Benevolent Association (PBA). The NYPD's union fought the cameras much as it has fought anything with a hint of accountability. A report by the NYPD's internal oversight body found officers much less concerned about body cameras and access to footage than their supposed union reps.

A long-delayed camera policy finally rolled out, making it clear cameras would serve officers and prosecutors much more than they would the general public. Now, the PBA is going to court to block the release of camera footage to the public. The PBA hopes the court will read public records laws the way it does, tossing body cam footage into the gaping hole of New York public records exemptions.
Harvard Study Shows Community-Owned ISPs Offer Lower, More Transparent Prices
We've routinely noted how countless communities have been forced to explore building their own broadband networks thanks to limited competition in the market. As most of you have experienced firsthand, this lack of competition routinely results in higher prices, slower speeds, worse customer service, and massive broadband deployment gaps. And thanks to telecom industry regulatory capture (taken to an entirely new level during the Trump administration), countless well-heeled lawmakers make it a personal mission to keep things that way.

Needless to say, the threat posed by angry users building or supporting their own networks is a major reason ISPs have lobbied (read: literally bought and written) laws in twenty-one states banning towns and cities from pursuing this option. In some states, towns and cities are even banned from striking public/private partnerships, often the only creative solution available to the traditional broadband duopoly logjam.

Not too surprisingly, a new study out of Harvard details just what AT&T, Verizon, Comcast and Charter (Spectrum) are afraid of. The study found that, averaged over a four-year period, service offered by community ISPs tends to be significantly cheaper than broadband service made available by privately owned alternatives. In some areas, the researchers couldn't directly compare community-owned broadband with private service, either because the private ISP in question couldn't even offer the FCC definition of broadband (25 Mbps downstream, 3 Mbps upstream), or because ISPs went to great lengths to prevent users from seeing their actual prices. But in 23 out of 27 cases, the community option provided lower prices:
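As a rough illustration of the comparison the researchers had to make, here's a minimal Python sketch. The 25/3 Mbps threshold is the FCC broadband definition cited above; the ISP names, plans, and prices are entirely hypothetical:

# Minimal sketch of the comparison logic described above, using made-up
# plans and prices. Only the 25/3 Mbps threshold (the FCC's broadband
# definition) comes from the study; everything else is hypothetical.

FCC_DOWN_MBPS = 25
FCC_UP_MBPS = 3

plans = [
    {"isp": "Community ISP", "down": 100, "up": 20, "price": 55.00},
    {"isp": "Private ISP A", "down": 20,  "up": 2,  "price": 60.00},
    {"isp": "Private ISP B", "down": 50,  "up": 5,  "price": 75.00},
]

def meets_fcc_definition(plan):
    # A plan only counts as "broadband" at 25 Mbps down / 3 Mbps up or better.
    return plan["down"] >= FCC_DOWN_MBPS and plan["up"] >= FCC_UP_MBPS

for plan in plans:
    status = "meets" if meets_fcc_definition(plan) else "fails"
    print(f'{plan["isp"]}: {status} the FCC definition at ${plan["price"]:.2f}/mo')

# Plans that fail the definition (like Private ISP A above) can't be
# directly compared at all, which is one reason the study could only
# run the price comparison in 27 cases.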
Senate IT Tells Staffers They're On Their Own When It Comes To Personal Devices And State-Sponsored Hackers
Notification of state-sponsored hacking attempts has revealed another weak spot in the US government's defenses. The security of the government's systems is an ongoing concern, but the Senate has revealed it's not doing much to ensure sensitive documents and communications don't end up in the hands of foreign hackers.

The news of the hacking attempt was greeted with assurances that nothing of value was taken.
Daily Deal: Paww WaveSound 3 Noise-Cancelling Bluetooth Headphones
The WaveSound 3 headphones strike an elegant chord of premium sound quality, active noise cancellation, and comfort. Combining a state-of-the-art CSR chipset with multiple microphones, the WaveSound 3s block out as much as 20dB of unwanted ambient noise, independent of the ANC function. They feature two 40mm neodymium drivers to create a balanced, punchy sound, and fold easily into the included case. Whether you fly a lot, work in a noisy office, or just enjoy precious silence, these headphones will give you a listening experience free from distractions. They are on sale for $80.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
FBI Director Chris Wray Says Secure Encryption Backdoors Are Possible; Sen. Ron Wyden Asks Him To Produce Receipts
I cannot wait to see FBI Director Christopher Wray try to escape the petard-hoisting Sen. Ron Wyden has planned for him. Wray has spent most of his time as director complaining about device encryption. He continually points at the climbing number of locked phones the FBI can't crack. This number signifies nothing, not without more data, but it's illustrative of Wray's blunt force approach to encryption.

I'm sure Wray views himself as a man carefully picking his way through the encryption minefield. But there's nothing subtle about his approach. He has called encryption a threat to public safety. His lead phone forensics person has called Apple "evil" for offering it to its users. He has claimed the move to default encryption is motivated by profit. And if that's not the motivation, then it's probably just anti-FBI malice. Meanwhile, he claims the FBI has nothing but the purest intentions when it calls for encryption backdoors, even while Wray does everything he can to avoid using that term.

He claims the solution is out there -- a perfect, seamless blend of secure encryption and easy law enforcement access. The solution, he claims, is most likely deliberately being withheld by the "smart people." These tech companies that have made billionaires of their founders are filled with the best nerds, but they're just not applying themselves. Wray asserts -- without evidence -- that secure encryption backdoors are not only possible, but probable.

Senator Ron Wyden has had enough. He's calling out Director Wray on his bullshit. Publicly. His letter [PDF] demands Wray hand over information on his encryption backdoor plans. Specifically, Wyden wants Wray to name names. [via Kate Conger at Gizmodo]
FCC Hopes Its Phony Dedication To Rural Broadband Will Make You Forget It Killed Net Neutrality
The FCC and its large ISP allies are trying to change the subject in the wake of their hugely unpopular attack on net neutrality. With net neutrality having such broad, bipartisan support, the FCC is trying to shift the conversation away from net neutrality (which, remember, is just a symptom of a lack of broadband competition) toward a largely hollow focus on expanding broadband to rural areas. The apparent goal: to convince partisans that net neutrality is only a concern among out-of-touch Hollywood elites, and that the FCC is hard at work on the real problem: deploying broadband to forgotten America.

This attempted pivot was exemplified in a statement last week by FCC boss Ajit Pai, when he tried to argue his attack on net neutrality was already magically paying dividends for broadband expansion:
Sarajevo's City Government Says No One Can Use The Name 'Sarajevo' Without Its Permission
The city of Sarajevo passed a law in 2000 forbidding anyone but the city of Sarajevo from using the name Sarajevo. Not much has been said about it because the Sarajevo city council hasn't done much about it. But recently, owners of Facebook pages containing the word "Sarajevo" have been receiving legal threats from the city's government.

Sarajevo resident Aleksandar Todorović wrote a long blog post detailing the stupidity of this law, which contains firsthand accounts of Facebook page owners who've been threatened with criminal proceedings for failing to secure permission to use the name of a city on their pages. As Todorović notes, his blog post is illegal, simply because it hasn't been pre-approved by Sarajevo's city council.

The law can be read here (and loosely translated by Google). It basically states that the city owns the name and that anyone else wishing to use it must first ask the city council for permission. It also states there are some requests that just aren't going to be granted.
Genome Of A Man Born In 1784 Recreated From The DNA Of His Descendants
The privacy implications of collecting DNA are wide-ranging, not least because they don't relate solely to the person from whom the sample is taken. Our genome is a direct product of our parents' genetic material, so the DNA strings of siblings from the same mother and father are closely related. Even that of more distant relations has many elements in common, since they derive from common ancestors. Thus a DNA sample contains information not just about the donor, but about many others on the relevant family tree as well. A new paper published in Nature Genetics (behind a paywall, unfortunately) shows how that fact enables the genomes of long-dead ancestors to be reconstructed, using just the DNA of their descendants.

As an article in Futurism explains, the unique circumstances of the individual chosen for the reconstruction, the Icelander Hans Jonatan, aided the research team as they sought to piece together his genome nearly two centuries after his death in 1827. The scientists mainly came from the Icelandic company deCODE Genetics, one of the pioneers in the world of genomics, and highly familiar with Iceland's unique genetic resources. The following factors were key:
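To make the underlying intuition concrete, here is a toy Python sketch (emphatically not deCODE's actual method) of how stretches of DNA inherited by different descendants can collectively tile much of an ancestor's genome. All names and segment coordinates below are invented for illustration:

# Toy illustration of the idea behind ancestor genome reconstruction,
# not deCODE's actual method. Each descendant carries some stretches of
# DNA inherited from the target ancestor; overlapping stretches from
# enough descendants can tile most of the ancestor's genome.

# (start, end) intervals each descendant is assumed to have inherited
# from the ancestor, on a toy 100-unit "genome" (all values made up)
inherited_segments = {
    "descendant_1": [(0, 40), (70, 100)],
    "descendant_2": [(30, 80)],
    "descendant_3": [(55, 95)],
}

GENOME_LENGTH = 100
covered = [False] * GENOME_LENGTH

for segments in inherited_segments.values():
    for start, end in segments:
        for position in range(start, end):
            covered[position] = True

fraction = sum(covered) / GENOME_LENGTH
print(f"Reconstructable fraction of the ancestor's genome: {fraction:.0%}")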
Vice Media Goes After Vice Industry Token, A Porn Crypto-Currency Company, For Trademark
The last time we checked in with Vice Media, it was firing off a cease and desist letter to a tiny little punk band called ViceVersa, demanding that it change its name because Vice Media has a trademark for the word "vice" in several markets. In case you thought that occurrence was a one-off for Vice Media, or the result of an overzealous new hire to the company's legal team, Vice Media is again trademark bullying another company, Vice Industry Token. VIT is apparently a pornography cryptocurrency company, which is a three-word combination that I bet god herself could never have imagined being uttered. The claim in the C&D notice that VIT got is, of course, that Vice Media has a "vice" trademark and that this use infringes upon it.
Harris Stingray Nondisclosure Agreement Forbids Cops From Telling Legislators About Surveillance Tech
The FBI set the first (and second!) rules of Stingray Club: DO NOT TALK ABOUT STINGRAY CLUB. Law enforcement agencies seeking to acquire cell tower spoofing tech were forced to sign a nondisclosure agreement forbidding them from disclosing details on the devices to defendants, judges, the general public… sometimes even prosecutors.

A new wave of parallel construction washed over the land, distancing defendants from the source of evidence used against them. Pen register orders -- used to cover the tracks of Stingray searches -- started appearing en masse, as though it was 1979 all over again. If curious lawyers and/or judges started sniffing around, agencies were instructed to let accused criminals roam free rather than expose details about Stingray devices. According to the FBI, public safety would be irreparably damaged if Stingray details were exposed. Apparently the return of dangerous criminals to the street poses no harm to the public.

Another NDA has been uncovered, thanks to a lengthy public records lawsuit. The document finally handed over by Delaware State Police to the ACLU was once referred to as "mythical" by the DSP in court. Yes, the State Police once claimed this NDA never existed. It did so while claiming it had zero communications with Harris while acquiring its Stingray. The ACLU obviously found this hard to believe and the court sent the DSP back to search harder. The Harris NDA is real. And it's spectacular.
TPP Is Back, Minus Copyright Provisions And Pharma Patent Extensions, In A Clear Snub To Trump And The US
As Techdirt noted back in November, the Trans-Pacific Partnership (TPP) agreement was not killed by Donald Trump's decision to pull the US out of the deal. Instead, something rather interesting happened: one of the TPP's worst chapters, dealing with copyright, was "suspended" at the insistence of the Canadian government, which suddenly took on a leading role. At the time, it wasn't clear whether this was merely a temporary ploy, or was permanent. With news that the clumsily named "Comprehensive and Progressive Agreement for Trans-Pacific Partnership" (CPTPP) has been "concluded", it now seems that the exclusion of both copyright and pharma patent extensions is confirmed. As Michael Geist writes:
Spanish Government Uses Hate Speech Law To Arrest Critic Of The Spanish Government
Spain's government has gotten into the business of regulating speech, with predictably awful results. An early adopter of Blue Lives Matter-esque policies, Spain went full police state, passing a law making it a crime to show "disrespect" to law enforcement officers. The predictable result? The arrest of someone for calling cops "slackers" in a Facebook post.

Spain's government is either woefully unaware of the negative consequences of laws like this or, worse, likes the negative consequences. After all, it doesn't hurt Spain's government beyond a little reputational damage. It only hurts residents of Spain. When you're already unpopular, thanks to laws like these and the suppression of a Catalan independence vote, what difference does it make if you're known better for shutting down dissent than for actually protecting citizens from hateful speech?

One Catalan resident is getting the full "hate speech" rap-and-ride.
Daily Deal: LithiumCard Pro Retro Series Lightning Battery Chargers
Hop on that '80s nostalgia train with these portable LithiumCard Pro Retro Series Lightning Battery Chargers! The $40 battery uses 3.0 amp HyperCharging Generation 2 technology to deliver an ultra-fast charge via Lightning cable for your Apple devices, and you can charge a second USB-equipped device by plugging your personal USB charging cable into the additional USB Type-A port. The fully integrated and retractable charge/sync cable lets you keep all the wires organized, while a tri-color LED battery capacity gauge lets you know when the battery is running low. Choose between a mix tape, boom box, or classic Nintendo controller design.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Rupert Murdoch Admits, Once Again, He Can't Make Money Online -- Begs Facebook To Just Give Him Money
There's no denying that Rupert Murdoch built up quite a media empire over the decades -- but that empire was almost entirely focused on newspapers and pay TV. While he's spent the past few decades trying to do stuff on the internet, he has an impressively long list of failures over the years. There are many stories of him buying internet properties (Delphi, MySpace, Photobucket) or starting them himself (iGuide, Fox Interactive, The Daily) and driving them into the ground (or just flopping right out of the gate). While his willingness to embrace the internet early and to try things is to be commended, his regular failures to make his internet ventures successful have pretty clearly soured him on the internet entirely over the years.

Indeed, over the past few years, Murdoch or Murdoch surrogates (frequently News Corp CEO Robert Thomson) have bashed the internet at every opportunity, no matter how ridiculous. Almost all of these complaints can be summed up simply: big internet companies are making money and News Corp. isn't -- and therefore the problem is with those other companies, which should be forced to give News Corp. money.

A few years back, I ended up at a small media conference where Rupert's son James Murdoch spoke at great length about his plans for News Corp's internet business -- and what struck me was that he was almost 100% focused on copying the pay TV model. This wasn't a huge surprise -- I think at the time he was running Sky TV -- but it shocked me that he appeared to think that, through force of will, he could turn the internet into a walled garden a la cable and satellite TV systems. Not surprisingly, Rupert is thinking along similar lines, and earlier this week he released a bizarre and silly statement saying Facebook should start paying news sites "carriage fees" a la cable companies:
The GAO Says It Will Investigate Bogus Net Neutrality Comments, Eventually
The Government Accountability Office (GAO) says it will launch an investigation into the fraud that occurred during the FCC's rushed repeal of net neutrality rules. Consumers only had one real chance to weigh in during the public comment period of the agency's misleadingly named "Restoring Internet Freedom" proposal. But "somebody" paid a group or individual to fill the comment period with bogus comments from fake or even dead people, in a ham-fisted attempt to downplay massive, legitimate public opposition to the plan.

The FCC then blocked a law enforcement investigation into the fraud, refusing to hand over server logs or API key data that could easily disclose the culprit(s). FOIA requests and public requests for help (one coming from myself) were also promptly ignored by the Trump FCC.

To help speed things along, the GAO says it will launch an investigation into the bogus comments and the FCC's response to them, though it warns in a letter that it may be at least five months before it has the staff and resources for such an inquiry:
Disrupting The Fourth Amendment: Half Of Law Enforcement E-Warrants Approved In 10 Minutes Or Less
Law enforcement officers will often testify that seeking warrants is a time-consuming process that subjects officers' sworn statements to strict judicial scrutiny. The testimony implies the process is a hallowed tradition that upholds the sanctity of the Fourth Amendment, hence its many steps and plodding pace. The problem is that law enforcement officers make these statements most often when defending their decision to bypass the warrant process.

Criminals move too fast for the warrant process, they argue. Officers would love to respect the Fourth Amendment, but seem to feel this respect is subject to time constraints. Sometimes they have a point. And when they have a legitimate point, they also have a legitimate exception: exigent circumstances. In truly life-threatening situations, the Fourth Amendment can be shoved aside momentarily to provide access to law enforcement officers. (The exception tends to swallow the rule, though. Courts have pushed back, but deference to officers' assertions about exigency remains the status quo in most courtrooms.)

The exigent circumstances exception remains intact, something law enforcement can lean on when the warrant process takes too long. When lives or evidence are at stake, sometimes corners have to be cut to ensure officers can get their man/woman and any evidence on hand. But the oft-stated claim that warrant acquisition is a long and difficult process is undercut completely when the underlying facts about warrant approval are examined. Jessica Miller and Aubrey Weaver of The Salt Lake Tribune took a close look at electronic warrants approved by Utah judges and found even the most exigent of exigent circumstances rarely evolve faster than warrants can be obtained.
Danish Police Charge Over 1,000 People With Sharing Underage Couple's Sexting Video And Images
Techdirt posts about sexting have a depressingly similar story line: young people send explicit photos of themselves to their partners, and one or both of them end up charged with distributing or possessing child pornography. Even more ridiculously, the authorities typically justify branding young people who do this as sex offenders on the grounds that it "protects" the same individuals whose lives they are ruining. Judging by a story in The Local, reporting on a press release that first appeared on the MyNewsDesk site (original in Danish), the police in Denmark seem to be taking a more rational approach. Rather than charging the two young people involved for sexting, they are charging 1,004 people who shared the video and images afterwards, some several hundred times:
Denuvo Sold To Irdeto, Which Boasts Of Acquiring 'The World Leader In Gaming Security'
Any reading of our thorough coverage of Denuvo DRM could be best summarized as: a spasm of success in 2015 followed by one of the great precipitous falls into failure over the subsequent two years. While some of us are of the opinion that all DRM, Denuvo included, is destined for eventual failure, what sticks out about Denuvo is just how stunningly fast its fall from relevancy has come about. Once heralded as "the end of game piracy," even the most recent iterations of Denuvo's software are being cracked on a timeline of days and hours. You would be forgiven if, having read through all of this, you thought that Denuvo was nearly toxic in gaming and security circles at this point.

But apparently not everyone thinks this is true. Irdeto, the company out of the Netherlands we last saw pretending that taking pictures of toys is copyright infringement and insisting that a real driver of piracy was winning an Oscar, has announced that it has acquired Denuvo.