|
by Daily Deal on (#3F8BJ)
Even the best writers make errors. WhiteSmoke checks your work for grammar, spelling, punctuation, and style errors -- so you never send off a flawed work email again. Whether you're writing on mobile or desktop, this easy-to-use software is compatible with all browsers, includes a translator for over 50 languages, and lets you perfect your writing virtually anywhere you do it. A 1-year subscription is on sale for $19.99, or pay once for unlimited access for $69.99. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
Techdirt
| Link | https://www.techdirt.com/ |
| Feed | https://www.techdirt.com/techdirt_rss.xml |
| Updated | 2025-11-21 09:15 |
|
by Tim Cushing on (#3F83Y)
The Privacy Commissioner of Canada is proposing something dangerous. Given the Canadian Supreme Court's ruling in the Equustek case -- which basically said Canada's laws are now everybody's laws -- a recent report issued by the Commissioner that reads something into existing Canadian law should be viewed with some concern. Michael Geist has more details.
|
|
by Karl Bode on (#3F7HG)
If you've been playing along at home, Trump's FCC hasn't been particularly kind to consumers, competition, or the health of the internet. It has, however, been a massive boon to major ISPs terrified of disruption and competition, especially those looking to forge new media monopolies where they dominate both the conduit -- and the content -- coming to the home. Under Pai, the FCC has gutted broadband programs for the poor, protected the cable industry's monopoly over the cable box from competition, made it easier for prison phone monopolies to rip off inmate families, dismantled generations-old media consolidation rules simply to aid Sinclair Broadcasting's merger ambitions, killed meaningful broadband privacy protections, tried to weaken the standard definition of broadband (to help hide competition gaps), and weakened rules preventing business broadband and backhaul monopolies from abusing smaller competitors, hospitals, or schools. And that's before you even get to Pai's attack on net neutrality, potentially one of the least popular tech policy decisions in the history of the modern internet. That entire calamity is a universe unto itself, with the FCC currently under investigation for turning a blind eye to identity theft and fraud during the open comment period, as well as for bizarrely making up a DDOS in a ham-fisted attempt to downplay the public's disdain for Pai's agenda. It will take many years and numerous lawsuits for the problems with Pai's rushed repeal of the rules to fully materialize. With Pai's tenure seen as a shitshow in the wake of the net neutrality repeal, the FCC recently tried to undertake an image reclamation effort. That came in the form of a press release (pdf) lauding what the FCC calls a "year of action and accomplishment" in terms of "protecting consumers," "promoting investment," and "bridging the digital divide." You just know the FCC under Pai is doing a good job because, uh, graphics: Amusingly, the lion's share of the agency's listed "accomplishments" were noncontroversial projects simply continued from the last FCC under Tom Wheeler. That includes efforts to open additional spectrum for wireless use, attempts to speed up cell tower placement, and ongoing efforts to reduce robocalls (the impacts of which aren't apparent). Many of the listed efforts are just the FCC doing its job, ranging from conducting an investigation into the recently botched Hawaii ballistic missile snafu, to "approving new wireless charging tech" that nobody thought should be blocked anyway. Elsewhere, the agency's accomplishment list engages in willful omission. For example, while the FCC pats itself on the back for creating a "broadband deployment advisory council," it ignores the fact that said council is plagued by allegations of cronyism and dysfunction in the wake of recent resignations. The FCC similarly pats itself on the back for the agency's Puerto Rico hurricane response, despite the fact that locals there say the federal government and the FCC failed spectacularly in their response to the storm. But it's the agency's claims of consumer protection that continue to deliver the best unintentional comedy. As you might expect, Pai's FCC continues to claim that killing net neutrality rules was a good thing because the rules devastated sector investment, a proven lie the agency simply can't stop repeating:
|
|
by Tim Cushing on (#3F754)
Missouri governor Eric Greitens and his staff are the targets of a recently filed public records lawsuit [PDF]. Two St. Louis County attorneys are accusing the governor of dodging public records laws with his use of Confide, an app that deletes text messages once they're read and prevents users from saving, forwarding, printing, or taking screenshots of the messages. The governor's use of the app flies in the face of the presumption of openness. The attorneys are hoping the court will shut down the use of Confide to discuss official state business. The governor has argued an injunction would constitute prior restraint.
|
|
by Timothy Geigner on (#3F6FW)
Nearly three years ago, Bell's Brewery, whose products I used to buy greedily, decided to oppose a trademark for Innovation Brewing, a tiny operation out of North Carolina. The reasons for the opposition are truly difficult to comprehend. First, Bell's stated that it uses the slogan "Bottling innovation since 1985" on some merchandise. This was only barely true. The slogan does appear on some bumper stickers that Bell's sells, and that's pretty much it. It appears nowhere on any of the brewery's beer labels or packaging. Also, Bell's never registered the slogan as a trademark. Bell's also says it uses the slogan "Inspired brewing" and argues that Innovation's name could create confusion in the marketplace because it's somehow similar to that slogan. This is a good lesson in why trademark bullying of this nature is a pox on any industry composed largely of small players, because it's only in the past few weeks that the Trademark Trial and Appeal Board in Virginia has ruled, essentially, that Bell's is full of crap.
|
|
by Mike Masnick on (#3F643)
It is no secret that the estate of Martin Luther King Jr. has a long and unfortunate history of trying to lock up or profit from the use of his stirring words and speeches. We've talked about this issue going back nearly a decade and it pops up over and over again. By now you've probably heard that the car brand Dodge (owned by Chrysler) used a recording of a Martin Luther King Jr. speech in a controversial Super Bowl ad on Sunday. It kicked up quite a lot of controversy -- even though his speeches have been used to sell other things in the past, including both cars and mobile phones. King's own heirs have been at war with each other and close friends in the past few years, suing each other as they each try to claim ownership over rights that they don't want others to have. Following the backlash around the Super Bowl ad, the King Center tried to distance itself from the ad, saying that it has nothing to do with approving such licensing deals:
|
|
by Adelin Cai on (#3F5VK)
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. Last week we published five of those essays, and this week we're continuing to publish more of them, including this one. The way platforms develop content moderation rules can seem mysterious or arbitrary. At first glance, the result of this seemingly inscrutable process is varying guidelines across different platforms, with only a vague hint of an industry standard -- what might be banned on one platform seems to be allowed on another. While each platform may have nuances in the way it creates meaningful content moderation rules, these teams generally seek to align with the platform's (and company's) purpose, and use policies and guidelines to support an overarching mission. The fact that different platforms deliver unique value propositions to users accounts for the variations in content moderation approaches. At Pinterest, our purpose is clear: we help people discover and do what they love by showing them ideas that are relevant, interesting, and personal. For people to feel confident and encouraged to explore new possibilities, or try new things on Pinterest, it's important that the Pinterest platform continues to prioritize an environment of safety and security. To accomplish that, a team of content policy professionals, skilled in collaborating across different technical and non-technical functions at the company, decides where we draw the lines on what we consider acceptable boundaries for content and behavior. Drawing upon the feedback of Pinterest users, and staying up to date on prevailing discourse about online content moderation, this team of dedicated content generalists brings diverse perspectives to bear upon the guidelines and processes that keep divisive, disturbing, or unsafe content off Pinterest. We know how impactful Pinterest can be in helping people make decisions in their daily life, like what to eat or what to wear, because we hear directly from the Pinterest community. We've also heard how people use Pinterest to find resources to process illness or trauma they may have experienced. Sometimes, the content that people share during these difficult moments can be polarizing or triggering to others, and we have to strike the right balance between letting people rely on Pinterest as a tool for navigating these difficult issues and living up to our goal of removing divisive, disturbing, or unsafe content. As a team, we have to consider the broad range of use cases for content on Pinterest. For example, important historical yet graphic images of war can be collected in the context of learning about world events, or to glorify violence. Our team takes different contextual signals into account during the review process in order to make meaningful content moderation choices that ensure a positive experience for our community. If we wish to have the impact we hope to have in people's lives, we must also take responsibility for their entire experience. To be responsible for the online environment that our community experiences, and to be aware of how that experience connects in a concrete way to their life offline, means we cultivate the humility to realize our team's limitations.
We can't claim to be experts in fields like grief counseling, eating disorder treatment, or suicide prevention -- areas that many groups and individuals have dedicated their careers to supporting -- so it's crucial that we partner with experts for the guidance, specialized skills, and knowledge that will enable us to better serve our community with respect, sensitivity, and compassion. A couple years ago, we began reexamining our approach to one particularly difficult issue -- eating disorders -- to understand the way our image-heavy platform might contribute to perpetuating unhealthy stereotypes about the ideal body. We had already developed strict rules about content promoting self-harm, but wanted to ensure we were being thoughtful about content offering "thinspiration" or unhealthy diets from all over the internet. To help us navigate this complicated issue, we sought out the expertise of the National Eating Disorders Association (NEDA) to audit our approach and understand all of the ways we might engage with people using the platform in this way. Prior to reaching out to NEDA, we put together a list of search queries and descriptive keyword terms that we believed strongly signaled a worrying interest in self-harm behaviors. We limit the search results we show when people seek out content using these queries, and also use these terms as a guide for Pinterest's operational teams to decide if any given piece of self-harm-related content should be removed or hidden from public areas of the service. The subject matter experts at NEDA generously agreed to review our list to see if our bar for problematic terms was consistent with their expert knowledge, and they provided us with the feedback we needed to ensure we were aligned. We were relieved to hear that our list was fairly comprehensive, and that our struggle with grey-area queries and terms was not unique. Since we began that partnership with NEDA, they have developed a rich Pinterest profile to inspire people by sharing stories of recovery, content about body positivity, and tips for self-care and illness management. By maintaining a dialogue with NEDA, the Pinterest team has continued to consider and operationalize innovative features to facilitate possible early intervention on the platform. For example, we provide people seeking eating disorder content with an advisory that also links to specialized resources on NEDA's website, and we supported their campaign for National Eating Disorders Awareness Week. Through another partnership and technical integration with Koko, a third-party service that provides platforms with automated and peer-to-peer chat support for people in crisis, we're also able to provide people who may be engaging in self-harm behaviors with direct, in-the-moment crisis prevention. Maintaining a safe and secure environment in which people can feel confident to try new things requires a multifaceted approach and multifaceted perspectives. Our team is well-equipped to grapple with broad online safety and content moderation issues, but we have to recognize when we might lack in-house expertise in more complex areas that require additional knowledge and sensitivity. We have much more work to do, but these types of partnerships help us adapt and grow as we continue to support people using Pinterest to discover and do the things they love.

Adelin Cai runs the Policy Team at Pinterest
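The keyword-based query limiting described above can be pictured with a minimal sketch. The snippet below is purely hypothetical -- the term list, advisory text, and function name are invented for illustration and are not drawn from Pinterest's actual systems -- but it captures the basic flow: a query containing a flagged term returns a truncated result set plus an advisory pointing to outside resources.

# Hypothetical sketch (not Pinterest's code) of keyword-based query limiting:
# queries matching flagged self-harm terms get fewer results plus an advisory.

# Illustrative term list; a real one would be curated with experts such as NEDA.
FLAGGED_TERMS = {"thinspiration", "thinspo", "pro-ana"}

ADVISORY = {
    "message": "If you're going through a difficult time, help is available.",
    "resource_url": "https://www.nationaleatingdisorders.org/",  # example resource link
}

def handle_search(query, results, limit=5):
    """Return (results, advisory) for a search query.

    If the query contains a flagged term, truncate the result set and attach
    an advisory; otherwise return the results unchanged with no advisory.
    """
    tokens = set(query.lower().split())
    if tokens & FLAGGED_TERMS:
        return results[:limit], ADVISORY
    return results, None

# Example: a flagged query gets a shortened result list and an advisory.
results, advisory = handle_search("thinspiration diet tips", [{"pin_id": i} for i in range(50)])
assert len(results) == 5 and advisory is not None

In practice, a filter like this would sit alongside human review and expert-maintained term lists rather than replace them, which is the point of the NEDA partnership described in the essay.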
|
|
by Timothy Geigner on (#3F5N2)
The evolution of the music industry's response to the fact that copyright infringement exists on the internet has been both plodding and frustrating. The industry, which has gone through stages including a focus on high-profile and punitive lawsuits against individual "pirates", its own flavors of copyright trolling, and misguided attempts to "educate" the masses as to why their natural inclinations are the worst behavior ever, has since settled into a mantra that site-blocking censorship of the internet is the only real way to keep the music industry profitable. All of this stems from a myopic view on piracy held by the industry: that it is always bad for every artist any time a music file is downloaded for free as opposed to purchased off of iTunes or wherever. We have argued for years that this view is plainly wrong and far too simplistic, and that there is actually plenty of evidence that, for a large portion of the music industry, piracy may actually be a good thing. Well, there has been an update to a study, first publicized as a work in progress several years ago, run out of Queen's University and published in the Information Economics and Policy journal. Based on that study, it looks like attempts to shut down filesharing sites would not just be ineffectual, but disastrous for both the music industry as a whole and especially new and smaller-ticket artists. The most popular artists, on the other hand, tend to be more hurt by piracy than helped. That isn't to be ignored, but we must keep in mind that the purpose of copyright law is to get more art created for the benefit of the public, and it seems obvious that the public most benefits from a larger, successful music ecosystem as opposed to simply getting more albums from the most popular artists. The methodology in the study isn't small peanuts, either. It considered 250,000 albums across five million downloads and looked to match up the pirating of those works and what effect that piracy had in the market for that music.
|
|
by Tim Cushing on (#3F5G7)
What has DRM taken from the public? Well, mainly it's the concept of ownership. Once an item is purchased, it should be up to the customer to use it how they want to. DRM alters the terms of the deal, limiting customers' options and, quite often, routing them towards proprietary, expensive add-ons and repairs. But the question "What would we have without DRM?" is a bit more slippery. The answers are speculative fiction. This isn't to say the answers are unimportant. It's just that it's tough to nail down conspicuous absences. The nature of DRM is that you don't notice it until it prevents you from doing something you want to do. DRM -- and its enabler, the anti-circumvention clause of the DMCA -- ties customers to printer companies' ink. It ties Keurig coffee fans to Keurig-brand pods. It prevents farmers from repairing their machinery and prevents drivers from tinkering with their cars. It prevents the creation of backups of digital and physical media. It can even keep your cats locked out of their pricey restroom. To better show how DRM is stifling innovation, Boing Boing's Cory Doctorow and the EFF have teamed up to produce a catalog of "missing devices": useful tech that could exist, but only without DRM.
|
|
by Daily Deal on (#3F5G8)
Qi wireless charging is all the rage now that the new iPhones are finally compatible with it. Handcrafted from North American walnut wood, this minimalist charger is easily portable and capable of charging your smartphone fast. Your device is automatically disconnected when it is fully charged, and a smart light indicates power-on, charging, and fully-charged status. The Takieso Walnut Qi Charger is on sale for $34.90. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Mike Masnick on (#3F5B2)
The VC Star has a slightly bizarre article about a school board trustee of the Conejo Valley Unified School District (in Southern California) named Mike Dunn, who apparently was upset about a speech given by a mother at a board meeting. That mother -- Jessica Weihe -- also blogs on the site AnonymousMommy.com (though as far as I can tell, she was not "anonymous" in that people in the community appeared to know who she was). Weihe gave a perhaps slightly rambling speech at a recent board meeting. The details appear to be somewhat specific to some district policies on handling "mature" books, but suffice it to say that it appears that Dunn was arguing against certain books being on the curriculum because he felt their content was inappropriate. Among the books that there was some controversy about was Sherman Alexie's quite well-known book The Absolutely True Diary of a Part-Time Indian. Weihe's speech mocked Dunn for having tried to get it off the curriculum, and accused him of not having read the book and of overreacting to why it might be a problem. Here's a snippet from what she said:
|
|
by Tim Cushing on (#3F5B3)
House intelligence oversight leader Devin Nunes released his supposed bombshell Friday. The Nunes memo was supposed to contain info showing the FBI had engaged in a questionable, politically-motivated investigation of Trump staff. How this news was supposed to be shocking was anyone's guess. Anyone who has followed the FBI's activities since the days of J. Edgar Hoover already knows the FBI engages in questionable, politically-motivated investigations. The only new twist is the FISA court's involvement and the use of secretive surveillance powers to collect domestic communications. The FBI responded by noting the memo [PDF] contained "material omissions of fact." What's contained in the memo likely provides rhetorical ammo to those who believe Trump and his advisors did nothing wrong during the run-up to the election. But it will only provide limited support. What's contained in the memo are accusations the FBI sought (and obtained) FISA warrants to surveil one-time Trump advisor Carter Page. The FBI -- according to the memo -- used the dubious Christopher Steele dossier to buttress its allegations. It apparently continued to do so even after it knew the Steele dossier had been paid for by the Democratic National Committee. The memo notes this interception was not performed under Title VII, which covers the recently-renewed Section 702 collection powers. This surveillance was performed under Title I -- a more "traditional" FISA process in which the government seeks probable cause-based warrants from the FISA court, much like law enforcement officers seek warrants from magistrate judges. The memo suggests the FBI should have dropped the investigation -- or at least given the FISA court a heads up -- once it became apparent the Steele dossier was politically compromised. But the FBI continued to ask for renewals, and these requests were approved by law enforcement officials Trump and most of the Republican party no longer care for. The list includes James Comey (fired), Andrew McCabe (resigned), Sally Yates (fired), and Rod Rosenstein (who Trump would apparently like to fire). The memo also points out that Christopher Steele was "terminated" (as a source) by the FBI for disclosing his relationship with the agency to the press. Steele also apparently stated he was very interested in preventing Trump from winning the national election. There's also mention of a conflict of interest: a deputy attorney general who worked with those pursuing an investigation of Carter Page was married to a woman who worked for Fusion GPS, the research group paid by the DNC to dig up dirt on Trump. This all seems very damning at first blush. The Nunes memo is the party's attempt to derail the FBI's ongoing investigation of the Trump campaign and its involvement with Russian meddling in the presidential election. But there's a lot missing from the memo. The facts are cherry-picked to present a very one-sided view of the situation. The rebuttal letter [PDF] from Democratic legislators is similarly one-sided. But adding both together, you can almost assemble a complete picture of the FBI's actions. The rebuttal points out Christopher Steele had no idea who was funding his research beyond Fusion GPS. It also points out the dirt-digging mission was originally commissioned by the Washington Free Beacon, a right-leaning DC press entity. It also points out something about the paperwork needed to request a FISA warrant. To secure a renewal, the FBI would have to show it had obtained evidence of value with the previous warrant.
If it can't, it's unlikely the renewal request would be approved by FBI directors and/or US attorneys general. The multiple renewals suggest the FBI had actually obtained enough evidence of Carter Page's illicit dealings with the Russians to sustain an ongoing investigation. Beyond that, there's the fact that Devin Nunes -- despite spending days threatening to release this "damning" memo -- never bothered to view the original documents underlying his assertions of FBI bias. In an interview with Fox News after the memo's release, Nunes admitted he had not read the FBI's warrant applications. So, the assertions are being made with very limited info. Nunes apparently heard the Steele dossier was involved, and that was all he needed to compile a list of reasons to fire current Trump nemesis Robert Mueller... disguised as a complaint about improper surveillance. It's this complaint about abuse of surveillance powers that really chafes. Nunes throttled attempts at Section 702 reform last month and now wants to express his concerns that the FBI and FISA court may not be protecting Americans quite as well as they should. Marcy Wheeler has a long, righteously angry piece at Huffington Post detailing the rank hypocrisy of Nunes' self-serving memo.
|
|
by Tim Cushing on (#3F5B4)
When an idea fails, legislators resurrect it. The problem must not be with the idea, they reason. It must be with the implementation. So it goes in Europe, where the Bulgarian government is trying to push an idea that has demonstrably failed elsewhere on the continent.
|
|
by Leigh Beadon on (#3F5B5)
This week, we've been running a series of posts dealing with discussion moderation, which garnered our top comments on both sides. For insightful, the first place winner is an anonymous commenter taking the opportunity to give Techdirt a tip of the hat:
|
|
by Leigh Beadon on (#3F5B6)
Five Years Ago
This week in 2013, something that's now the norm was fresh and surprising: Netflix released the entire season of its new show House of Cards at once. Something less pleasant was born the same week, with the W3C's first official mention of adding DRM to HTML5. We also saw Alan Cooper sue John Steele and Prenda Law, leading to a bit of a scramble by everyone's favorite law firm. Meanwhile, this was the week that the DMCA exemption for phone unlocking was eliminated, and the legal battle over Barbie and Bratz (the subject of a recent episode of our podcast) finally came to an end.
Ten Years Ago
There was lots of copyright back-and-forth this week in 2008, with U2's manager jumping on the "make the internet pay us!" bandwagon, a fresh flare-up over the copyright status of jokes, an EU court telling ISPs they don't have to hand over downloader names, Swiss officials pushing back against the aggressive tactics of anti-piracy groups, and a judge telling the RIAA (which had recently struggled to explain exactly why copyright damages need to be higher) that it should be fined for bundling downloading lawsuits. Meanwhile, as had been expected, Swedish prosecutors caved to US pressure and took action against The Pirate Bay.
Fifteen Years Ago
This week in 2003, Kazaa pre-empted the music industry's heated race to kill it by filing a lawsuit against record labels for misusing their copyrights. Declan McCullagh was musing about the scary possibility of the DOJ going after file sharers as felons, Business Week was pushing the ol' "don't litigate, educate" line on piracy (which is half right), and record stores were trying to save their future by teaming up with digital distributors. Telemarketers were suing the FTC in an attempt to block its proposed do-not-call list, an internet cafe in the UK was found guilty of piracy, and the format war for the future of disc-bound music was raging despite nobody caring.
|
|
by Tim Cushing on (#3F5B7)
The Blue Lives Matter movement has traveled overseas. Here in the US, we've seen various attempts to criminalize sassing cops, although none of those appear to be working quite as well as those already protected by a raft of extra rights would like. Meanwhile, we had Spain lining itself up for police statesmanship by making it a criminal offense to disrespect police officers. Over in Hong Kong, the police chief -- while still debating whether or not he should offer an apology for his officers' beating of bystanders during a 2014 pro-democracy protest -- has thrown his weight behind criminalization of insults directed at officers.
|
|
by Timothy Geigner on (#3F5B8)
The Super Bowl is here, and this Sunday many of us will bear witness to the spectacle that is million-dollar advertising spots mashed together over four hours with a little bit of football thrown in for intermissions. As we've discussed before, this orgy of revenue for the NFL is, in some part, propped up by the NFL's never-ending lies about just how much protection the trademark it holds on the term "Super Bowl" provides. While the league obviously does have some rights due to the trademark, it often enjoys playing make-believe that its rights extend to essentially total control over the phrase on all mediums and by all persons for all commercial or private interests. This, of course, is not true, and yet a good percentage of the public believes these lies. Why? The NFL, pantheon of sports power though it may be, is not so strong as to be able to single-handedly confuse millions of people into thinking they can't talk about a real-life event whenever they want. No, the NFL has been helped along in this by members of the media who repeat these lies, often in very subtle ways. Ron Coleman of the Likelihood Of Confusion site has a nice write-up publicly shaming a number of these media members, including Lexology's Mitchell Stabbe.
|
|
by Jacob Rogers on (#3F5B9)
Today, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that are being discussed at this event. We've published a bunch of essays this week from the conference, and will continue with more next week. Have you ever wondered why it can be hard to find out what some old paintings look like? Why there seem to be so few pictures of artistic works available from many countries even though they're filled with public sculptures and murals? Or why prices for books and movies can be so wildly different in different countries? The answer is that copyright law is different all over the world, and these differences can make figuring out what to do with these works so difficult or risky that most websites are not willing to have them around at all. This essay talks about a few of these works and why they add a major challenge to content moderation online. To begin, Wikipedia and the Wikimedia Foundation that hosts it have a mission to host freely available educational content, which means that one of the areas that comes up for us quite often when we receive content moderation requests is whether something is truly free or not. This can come up in a bunch of different ways, and I'd like to talk about a few of them, and why they make it quite difficult to figure out what's really available to the public and what's not. The first one is old pictures and manuscripts. It's generally accepted that if a work was published before 1923, then it's old enough that the author's rights have expired and the ability to freely copy, share, and remix the work shouldn't be limited by the law anymore. But that raises a couple questions. First, how do you know when something was published, especially back then? There's a whole swath of old pictures and writings that were prepared before 1923 but may have never been published at all until later, which then requires figuring out a different timing scheme or figuring out when the work was published: a sometimes very difficult affair due to records lost during the World Wars and various upheavals around the world over the last century. For just one example, a dispute about an old passport photo recently came down to whether it was taken in Egypt or Syria during a time when those national borders were very fluid. If it had been taken in Egypt, it would have been given U.S. copyright and protected because it was after 1923, but if it had been taken in Syria at the time, it would not have been protected because that country wasn't extended recognition for copyrights at the time. A second example is works from countries with broad moral rights. All the works on Wikimedia projects that were made recently are dedicated by their authors to the public domain or licensed under free culture licenses like Creative Commons. However, these sorts of promises only work in some countries. There are international copyright treaties that cover a certain agreed-upon set of protections for every country, but many countries add additional rights on top of the treaties, such as what are called moral rights. Moral rights in many countries give the creator the power to rescind a license, and they cannot give up that power no matter how hard they try.
It ends up looking something like this: "I promise that you can use my work forever as long as you give me attribution, and anyone else can reuse it too, and I want this to be irrevocable so that the public can benefit without having to come back to me." And then a couple years later, it's "oh, sorry, I've decided that I changed my mind, just forget my earlier promise." In some places that works, and because of that possibility, people can't always be sure that the creative works being offered to them are reliable. A third problem is pictures of artwork. This one applies, though a bit differently, to both new and old works. With new photos of old works, it's a question of creativity. Copyrights are designed to reward people for their original creativity: you don't get a new "life of the author plus 70 years" of protection for making a photocopy. But in some places, they again go past the international rights agreed upon in the copyright treaties and add extra protections. In this case, many countries offer a couple decades' worth of protection for taking a straight-on 2-D photograph of an old work of art. The Wikimedia Foundation is currently in a lawsuit about this with the Reiss Engelhorn Museum in Germany, where the museum argues that photographs on its website are copyrighted even though the only thing shown in the photo is a public domain painting such as a portrait of Richard Wagner. The other variation of problems with photos of art is photographs of more recent works out in the public. Did you know that in many places, if you're walking in a park and you take a snapshot with a statue in it, you're actually violating someone's copyright? This varies from country to country: some places allow you to photograph artistic buildings but not sculptures or mosaics, other places let you take photographs of anything out in public, and others prohibit photographs of anything artistic even if it's displayed in public. This issue, called freedom of panorama, is one that many Wikimedians are concerned about, and it is currently being debated in the European Parliament, but in the meantime it can lead to very confused expectations about what sorts of things can be photographed, as the answer varies depending on where you are. The difficulty around so many of these types of works is that they put the public at risk. The works on Wikipedia, and works in the public domain or that are freely licensed more generally, are supposed to be free for everyone to use. Copyright is built on a balance that rewards authors and artists for their creativity by letting them have a monopoly on who uses their works and how they're used. But the system has become so strong that even when the monopoly has expired and the creator is long dead, or when the creator wants to give their work away for free, it's extremely difficult for the public to understand what is usable and to use it safely and freely as intended. The public always has to be worried that old records might not be quite accurate, or that creators in many places will simply change their minds no matter how many promises and assurances they provide that they want to make something available for the public good. These kinds of difficulties are one of the reasons why the Wikimedia Foundation made the decision to defer to the volunteer editors. The Wikimedia movement consists of volunteers from all over the world, and they get to decide on the rules for each different language of Wikipedia.
This often helps to avoid conflicts, such as many languages spoken primarily in Europe choosing not to host images that might be allowed under U.S. fair use law, whereas the English-language Wikipedia does allow fair use images. It's difficult for a small company to know all the rules in hundreds of different countries, but individual volunteers from different places can often catch issues and resolve them even where the legal requirements are murky. As just one example, this has actually led the Wikimedia volunteers who deal with photographs to have one of the most detailed policies for photographs of people of any website (and better than many law textbooks). In turn, volunteers handling so many of the content issues means that the Foundation is able to dedicate time from our lawyers to help clarify situations that do present a conflict, such as the Reiss Engelhorn case or the freedom of panorama issues already mentioned. That said, even with efforts from many dedicated people around the world, issues like these international conflicts leave some amount of confusion and conflict. These issues often don't receive as much attention because they're not as large as, say, problems with pirated movies, but they present a more pernicious threat. As companies shy away from dealing with works that might be difficult to research or uncertain as to how the law applies to them, the public domain slowly shrinks over time and we are all poorer for it.

Jacob Rogers is Legal Counsel for the Wikimedia Foundation.
|
|
by Mike Masnick on (#3EYVJ)
Let's start this post off this way: the whole "BDS" movement and questions about Israel are controversial, and people have very, very strong opinions. This post is not about that, and I have no interest in discussing anyone's views on Israel or the BDS movement. This post is about free speech, so if you want to whine or complain about Israel or the BDS movement, find somewhere else to do it. This is not the post for you. This post should be worth discussing on the points in the post itself, and not as part of the larger debate about Israel. Back in December, the very popular New Zealand singer Lorde announced that she was cancelling a concert in Israel after receiving requests to do so from some of her fans who support boycotting Israel.
|
|
by Tim Cushing on (#3EYPV)
The reputation management tactic of filing bogus defamation lawsuits may be slowly coming to an end, but there will be a whole lot of reputational damage to be spread among those involved by the time all is said and done. Richart Ruddie, proprietor of Profile Defenders, filed several lawsuits in multiple states fraudulently seeking court orders for URL delistings. The lawsuits featured fake plaintiffs, fake defendants, and fake admissions of guilt from the fake defendants. Some judges issued judgments without a second thought. Others had second thoughts, but they were identical to their first one. And some found enough evidence of fraud to pass everything on to the US Attorney's office. But Ruddie couldn't do all of this himself. He needed lawyers. And now those lawyers are facing a bar complaint for assisting Ruddie (and possibly others) in fraudulent behavior. Eugene Volokh has more details at the relocated (and paywall-free!) Volokh Conspiracy.
|
|
by Daily Deal on (#3EYKW)
The $70 WaveSound 2.1 headphones give you the freedom to listen to your music when, where, and how you want. The built-in Bluetooth 4.2 is 250% faster and has 10x more bandwidth than Bluetooth 4.0, allowing you to connect to your devices confidently and quickly, making your audio experiences seamless. And if wired listening is more your style, you can do that, too. With 16 hours of playtime, you'll be set for most of the day. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Mike Masnick on (#3EYFF)
We've been following the BMG v. Cox lawsuit from the very beginning, through all its very odd twists and turns, including having a judge in the district court, Liam O'Grady, who made it quite clear that he didn't much care about the internet, and didn't see why it was a problem if people lost their internet access completely based on merely a few allegations of copyright infringement. The 4th Circuit appeals court has now overturned the lower court ruling and sent the case back to the district court for a do-over. While the initial decision was awful (as we discuss below), this new ruling makes a huge mess out of copyright law and will have serious, dangerous, and long-lasting consequences for the internet as a whole. If you don't recall, the case involved BMG suing Cox Communications, though much of the case really hinged on the actions of another company, Rightscorp, which has been trying (and mostly failing) to build a business model around a form of mild copyright trolling. Rather than the aggressive "sue 'em and settle" strategy employed by others, Rightscorp would send DMCA takedowns to ISPs, with a settlement offer, and hope that the ISPs would pass those notices on to subscribers accused of infringing. Cox Communications -- a decently large broadband provider -- made it quite clear to Rightscorp that it did not intend to be a part of its business model, and refused to pass on the settlement letters. Rightscorp started flooding Cox with notices... to the point that Cox decided to effectively just trash all inbound messages from Rightscorp as spam. After all this happened, Rightscorp signed BMG as a client, and then sued Cox, claiming the ISP had violated the DMCA by not kicking users off. What came out during the trial was that Cox basically had a "thirteen strike" policy (some of the earlier strikes involved stopping internet access until you read something and clicked something -- or requiring the user to call in to Cox). What is rarely noted, of course, is that Cox was basically one of the only ISPs to actually have any termination policy for people who used their connections for copyright infringement. Most ISPs (and most copyright lawyers not working for legacy industry interests) believed that the DMCA's requirement for a "repeat infringer policy" was not directed at access providers, but at content hosts, where the issues are much clearer. However, BMG claimed here that Cox violated the DMCA's requirement for a repeat infringer policy -- and the court agreed. Cox was, partly, undone by some pretty bad behavior behind the scenes that seemed to tar it as a "bad actor" and obscure the underlying copyright issues. Even more ridiculous was that Judge O'Grady later argued that Cox should pay the other side's legal fees, because even bringing up the idea that it was protected by safe harbors was "objectively unreasonable." That, itself, was crazy, since tons of copyright experts actually think Cox was correct. On appeal there were two key issues raised by Cox. The main issue was to argue that O'Grady was incorrect and that the DMCA safe harbors covered Cox. The second pertained to the specific jury instructions given to the jurors in the case. The new ruling unfortunately upholds the ruling that Cox is not covered by the DMCA's safe harbors, but does say that the instructions given to the jury were incorrect. Of course, it then proceeds to make a huge muddle of what copyright law says in the process.
But we'll get to that.
The Impact on Safe Harbors
Let's start with the safe harbors part of the ruling, which is what most people are focusing on. As the court notes, Cox (correctly, in my view) pointed out that even if it was subject to a repeat infringer policy, that should cover actual infringers, not just those accused of infringing. After all, it's not like there aren't tons upon tons of examples of false copyright infringement accusations making the rounds, and that's doubly true when it comes to trolling operations. If the rule is that people can lose all access to the internet based solely on unproven accusations of infringement, that seems like a huge problem. But, here, the court says that it's the correct way to read the statute:
|
|
by Karl Bode on (#3EXYB)
Earlier this month, AT&T cancelled a smartphone sales agreement with Huawei just moments before it was to be unveiled at CES. Why? Several members of the Senate and House Intelligence Committees had crafted an unpublished memo claiming that Huawei was spying for the Chinese government, and pressured both the FCC and carriers to blacklist the company. AT&T, a stalwart partner in the United States' own surveillance apparatus, was quick to comply, in part because it's attempting to get regulators to sign off on its $86 billion acquisition of media juggernaut Time Warner. But Verizon has also now scrapped its own plans to sell the company's smartphones based on those same ambiguous concerns:
|
|
by Tim Cushing on (#3EXHA)
Drivers sent tickets by New Miami, Ohio speed cameras will be getting a refund. The state appeals court has upheld the ruling handed down by the lower court last spring. At stake is $3 million in fines, illegally obtained by the town.
|
|
by Timothy Geigner on (#3EWV2)
As readers of this site will know, once-venerated gaming giant Atari long ago reduced itself to an intellectual property troll mostly seeking to siphon money away from companies that actually produce things. The fall of one of gaming's historical players is both disappointing and sad, given just how much love and nostalgia there is for its classic games. It was just that nostalgia that likely led Nestle to craft an advertisement in Europe encouraging buyers of candy to "breakout" KitKats, which included imagery of the candy in a simulation of a game of Breakout. For this, Atari sued over both trademark and copyright infringement, stating for the latter claim that the video reproduction of a mock-game that kind of looks like Breakout constituted copyright infringement. As we discussed in that original post, both claims are patently absurd. Nestle and Atari are not competitors, and anyone with a working frontal lobe will understand that the ad was a mere homage to a classic game made decades ago. If the products aren't competing, and if there is no real potential for public confusion, there is no trademark infringement. As for the copyright claim, the expression in the homage was markedly different from Atari's original game, and there's the little fact that Nestle didn't actually make a game to begin with. They mocked up a video. Nothing in there is copyright infringement. It was enough that I'm certain some of our readers wondered why Atari would do something like this to begin with. The answer is the recent news that a settlement has been reached in the lawsuit, and it was almost certainly that settlement that Atari was fishing for all along.
|
|
by Karl Bode on (#3EWE9)
In the wake of the FCC's repeal of federal net neutrality rules, countless states have rushed to create their own protections. Numerous states from Rhode Island to Washington State are considering new net neutrality legislation, while other states (like Wyoming and New York) are modifying state procurement policies to block net neutrality-violating ISPs from securing state contracts. These states are proceeding with these efforts despite an FCC attempt to "pre-empt" (read: ban) states from stepping in and protecting consumers, something directly lobbied for by both Verizon and Comcast. One of two California net neutrality laws, SB-460, passed the state Senate 21-12, and will now head to the state Assembly:
|
|
by Alex Feerst on (#3EW69)
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one. When people express free speech-based concerns about content removal by platforms, one type of suggestion they generally offer is -- increase transparency. Tell us (on a website or in a report or with an informative "tombstone" left at the URL where the content used to be) details about what content was removed. This could happen lots of different ways, voluntarily or not, by law or industry standard or social norms. The content may come down, but at least we'll have a record and some insight into what happened, at whose request, and why. In light of public discussions about platform transparency, especially in the past year, this post offers a few practical thoughts about transparency by online UGC platforms. First, some of the challenges platforms face in figuring out how to be transparent with users and the public about their content moderation processes. Second, the industry practice of transparency reports and what might be done to make them as useful as possible.
Content Moderation Processes & Decisions
So, why not be radically transparent and say everything? Especially if you're providing a service used by a substantial chunk of the public and have nothing to hide. Just post all takedown requests in their entirety and all correspondence with people asking you to modify or remove content. The best place to start answering this is by mentioning some of the incentives a platform faces here and the legitimate reasons they might say less than everything (leaving aside self-interested reasons like avoiding outside scrutiny and saving embarrassment over shortcomings such as arguably inconsistent application of moderation rules or a deficient process for creating them). First, transparency is sometimes in tension with the privacy of not just users of a service, but any person who winds up the subject of UGC. Just as the public, users, regulators, and academics are asking platforms to increase transparency, the same groups have made equally clear that platforms should take people's privacy rights seriously. The legal and public relations risks of sharing information in a way that abridges someone's privacy are often uncertain and potentially large. This does not mean they cannot be outweighed by transparency values, but I think in order to weigh them properly, this tension has to be acknowledged and thought through. In particular, however anonymized a given data set is, the risks of de-anonymization increase with time as better technologies come to exist. Today's anonymous data set could easily be tomorrow's repository of personally identifiable information, and platforms are acting reasonably when choosing to safeguard these future and contingent rights for people by sometimes erring on the side of opacity around anything that touches user information. Second, in some cases, publicizing detailed information about a particular moderation decision risks maintaining or intensifying the harm that moderation was intended to stop or lessen.
If a piece of content is removed because it violates someone's privacy, then publicizing information about that takedown or redaction risks continuing the harm if the record is not carefully worded to exclude the private information. Or, in cases of harassment, it may provide information to the harasser or the public (or the harasser's followers, who might choose to join in) for that harassment to continue. In some cases, the information can be described at a sufficiently high level of generality to avoid harm (e.g., "a private person's home address was published and removed" or "pictures of a journalist's children were posted and removed"). In other cases, it may be hard or impossible (e.g., "an executive at small company X was accused of embezzling by an anonymous user"). Of course, generalizing at too high a level may frustrate those seeking greater transparency as not much better than not releasing the information at all. Finally, in some cases publicizing the details of a moderation team's script or playbook can make the platform's rules easier to break or hack by bad faith actors. I don't think these are sufficient reason to perpetuate existing confidentiality norms. But, if platform companies are being asked or ordered to increase the amount of public information about content moderation and plan to do so, they may as well try to proceed in a way that will account for these issues.
Transparency Reports
Short of the granular information discussed above, many UGC platforms already issue regular transparency reports. Increasing expectations or commitments about what should be included in transparency reports could wind up an important way to move confidentiality norms while also ensuring that the information released is structured and meaningful. With some variation, I've found that the majority of UGC platform transparency reports cover information across two axes. The two main types of requests are requests to remove or alter content and requests for information. And then, within each of those categories, whether a given request comes from a private person or a government actor. A greater push for transparency might mean adding categories to these reports with more detail about the content of requests and the procedural steps taken along the way, rather than just the usually binary output of "action taken" or "no action taken" that one finds in these reports -- such as the law or platform rule that is the basis for removal, or more detail about what relevant information was taken into account (such as "this post was especially newsworthy because it said ..." or "this person has been connected with hate speech on [other platform]"). As pressure to proactively filter platform content increases from legislators in places like Europe and Hollywood, we may want to add a category for removals that happened based on a content platform's own proactive efforts, rather than a complaint. Nevertheless, transparency reports as they are currently done raise questions about how to meaningfully interpret them and what can be done to improve their usefulness. A key question I think we need to address moving forward: are the various platform companies' transparency reports apples-to-apples in their categories?
Being able to someday answer yes would involve greater consistency in terms across the industry (e.g., are they using similar terms to mean similar things, like "hate speech" or "doxxing," irrespective of their potentially differing policies about those types of content). Relatedly, is there a consistent framework for classifying and coding requests received by each company? Doing more to articulate and standardize coding, though maybe unexciting, will be crucial infrastructure for providing meaningful classes and denominators for what types of actions people are asking platform companies to take and on what grounds. Questions here include: is there relative consistency in how they each code a particular request or type of action taken in response? For example, a demand email with some elements of a DMCA notice, a threat of suit based on trademark infringement, an allegation of violation of rules/TOS based on harassment, and an allegation that the poster has acted in breach of a private confidentiality agreement? What if a user makes a modification to their content of their own volition based on a DMCA or other request? What if a DMCA notice is received for one copy of a work posted by a user account, but in investigating, a content moderator finds 10 more works that they believe should be taken down based on their subjective judgment of the existence of possible red flag knowledge? Another question is how to ensure the universe of reporting entities is complete. Are we missing some types of companies and as a result lacking information on what is out there? The first type that comes to mind is nominally traditional online publishers, like the New York Times or Buzzfeed, who also host substantial amounts of UGC, even if it is not their main line of business. Although these companies focus on their identity as publishers, they are also platforms for their own and others' content. (Section 3 of the Times' Terms of Service spells out its UGC policy, and Buzzfeed's Community Brand Guidelines explain things such as the fact that a post with "an overt political or commercial agenda" will likely be deleted.) Should the Times publish a report on which comments they remove, how many, and why? Should they provide (voluntarily, by virtue of industry best practices, or by legal obligation) the same level of transparency major platforms already provide? If not, why not? (Another interesting question -- based on what we've learned about the benefits of transparency into the processes by which online content is published or removed, should publisher/platforms perhaps be encouraged to also provide greater transparency into non-UGC content that is removed, altered, or never published by virtue of what has traditionally been considered editorial purview, such as a controversial story that is spiked at the last minute due to a legal threat or factual allegations removed from a story for the same reason? And over time, we can expect that more companies may exist that cannot be strictly classified as publisher or platform, but which should nevertheless be expected to be transparent about their content practices.) Without thinking through these questions, we may lack a full data set of online expression and lose our ability to aggregate useful information about practices across types of content environments before we've started.

Alex Feerst is the Head of Legal at Medium
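The two-axis breakdown described in the essay -- request type crossed with requester type -- can be made concrete with a small, purely illustrative sketch. The category names, fields, and helper function below are invented for the example and do not reflect any platform's actual reporting schema.

# Hypothetical sketch of coding transparency-report entries along two axes:
# request type (content removal/alteration vs. information request) and
# requester type (private person vs. government actor). Labels are illustrative.
from dataclasses import dataclass
from enum import Enum
from collections import Counter

class RequestType(Enum):
    CONTENT_REMOVAL = "content_removal_or_alteration"
    INFORMATION = "information_request"

class Requester(Enum):
    PRIVATE = "private_person"
    GOVERNMENT = "government_actor"

@dataclass
class Request:
    request_type: RequestType
    requester: Requester
    basis: str          # e.g. "DMCA" or "TOS: harassment" -- illustrative labels
    action_taken: bool  # the usual binary output found in today's reports

def tally(requests):
    """Count requests per (request type, requester) cell of the report."""
    return Counter((r.request_type, r.requester) for r in requests)

# Example usage with two made-up entries.
report = tally([
    Request(RequestType.CONTENT_REMOVAL, Requester.PRIVATE, "DMCA", True),
    Request(RequestType.INFORMATION, Requester.GOVERNMENT, "subpoena", False),
])
print(report)

A shared schema along these lines is what would make reports from different companies comparable; the harder work, as the essay notes, is agreeing on what the labels mean and how edge cases get coded.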
|
|
by Tim Cushing on (#3EVX5)
Back in May of last year, a New York federal court tossed two lawsuits from plaintiffs attempting to hold social media companies responsible for terrorist attacks. Cohen v. Facebook and Force v. Facebook were both booted for failing to state a claim, pointing out the obvious: the fact that terrorists use social media to recruit and communicate does not somehow turn social media platforms into material support for terrorism.Both lawsuits applied novel legal theories to internet communications in hopes of dodging the obvious problems posed by Section 230 immunity. None of those were entertained by the New York court, resulting in dismissals without prejudice for both cases.Rather than kick their case up the ladder to the Appeals Court, the Force plaintiffs tried to get a second swing in for free. The plaintiffs filed two motions -- one asking the judge to reconsider its dismissal ruling and the other for permission to file a second amended complaint.As Eric Goldman points out on his blog, the judge's decision to address both of these filings at once makes for difficult reading. The end result is a denial of both motions, but the trip there is bumpy and somewhat incoherent.Once the court moves past the plaintiffs' attempt to skirt Section 230 by re-imagining its lawsuit as an extraterritorial claim, it gets directly to the matter at hand: the application of Section 230 immunity to the lawsuit's claims. The plaintiffs performed a hasty re-imagining of their arguments in hopes of dodging the inevitable immunity defense, but the judge has no time for bogus arguments raised hastily in the face of dismissal.From the decision [PDF]:
|
|
by Timothy Geigner on (#3EVQP)
Every once in a while, you'll come across stories about one government or another looking to censor or discourage pornography online, typically through outright censorship or some sort of taxation. While most of these stories come from countries that have religious reasoning behind censorship of speech, more secular countries in Europe have also entertained the idea of a tax or license for viewing naughty things online. Occasionally, a state or local government here in America will try something similar before those efforts run face first into the First Amendment. It should be noted, however, that any and all implementations of this type of censorship or taxation of speech have failed spectacularly with a truly obscene amount of collateral damage as a result. Not that any of that keeps some politicians from trying, it seems.The latest evidence of that unfortunate persistence would be from the great state of Virginia, where the General Assembly will be entertaining legislation to make the state the toll booth operator of internet porn. The bill (which you can see here) was introduced by Virginia House member David LaRock (and there's a Senate version introduced by State Senator Richard Black).
|
|
by Daily Deal on (#3EVQQ)
With 8 courses (50+ Hours), the Amazon Web Services Certification Training Mega Bundle is your one-stop to learn all about cloud computing. The courses cover S3, Route 53, EC2, VPC, Lambda and more. You will learn how cloud computing is redefining the rules of IT architecture and how to design, plan, and scale AWS Cloud implementations with best practices recommended by Amazon. The AWS bundle is on sale for $69.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Tim Cushing on (#3EVE7)
In the face of "extremist" content and other internet nasties, British PM Theresa May keeps doing something. That something is telling social media companies to do something. Move fast and break speech. Nerd harder. Do whatever isn't working well already, but with more people and processing power.May has been shifting her anti-speech, anti-social media tirades towards the Orwellian in recent months. Her speeches and platform stances have tried to make direct government control of internet communications sound like a gift to the unwashed masses. May's desire to bend US social media companies to the UK's laws has been presented as nothing more than as a "balancing" of freedom of speech against some imagined right to go through life without being overly troubled by social media posts.Then there's the terrorism. Terrorists use social media platforms to connect with like-minded people. May would like this to stop. She's not sure how this should be accomplished but she's completely certain smart people at tech companies could bring an end to world terrorism with a couple of well-placed filters. So sure of this is May that she wants "extremist" content classified, located, and removed within two hours of its posting.May's crusade against logic and reality continues with her comments at the Davos Conference. Her planned speech/presentation contains more of her predictable demand that everyone who isn't a UK government agency needs to start doing things better and faster.Although she is expected to praise the potential of technology to "transform lives", she will also call on social media companies to do much more to stop allowing content that promotes terror, extremism and child abuse.
|
|
by Karl Bode on (#3ETWP)
A few years back, frustration at John Deere's draconian tractor DRM resulted in a grassroots tech movement. John Deere's decision to implement a lockdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM and the company's EULA prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.The John Deere fiasco resulted in the push for a new "right to repair" law in Nebraska. This push then quickly spread to multiple other states, driven in part by consumer repair monopolization efforts by other companies including Apple, Sony and Microsoft. Lobbyists for these companies quickly got to work trying to claim that allowing consumers to repair products they own (or take them to third-party repair shops) would endanger public safety. Apple went so far as to argue that if Nebraska passed such a law, it would become a dangerous "mecca for hackers" and other rabble rousers.In the wake of Apple's recent iPhone battery PR kerfuffle (in which it claimed it throttled the performance of older iPhones to protect device integrity from dwindling battery performance), longer than normal repair waits have resulted in renewed interest in such laws. A new bill that would make it easier for consumers to repair their own electronics or utilize third-party repair shops is quickly winding its way through the Washington state legislature. That bill would not only protect the consumers' right to repair, but prevent the use of batteries that are difficult or impossible to replace:
|
|
by Glyn Moody on (#3ETG7)
It will probably come as zero surprise to Techdirt readers to learn the following:
|
|
by Timothy Geigner on (#3EST6)
For some time, we've been following an odd trademark dispute between the city of Portland and a small brewery, Old Town Brewing, all over a famous city sign featuring a leaping stag. Old Town has a trademark for the image of the sign and uses that imagery for its business and beer labels. Portland, strangely, has pursued a trademark for the very same market and has attempted to invalidate Old Town's mark for the purpose of licensing the image to macro-breweries to fill the municipal coffers. What I'm sure city officials thought would be the quiet bullying of a local company without the breadth of legal resources Portland has at its disposal has instead ballooned into national coverage of that very same fuckery, with local industry groups rushing to the brewery's aid.The end result of all of this has been several months of Portland officials looking comically bad in the eyes of the public. Of all places, the people of Portland were never going to sit by and let their city run roughshod over a local microbrewery just so that the Budweisers of the world could plaster local iconography over thin, metal cans of pilsner. And now, despite sticking their chins out in response to all of this backlash for these past few months, it seems that the city has finally decided to cave in.
|
|
by Mike Masnick on (#3ESEF)
Karma works in funny ways sometimes. Over the past few years, we covered how actor James Woods filed a totally ridiculous defamation lawsuit against an anonymous internet troll who made some hyperbolic statements about Woods -- statements that were little different than what Woods had said about others. The case never went anywhere... because the defendant died. But Woods gloated over the guy's death, which just confirmed what a horrible, horrible person Woods appears to be.So, while we found the karmic retribution of someone else then suing Woods for defamation on similarly flimsy claims noteworthy, we still pointed out just how weak the case was and noted that, as much of an asshole as Woods was in his case against his internet troll, he still deserved to prevail in the case against him. And prevail he has. The case has been tossed out on summary judgment. The opinion also details Woods continuing to do the assholish move of trying to avoid being served (his lawyers refused to give an address where he could be served, and Woods refused to have his lawyer waive service requirements -- which is usually a formality in these kinds of things). Not surprisingly, the judge is not impressed by Woods hiding out from the process server:
|
|
by Kevin Bankston and Liz Woolery on (#3ES3C)
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.In the wake of ongoing concerns about online harassment and harmful content, continued terrorist threats, changing hate speech laws, and the ever-growing user bases of major social media platforms, tech companies are under more pressure than ever before with respect to how they treat content on their platforms—and often that pressure is coming from different directions. Companies are being pushed hard by governments and many users to be more aggressive in their moderation of content, to remove more content and to remove it faster, yet are also consistently coming under fire for taking down too much content or lacking adequate transparency and accountability around their censorship measures. Some on the right like Steve Bannon and FCC Chairman Ajit Pai have complained that social media platforms are pushing a liberal agenda via their content moderation efforts, while others on the left are calling for those same platforms to take down more extremist speech, and free expression advocates are deeply concerned that companies' content rules are so broad as to impact legitimate, valuable speech, or that overzealous attempts to enforce those rules are accidentally causing collateral damage to wholly unobjectionable speech.Meanwhile, there is a lot of confusion about what exactly the companies are doing with respect to content moderation. The few publicly available insights into these processes, mostly from leaked internal documents, reveal bizarrely idiosyncratic rule sets that could benefit from greater transparency and scrutiny, especially to guard against discriminatory impacts on oft-marginalized communities. The question of how to address that need for transparency, however, is difficult. There is a clear need for hard data about specific company practices and policies on content moderation, but what does that look like? What qualitative and quantitative data would be most valuable? What numbers should be reported? And what is the most accessible and meaningful way to report this information?Part of the answer to these questions can be found by looking to the growing field of transparency reporting by internet companies. The most common kind of transparency report that companies voluntarily publish gives detailed numbers about government demands for information about the companies’ users—showing, for example, how many requests were received, from what countries or jurisdictions, what kind of data was requested, and whether they were complied with or not. As reflected in this history of the practice published by our organization, New America’s Open Technology Institute (OTI), transparency reporting about government demands for data has exploded over the past few years, so much so that projects like the Transparency Reporting Toolkit by OTI and Harvard’s Berkman-Klein Center for Internet & Society have emerged to try and define consistent standards and best practices for such reporting. 
Meanwhile, a decent number of companies have also started publishing reports about the legal demands they receive for the takedown of content, whether copyright-based or otherwise.However, almost no one is publishing data about what we're talking about here: voluntary takedowns of content by companies based on their own terms of service (TOS). Yet especially now, as private censorship gets even more aggressive, the need for transparency also increases. This need has led to calls from a variety of corners for companies to report on content moderation. For example, a working group of the Freedom Online Coalition, composed of representatives from industry, civil society, academia, and government, called for meaningful transparency about companies' content takedown efforts, complaining that "there is very little transparency" around TOS enforcement mechanisms. The 2015 Ranking Digital Rights Corporate Accountability Index found that every company surveyed received a failing grade with respect to reporting on TOS-based takedowns; the 2017 Index findings fared only slightly better. Finally, David Kaye, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, called for companies to "disclose their policies and actions that implicate freedom of expression." Specifically, he observed that "there are … gaps in corporate disclosure of statistics concerning volume, frequency and types of request for content removals and user data, whether because of State-imposed restrictions or internal policy decisions."The benefits to companies issuing such transparency reports around their content moderation activities would be significant: For those companies under pressure to "do something" about problematic speech online, this is an opportunity to outline the lengths to which they have gone to do just that; for companies under fire for "not doing enough," a transparency report would help them express the size and complexity of the problems they are addressing, and explain that there is no magic artificial intelligence wand they can wave and make online extremism and harassment disappear; and finally, public disclosure about content moderation and terms of service practices will go a long way toward building trust with users—a trust that has crumbled in recent years. Putting aside the benefit to companies, though, there is the even more significant need of policymakers and the public. Before we can have an intelligent conversation about hate speech, terrorist propaganda, or other worrisome content online, or formulate fact-based policies about how to address that content, we need hard data about the breadth and depth of those problems, and about the platforms' current efforts to solve those problems.While there have been calls for publication of such information, there has been little specificity with respect to what exactly should be published. No doubt this is due, in great part, to the opacity of individual companies' content moderation policies and processes: It is difficult to identify specific data that would be useful without knowing what data is available in the first place. Anecdotes and snippets of information from companies like Automattic and Twitter offer a starting point for considering what information would be most meaningful and valuable. Facebook has said they are entering a new era of transparency for the platform. Twitter has published some data about content removed for violating its TOS, Google followed suit for some of the content removed from YouTube, and Microsoft has published data on "revenge porn" removals. While each of these examples is a step in the right direction, what we need is a consistent push across the sector for clear and comprehensive reporting on TOS-based takedowns.Looking to the example of existing reports about legally-mandated takedowns, data that shows the scope and volume of content removals, account removals, and other forms of account or content interference/flagging would be a logical starting point. Information about content that has been flagged for removal by a government actor—such as the U.K.'s Counter Terrorism Internet Referral Unit, which was granted "super flagger" status on YouTube, allowing the agency to flag content in bulk—should also be included, to guard against undue government pressure to censor. More granular information, such as the number of takedowns in particular categories of content (whether sexual content, harassment, extremist speech, etc.), or specification of the particular term of service violated by each piece of taken-down content, would provide even more meaningful transparency. This kind of quantitative data (i.e., numbers and percentages) would be valuable on its own, but would be even more helpful if paired with qualitative data to shed more light on the platforms' opaque content moderation practices and tell users a clear story about how those processes actually work, using compelling anecdotes and examples.
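To illustrate what that quantitative reporting might boil down to in practice, here is a minimal, hypothetical sketch of aggregating TOS-takedown records into the counts and percentages a report could publish. The category names, records, and "flagged_by" values are invented for illustration and do not reflect any platform's actual taxonomy.
```python
# Hypothetical sketch: raw takedown records aggregated into report-style figures.
from collections import Counter

takedowns = [
    {"category": "harassment", "flagged_by": "user"},
    {"category": "extremist speech", "flagged_by": "government"},
    {"category": "harassment", "flagged_by": "user"},
    {"category": "sexual content", "flagged_by": "platform"},
]

by_category = Counter(record["category"] for record in takedowns)
total = sum(by_category.values())

for category, count in by_category.most_common():
    print(f"{category}: {count} removals ({count / total:.0%} of total)")

# Counting who flagged the content separately helps surface government pressure,
# such as bulk referrals from a "super flagger" program.
by_flagger = Counter(record["flagged_by"] for record in takedowns)
print(dict(by_flagger))
```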
As has already and often happened with existing transparency reports, this data will help keep companies accountable. Few companies will want to demonstrably be the most or least aggressive censor, and anomalous data such as huge spikes around particular types of content will be called out and questioned by one stakeholder group or another. It will also help ensure that overreaching government pressure to take down more content is recognized and pushed back on, just as in current reporting it has helped identify and put pressure on countries making outsized demands for users' information. And most importantly, it will help drive policy proposals that are based on facts and figures rather than on emotional pleas or irrational fears—policies that hopefully will help make the internet a safer space for a range of communities while also better protecting free expression.Unquestionably, the major platforms have become our biggest online gatekeepers when it comes to what we can and cannot say. Whether we want them to have that power or not, and whether we want them to use more or less of that power in regard to this or that type of speech, are questions we simply cannot answer until we have a complete picture of how they are using that power. Transparency reporting is our first and best tool for gaining that insight.Kevin Bankston is the Director of the Open Technology Institute at New America. Liz Woolery is Senior Policy Analyst at the Open Technology Institute at New America.
|
|
by Tim Cushing on (#3ERW3)
ICE is finally getting that nationwide license plate reader database it's been lusting after for several years. The DHS announced plans for a nationwide database in 2014, but decided to rein that idea in after a bit of backlash. The post-Snowden political climate made many domestic mass surveillance plans untenable, if not completely unpalatable.Times have changed. The new team in the White House doesn't care how much domestic surveillance it engages in as long as it might aid in rooting out foreign immigrants. The first move was the DHS's updated Privacy Impact Assessment on license plate readers -- delivered late last year -- which came to the conclusion that any privacy violations were minimal compared to the national security net benefits.The last step has been finalized, as Russell Brandom reports for The Verge.
|
|
by Mike Masnick on (#3ERQB)
As I've occasionally mentioned in the past, my undergraduate studies were in (of all things) "industrial and labor relations," which involved many, many courses of study on the history of unions, collective bargaining and the economics around such things. I tend to have a fairly nuanced view of unionizing that I won't get into here, other than to note that a big part of the reason why unions get a bad name is that they take indefensible positions that they think will "protect" their members, but which actually are long term suicidal. This is one of those stories. Reports are coming out that as the Teamsters are entering negotiations on a new contract with shipping giant UPS, their demands include a ban on both drone deliveries and on the use of autonomous vehicles. These are, not surprisingly, both technologies that UPS has been experimenting with lately (as has nearly every other delivery company).You can understand the short term thinking here, of course: UPS drivers see both of those options as potential "competition" that would decrease the number of drivers and potentially cause many to lose their jobs. And that might be true (though, it also might not be true as we'll discuss below). But, at the very least, demanding that the company that employs you directly choose not to invest in the technologies of the future is demanding that a company commit suicide -- in which case all those jobs for drivers would likely be eliminated anyway. While there are obviously a lot more variables at work here, it's not hard to see how a competing delivery company -- whether Fedex, the US Postal Service, Amazon or someone else entirely -- could get drone/driverless car delivery right, and suddenly UPS's service is seen as slower, more expensive and less efficient in many cases. If that's the case, UPS would likely have to lay off tons of workers anyway.The other key point: the idea that these technologies are simply going to destroy all the jobs is almost certainly highly overstated. They very likely will change the nature of jobs, but not eliminate them. Professor James Bessen has been doing lots of research on this for years, and has found that in areas of heavy automation, jobs often increase (though they may be changed). That links to an academic paper he wrote, but he also wrote a more general-audience piece for the Atlantic on what he calls the automation paradox. As Bessen explains:
|
|
by Daily Deal on (#3ERQC)
Nothing makes a shower or bath experience complete like your favorite podcast or playlist. With the waterproof FresheTech Splash Tunes Bluetooth Shower Speaker, you can hit play, skip songs, adjust the volume, take phone calls, and more. Just suction cup it to any surface and you'll always have your tunes within arm's reach. It's on sale for $19.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Cathy Gellis on (#3ERFB)
Never mind all the other reasons Deputy Attorney General Rod Rosenstein's name has been in the news lately... this post is about his comments at the State of the Net conference in DC on Monday. In particular: his comments on encryption backdoors.As he and so many other government officials have before, he continued to press for encryption backdoors, as if it were possible to have a backdoor and a functioning encryption system. He allowed that the government would not itself need to have the backdoor key; it could simply be a company holding onto it, he said, as if this qualification would lay all concerns to rest.But it does not, and so near the end of his talk I asked the question, "What is a company to do if it suffers a data breach and the only thing compromised is the encryption key it was holding onto?"There were several concerns reflected in this question. One relates to what the poor company is to do. It's bad enough when they experience a data breach and user information is compromised. Not only does a data breach undermine a company's relationship with its users, but, recognizing how serious this problem is, authorities are increasingly developing policy instructing companies on how they are to respond to such a situation, and it can expose the company to significant legal liability if it does not comport with these requirements.But if an encryption key is taken it is so much more than basic user information, financial details, or even the pool of potentially rich and varied data related to the user's interactions with the company that is at risk. Rather, it is every single bit of information the user has ever depended on the encryption system to secure that stands to be compromised. What is the appropriate response of a company whose data breach has now stripped its users of all the protection they depended on for all this data? How can it even begin to try to mitigate the resulting harm? Just what would government officials, who required the company to keep this backdoor key, now propose it do? Particularly if the government is going to force companies to be in this position of holding onto these keys, these answers are something they are going to need to know if they are going to be able to afford to be in the encryption business at all.Which leads to the other idea I was hoping the question would capture: that encryption policy and cybersecurity policy are not two distinct subjects. They interrelate. So when government officials worry about what bad actors do, as Rosenstein's comments reflected, it can't lead to the reflexive demand that encryption be weakened simply because, as they reason, bad actors use encryption. Not when the same officials are also worried about bad actors breaching systems, because this sort of weakened encryption so significantly raises the cost of these breaches (as well as potentially makes them easier).Unfortunately Rosenstein had no good answer. There was lots of equivocation punctuated with the assertion that experts had assured him that it was feasible to create backdoors and keep them safe. Time ran out before anyone could ask the follow-up question of exactly who were these mysterious experts giving him this assurance, especially in light of so many other experts agreeing that such a solution is not possible, but perhaps this answer is something Senator Wyden can find out...
|
|
by Karl Bode on (#3EQXH)
You'll recall that the FCC ignored the public, the people who built the internet, and all objective data as it rushed to repeal net neutrality at Verizon, Comcast and AT&T's behest. Things got so absurd during the proceeding, the FCC at one point was directing reporters who had questions regarding the FCC's shaky justifications to telecom industry lobbyists, who were more than happy to molest data until it "proved" FCC assertions on this front (most notably the false claim that net neutrality killed sector investment):
|
|
by Tim Cushing on (#3EQHM)
The UK's mass surveillance programs haven't been treated kindly by the passing years (2013-onward). Ever since Snowden began dumping details on GCHQ surveillance, legal challenges to the lawfulness of UK bulk surveillance have been flying into courtrooms. More amazingly, they've been coming out the other side victorious.In 2015, a UK tribunal ruled GCHQ had conducted illegal surveillance and ordered it to destroy intercepted communications between detainees and their legal reps. In 2016, the UK tribunal declared GCHQ's bulk collection of communications metadata illegal. However, the tribunal did not order destruction of this collection, meaning GCHQ is likely still making use of illegally-collected metadata.A second loss in 2016 -- this time at the hands of the EU Court of Justice -- saw GCHQ's collection of European communications declared illegal due to the "indiscriminate" (untargeted) nature of the collection process. The UK government appealed this decision, taking the ball back to its home court. And, again, it has been denied a victory.
|
|
by Glyn Moody on (#3EPSB)
Last November we reported on the legal opinion of one of the Advocates General that advises the EU's top court, the Court of Justice of the European Union (CJEU). It concerned yet another case brought by the data protection activist and lawyer Max Schrems against Facebook, which he claims does not follow EU privacy laws properly. There were two issues: whether Schrems could litigate against Facebook in his home country, Austria, and whether he could join with 25,000 people to bring a class action against the company. The Advocate General said "yes" to the first, and "no" to the second, and in its definitive ruling, the CJEU has agreed with both of those views (pdf). Here's what Schrems has to say on the judgment (pdf):
|
|
by Tim Cushing on (#3EPD1)
When it comes to the Fifth Amendment, you're better off with a password or PIN securing your device, rather than your fingerprint. Cellphone manufacturers introduced fingerprint readers in an effort to protect users from thieves or other unauthorized access. But fingerprint readers do nothing at all to prevent law enforcement from using a person's own fingerprints to unlock seized devices.The US Supreme Court hasn't seen a case involving compelled production of fingerprints land on its desk yet and there's very little in the way of federal court decisions to provide guidance. What we have to work with is scattered state court decisions and the implicit understanding that no matter how judges rule, a refusal to turn over a fingerprint or a password is little more than a way to add years to an eventual sentence.The Minnesota Supreme Court has issued the final word on fingerprints and the Fifth Amendment for state residents. In upholding the appeals court ruling, the Supreme Court says a fingerprint isn't testimonial, even if it results in the production of evidence used against the defendant. (h/t FourthAmendment.com)From the ruling [PDF]:
|
|
by Leigh Beadon on (#3EP50)
Last week, Mike sparked lots of conversation with his post about rethinking the marketplace of ideas without losing sight of the importance of the fundamental principles of free speech. Naturally, there's plenty more to discuss on that topic, so this week we're joined by Buzzfeed general counsel Nabiha Syed — whose recent article in the Yale Law Journal, Real Talk About Fake News, offered a thorough and insightful look at free speech online — to try to cut through all the simplistic takes on free speech and talk about where things are going.Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
|
|
by Kate Klonick on (#3ENYJ)
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.The first few years of the 21st century saw the start of a number of companies whose model of making user-generated content easily amplified and distributable continues to resonate today. Facebook was founded in 2004, YouTube began in 2005 and Twitter became an overnight sensation in 2006. In their short history, countless books (and movies and plays) have been devoted to the rapid rise of these companies; their impact on global commerce, politics and culture; and their financial structure and corporate governance. But as Eric Goldman points out in his essay for this conference, surprisingly little has been revealed about how these sites manage and moderate the user-generated content that is the foundation for their success.Transparency around the mechanics of content moderation is one part of understanding what exactly is happening when sites decide to keep up or take down certain types of content in keeping with the community standards or terms of service. How does material get flagged? What happens to it once it's reported? How is content reviewed and who reviews it? What does takedown look like? Who supervises the moderators?But more important than understanding the intricacies of the system is understanding the history of how it was developed. This gives us not only important context for the mechanics of content moderation, but a more comprehensive idea of how policy was created in the first place, so as to know how best to change it in the future.At each company, there were various leaders who were charged with developing the content moderation policies of the site. At YouTube (Google) this was Nicole Wong. At Facebook, this was Jud Hoffman and Dave and Charlotte Willner. Though it seems basic now, the development of content moderation policies was not a foregone conclusion. Early on, many new Internet corporations thought of themselves as software companies—they did not think about "the lingering effects of speech as part of what they were doing."As Jeff Rosen wrote in one of the first accounts of content moderation's history, while "the Web might seem like a free-speech panacea: it has given anyone with Internet access the potential to reach a global audience. But though technology enthusiasts often celebrate the raucous explosion of Web speech, there is less focus on how the Internet is actually regulated, and by whom. As more and more speech migrates online, to blogs and social-networking sites and the like, the ultimate power to decide who has an opportunity to be heard, and what we may say, lies increasingly with Internet service providers, search engines and other Internet companies like Google, Yahoo, AOL, Facebook and even eBay."Wong, Hoffman and the Willners all provide histories of the hard questions dealt with at each corporation related to speech. For instance, many problems existed simply because flagged content lacked necessary context in order to apply a given rule. This was often the case with online bullying. As Hoffman described, "There is a traditional definition of bullying—a difference in social power between two people, a history of contact—there are elements. 
But when you get a report of bullying, you just don't know. You have no access to those things. So you have to decide whether you're going to assume the existence of some of those things or assume away the existence of some of those things. Ultimately what we generally decided on was, 'if you tell us that this is about you and you don't like it, and you're a private individual not a public figure, we'll take it down.' Because we can't know whether all these other things happened, and we still have to make those calls. But I'm positive that people were using that function to game the system. . . I just don't know if we made the right call or the wrong call or at what time."Wong came up against similar problems at Google. In June 2009, a video of a dying Iranian Green Movement protestor shot in the chest and bleeding from the eyes was removed from YouTube as overly graphic and then reposted because of its political significance. YouTube's policies and internal guidelines on violence were altered to allow for the exception. Similarly, in 2007, a YouTube video of a man being brutally beaten by four men in a cell was removed for violence, but restored by Wong and her team after journalists contacted Google to explain that the video was posted by Egyptian human rights activist Wael Abbas to inform the international community of human rights violations by the police in Egypt.What the stories of Wong and Hoffman reveal is that much of the policy and the enforcement of that policy developed in an ad hoc way at each company. Taking down breastfeeding photos was a fine rule, until it wasn't. Removing an historic photo of a young girl running naked in Vietnam following a napalm attack was acceptable for years, until it was a mistake. A rule worked until it didn't.Much of the frustration that gets expressed towards Facebook, Twitter, and YouTube seems to build itself off a fundamentally flawed premise: that online speech platforms have had one seminal moment in their history where they established a fundamental set of values that would guide their platform. Instead, however, most of these content moderation policies were the product of a series of long, hard, piecemeal deliberations about the policies to put in place. There was no "Constitutional Convention" moment at these companies; decisions were made reactively in response to signals that were reported to companies through media pressure, civil society groups, government, or individual users. Without a signal, these platforms couldn't develop, change or "fix" their policy.Of course, it's necessary to point out that even when these platforms have been made aware of a problematic content moderation policy, they don't always modify their policies, even when they say they will. That's a huge problem -- especially as these sites become an increasingly essential part of our modern public square. But learning the history of these policies, alongside the systems that enforce them, is a crucial part of advocating effectively for change. At least for now, and for the foreseeable future, online speech is in the hands of private corporations. Understanding how to communicate the right signals amidst the noise will continue to be incredibly useful.Kate Klonick is a Ph.D. in Law candidate and a Resident Fellow at the Information Society Project at Yale.
|
|
by Mike Godwin on (#3ENSF)
Late last year I published Part I of a project to map out all the complaints we hear about social media in particular and about internet companies generally. Now, here's Part 2.This Part should have come earlier; Part 1 was published in November. I'd hubristically imagined that this is a project that might take a week or a month. But I didn't take into account the speed with which the landscape of the criticism is changing. For example, just as you're trying to do more research into whether Google really is making us dumber, another pundit (Farhad Manjoo at the New York Times) comes along and argues that Apple -- a tech giant no less driven by commercial motives than Google and its parent company, Alphabet -- ought to redesign its products to make us smarter (by making them less addictive). That is, it's Apple's job to save us from Gmail, Facebook, Twitter, Instagram, and other attention-demanding internet media — which we connect to through Apple's products, as well as many others.In these same few weeks, Facebook has announced it's retooling the user experience for Facebook users in ways aimed at making the experience more personal and interactive and less passive. Is this an implicit admission that Facebook, up until now, has been bad for us? If so, is it responding to the charges that many observers have leveled at social-media companies — that they're bad for us and that they're bad for democracy.And only this last week, social-media companies have responded to concerns about political extremists (foreign and domestic) in Senate testimony. Although the senators had broad concerns (ISIS recruitment, bomb-making information on YouTube), there was, of course, some allocation of time on the ever-present question of Russian "misinformation campaigns," which may not have altered the outcome of 2016's elections but still may aim to affect 2018 mid-terms and beyond.These are recent developments, but coloring them all is a more generalized social anxiety about social media and big internet companies that is nowhere better summarized than in Senator Al Franken's last major public policy address. Whatever you think of Senator Franken's tenure, I think his speech was a useful accumulation of the growing sentiment among commentators that there's something out of control with social media and internet companies that needs to be brought back into control.Now, let's be clear: even if I'm skeptical here about some claims that social media and internet giants are bad for us, that doesn't mean these criticisms necessarily lack any merit at all. But it's always worth remembering that, historically, every new mass medium (and mass-medium platform) has been declared first to be wonderful for us, and then to be terrible for us. So it's always important to ask whether any particular claim about the harms of social media or internet companies is reactive, reflexive... or whether it's grounded in hard facts.Here are reasons 4, 5, and 6 to believe social media are bad for us. (Remember, reasons 1, 2, and 3 are here.)(4) Social media (and maybe some other internet services) are bad for us because they're super-addictive, especially on our sweet, slick handheld devices."It's Time for Apple to Build a Less Addictive iPhone," according to New York Times tech columnist Farhad Manjoo, who published a column to that effect recently. 
To be sure, although "Addictive" is in the headline, Manjoo is careful to say upfront that, although iPhone use may leave you feeling "enslaved," it's not "not Apple's fault" and it "isn't the same as [the addictiveness] of drugs or alcohol." Manjoo's column was inspired by an open letter from an ad-hoc advocacy group that included an investment-management firm and the California State Teachers Retirement System (both of which are Apple shareholders). The letter, available here at ThinkDifferentlyAboutKids.com (behind an irritating agree-to-these-terms dialog) calls for Apple to add more parental-control choices for its iPhones (and other internet-connected devices, one infers). After consulting with experts, the letter's signatories argue, "we note that Apple's current limited set of parental controls in fact dictate a more binary, all or nothing approach, with parental options limited largely to shutting down or allowing full access to various tools and functions." Per the letter's authors: "we have reviewed the evidence and we believe there is a clear need for Apple to offer parents more choices and tools to help them ensure that young consumers are using your products in an optimal manner."Why Apple in particular? Obviously, the fact that two of the signatories own a couple of billion dollars' worth of Apple stock explains this choice to some extent. But one hard fact is that Apple's share of the smartphone market mostly stays in the 12-to-20-percent range. (Market leader Samsung has held 20-30 percent of the market since 2012.) Still, the implicit argument is that Apple's software and hardware designs for the iPhone will mostly lead the way for other phone-makers going forward, as they mostly have for the first decade of the iPhone era.Still, why should Apple want to do this? The idea here is that Apple's primarily a hardware-and-devices company — which distinguishes Apple from Google, Facebook, Amazon, and Twitter, all of which primarily deliver an internet-based service. Of course, Apple's an internet company too (iTunes, Apple TV, iCloud, and so on), but the company's not hooked on the advertising revenue streams that are the primary fuel for Google, Facebook, and Twitter, or on the sales of other, non-digital merchandise (like Amazon). The ad revenue for the internet-service companies creates what Manjoo argues are "misaligned incentives" — when ad-driven businesses' economic interests lie in getting more users clicking on advertisements, he reasons, he's "skeptical" that (for example) Facebook is the going to offer any real solution to the "addiction" problem. Ultimately, Manjoo agrees with the ThinkDifferentlyAboutKids letter -- Apple's in the best position to fix iPhone "addiction" because of their design leadership and independence from ad revenue.Even so, Apple has other incentives to make iPhones addictive — notably, pleasing its other investors. Still, investors may ultimately be persuaded that Apple-led fixes will spearhead improvements, rooted in our devices, of our social-media experience. (See, for example, this column: Why Investors May Be the Next to Join the Backlash Against Big Tech's Power.)It's worth remembering that the idea technology is addictive is itself an addictive idea — not that long ago, it was widely (although not universally) believed that television was addictive. This New York Times story from 1990 advances that argument, although the reporter does quote a psychiatrist who cautions that "the broad definition" of addiction "is still under debate." 
(Manjoo's "less addictive iPhone" column inoculates itself, you'll recall, by saying iPhone addiction is "not the same.")"Addiction" of course is an attractive metaphor, and certainly those of us who like using our electronics to stay connected can see the appeal of the metaphor. And Apple, which historically has been super-aware of the degree to which its products are attractive to minors, may conclude—or already have concluded, as the ThinkDifferentlyAboutKids folks admit — that more parental controls are a fine idea.But is it possible that smartphones maybe already incorporate a solution for addictiveness? Just the week before Manjoo's column, another Times writer, Nellie Bowles asked whether we can make our phones less addictive just by playing with the settings. (The headline? "Is the Answer to Phone Addiction a Worse Phone?") Bowles argues, based on interviews with researchers, that simply setting your phone to use grayscale instead of color inclines users to respond less emotionally and impulsively—in other words, more mindfully—when deciding whether to respond to their phones. Bowles says she's trying the experiment herself: "I've gone gray, and it's great."At first it seems odd to focus on the device's user interface (parental settings, or color palette) if the real problem of addictiveness is internet content (social media, YouTube and other video, news updates, messages). One can imagine a Times columnist in 1962—in the opening years of widespread color TV— responding to Newt Minow's famous "vast wasteland" speech by arguing that TV-set manufacturers should redesign sets so that they're somewhat more inconvenient—no remote controls, say—and less colorful to watch. (So much for NBC's iconic Peacock opening logo)In the interests of science, I'm experimenting with some of these solutions myself. For years already I've configured my iDevices not to bug me with every Facebook and Twitter update or new-email notice. Plus, I was worried about this grayscale thing on my iPhone X—one of the major features of which is a fantastic camera. But it turns out that you can toggle between grayscale and color easily once you've set gray as the default. I kind of like the novelty of all-gray—no addiction-withdrawal syndrome yet, but we'll see how that goes.(5) Social media are bad for us because they make us feel bad, alienating us from one another and causing is to be upset much of the time.Manjoo says he's skeptical whether Facebook is going to fix the addictiveness of its content and interactions with users, thanks to those "misaligned incentives." It should be said of course that Facebook's incentives—to use its free services to create an audience for paying advertisers—at least have the benefit of being straightforward. (Apple's not dependent on ads, but they still want new products to be attractive enough for users to want to upgrade.) Still, Facebook's Mark Zuckerberg has announced that the company is redesigning Facebook's user experience, (focusing first on its news feed) to emphasize quality time ("time well spent") over more "passive" consumption of the Facebook ads and video that may generate more hits for some advertisers. Zuckerberg maintains that Facebook, even as it has operated over the last decade-plus of general public access, had been good for many and maybe for most users:
|
|
by Daily Deal on (#3ENSG)
The Project Management Professional Certification Training Bundle features 10 courses designed to get you up and running as a project manager. You'll prepare for certification exams by learning the fundamental knowledge, terminology, and processes of effective project management. Various methods of project management are covered as well including Six Sigma, Risk Management, Prince and more. The bundle is on sale for $49.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
|
by Mike Masnick on (#3ENKY)
We've written a few times about the GDPR -- the EU's General Data Protection Regulation -- which was approved two years ago and is set to go into force on May 25th of this year. There are many things in there that are good to see -- in large part improving transparency around what some companies do with all your data, and giving end users some more control over that data. Indeed, we're curious to see how the inevitable lawsuits play out and if it will lead companies to be more considerate in how they handle data.However, we've also noted, repeatedly, our concerns about the wider impact of the GDPR, which appears to go way too far in some areas, in which decisions were made that may have made sense in a vacuum, but where they could have massive unintended consequences. We've already discussed how the GDPR's codification of the "Right to be Forgotten" is likely to lead to mass censorship in the EU (and possibly around the globe). That fear remains.But, it's also becoming clear that some potentially useful innovation may not be able to work under the GDPR. A recent NY Times article that details how various big tech companies are preparing for the GDPR has a throwaway paragraph in the middle that highlights an example of this potential overreach. Specifically, Facebook is using AI to try to catch on if someone is planning to harm themselves... but it won't launch that feature in the EU out of a fear that it would breach the GDPR as it pertains to "medical" information. Really.
|
|
by Karl Bode on (#3EN1H)
Last year we noted how the FCC had been hyping the creation of a new "Broadband Deployment Advisory Panel" purportedly tasked with coming up with solutions to the nation's broadband problem. Unfortunately, reports just as quickly began to circulate that this panel was little more than a who's who of entrenched telecom operators with a vested interest in protecting the status quo. What's more, the panel featured few representatives from the countless towns and cities that have been forced to build their own broadband networks in the wake of telecom sector dysfunction.One report showed how 28 of the 30 representatives on the panel had some direct financial ties to the telecom sector, though many attempted to obfuscate this connection via their work for industry-funded think tanks.You'll recall that FCC boss Ajit Pai consistently insists he's breathlessly dedicated to closing the digital divide, despite the fact his policies (like killing net neutrality or protecting business broadband monopolies) will indisputably make the problem worse. Regardless, Pai has spent the last few weeks insisting in speeches like this one (pdf) that his advisory council is the centerpiece of his efforts to close the digital divide:
|