
KOSA Won’t Make The Internet Safer For Kids. So What Will?

by Mike Masnick from Techdirt on (#6FB6H)

I've been asked a few times now what to do about online safety if the Kids Online Safety Act is no good. I will take it as a given that not enough is being done to make the Internet safe, especially for children. I think there is enough evidence to show that while the Internet can be a positive for many young people, especially marginalized youth who find support online, there are also significant negatives that correlate with real-world harms and lead to suffering.

As I see it, there are three separate but related problems:

  1. Most Internet companies make money off engagement, so there can be misaligned incentives, especially when toxic content drives engagement.
  2. Trust & Safety is the linchpin of efforts to improve online safety, but it represents a significant cost to companies without a direct connection to profit.
  3. The tools used by Trust & Safety, like content moderation, have become a culture war football and many - including political leaders - are trying to work the refs.

I think #1 tends to be overstated, but X/Twitter is a natural experiment on whether this model is successful in the long run, so we may soon have a better answer. I think #2 is understated, but it's a bit hard to find government solutions here - especially those that don't run into First Amendment concerns. And #3 is a confounding problem that taints all proposed solutions. There is a tendency to want to use "online safety" as an excuse to win culture wars, or at least to tack culture war battles onto legitimate attempts to make the Internet safer. These efforts run headfirst into the First Amendment, because they are almost exclusively about regulating speech.

KOSA's main gambit is to discourage #1 and maybe even incentivize #2 by creating a somewhat nebulous duty of care that basically says that if companies don't have users' best interests at heart in six described areas, they can be sued by the FTC and State AGs. The problem is that the duty of care is largely directed at whether minors are being exposed to certain kinds of content, and this invites problem #3 in a big way. In fact, we've already seen politically connected anti-LGBTQ organizations like Heritage openly call for KOSA to be used against LGBTQ content, and Senator Blackburn, a KOSA co-author, connected the bill with protecting "minor children from the transgender." This also means that this part of KOSA is likely to eventually fall to the First Amendment, as the California Age Appropriate Design Code (a bill KOSA borrows from) did.

So what can be done? I honestly don't think we have enough information yet to really solve many online safety problems. But that doesn't mean we have to sit around doing nothing. Here are some ideas for things that can be done today to make the Internet safer, or to prepare for better solutions in the future:

Ideas for Solving Problem #1

  • Stronger Privacy: Having a strong baseline of privacy protections for all users is good for many reasons. One of them is breaking the ability of platforms to use information gathered about you to keep you on the platform longer. Many of the recommendation engines that send people down a bad path are algorithms powered by personal information and tuned to increase engagement. These algorithms don't really care about how their recommendations affect you, and can send you in directions you don't want to go but have trouble turning away from. I experienced some of this myself when using YouTube to get into shape during the pandemic. I was eventually recommended videos that body shamed and recommended pretty severe diets to "show off" your muscles. I was able to reorient the algorithm towards more positive and health-centered videos, but it took some effort and an understanding of how things worked. If the algorithm wasn't powered by my entire history, and instead had to be more user directed, I don't think I'd be offered the same content. And if I did look for that content, I'd be able to do so more deliberately and carefully. Strong privacy controls would force companies to redesign in that way (a rough sketch of the difference appears after this list).
  • An FTC 6(b) study: The FTC has the authority to conduct wide-ranging industry studies that don't need a specific law enforcement purpose. In fact, they've used their 6(b) authority to study industries and produce reports that help Congress legislate. This 6(b) authority includes subpoena power to get information that independent researchers currently can't. KOSA has a section that allows independent researchers to better study harms related to the design of online platforms, and I think that's a pretty good idea, but the FTC can start this work now. A 6(b) study doesn't need Congressional action to start, which is good considering the House is tied up at the moment. They can examine how companies work through safety concerns in product design, look for hot docs that show they made certain design decisions despite known risks, or look for mid docs that show they refused to look into safety concerns.
  • Enhance FTC Section 5 Authority: The FTC has already successfully obtained a settlement based on the argument that certain harmful design choices violate Section 5's prohibition of "unfair or deceptive" business practices. The settlement required Epic to turn off voice and text chat in the game Fortnite for children and teens by default. Congress could enhance this power by clarifying that Section 5 covers dangerous online product design more generally and by requiring the FTC to create a division for enforcement in this area (and also increasing the FTC's budget for such staffing). A 6(b) study would also lay the groundwork for the FTC to take more actions in this area. However, any legislation should be drafted in a way that does not undercut the FTC's argument that it already has much of this authority, as doing so would discourage the FTC from pursuing more actions on its own. This is another option that likely does not need Congressional action, but budget allocations and an affirmative directive to address this area would certainly help.
  • NIH/other agency studies: Another way to help the FTC pursue Section 5 complaints against dangerous design, and to improve the conversation generally, is to invest in studies from medical and psychological health experts on how various design choices impact mental health. This can set a baseline of good practices from which any significant deviation could be pursued by the FTC as a Section 5 violation. It could also help policy discussions coalesce around rules concerning actual product design rather than content. The NTIA's current request for information on Kids Online Health might be a start to that. KOSA's section on creating a Kids Online Safety Council is another decent way of accomplishing this goal, although the Biden administration could simply create such a Council without Congressional action, and that might be a better path considering the current troubles in the House. I should also point out that this option is ripe for special interest capture, and any efforts to study these problems should include experts and voices from marginalized and politically targeted communities.
  • Better User Tools: I've written before on concerns I had with an earlier draft of KOSA's parental tools requirements. I think that section of the bill is in a much better place now. Generally, I think it's good to improve the resources parents have to work with their kids to build a positive online social environment. It would also be good to have tools that give users a say in what content they are served and how the service interacts with them (e.g., turning off nudges). That might come from a law establishing a baseline for user tools. It might also come from an agency hosting discussions on and fostering the development of best practices for such tools. I will again caution, though, that not all parents have their kids' best interests at heart, and kids are entitled to privacy and First Amendment rights. Any work on this should keep that in mind, and some minors may need tools to protect themselves from their parents.
  • Interoperability: One of the biggest problems for users who want to abandon a social media platform is how hard it is to rebuild their network elsewhere. X/Twitter is a good example of this, and I know many people who want to leave but have trouble rebuilding the same engagement elsewhere. Bluesky and Mastodon are examples of newer services that offer some degree of interoperability and portability of your social graph. The advantages are obvious: more competition and more user choice. This is again something the government could support by encouraging standards or requiring interoperability. However, as Bluesky and Mastodon have shown, interoperable platforms still struggle with content moderation, because it's a large cost not directly related to profit. This remains a problem to be solved. Ideally a strong market for effective third-party content moderation would develop, but this is not something the government can be involved in because of the obvious First Amendment problems.
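
To make the "Stronger Privacy" point above a bit more concrete, here is a deliberately tiny, hypothetical sketch of the difference between an engagement-tuned recommender that mines a user's full history and a user-directed one that only ranks topics the user explicitly opted into. None of this is any platform's actual code; the field names, scoring, and data are invented purely for illustration.

```python
# Toy illustration only: contrasts an engagement-tuned recommender that mines a
# user's full watch history with a user-directed one that ranks only against
# topics the user explicitly chose. All names and scoring rules are hypothetical.

from collections import Counter

def engagement_ranked(videos, watch_history):
    """Rank by engagement inferred from everything the user has ever watched."""
    topic_counts = Counter(v["topic"] for v in watch_history)
    def score(video):
        # More past exposure to a topic -> higher score, regardless of whether
        # the user actually wants more of it (e.g., extreme-diet content).
        return topic_counts[video["topic"]] * video["avg_watch_minutes"]
    return sorted(videos, key=score, reverse=True)

def user_directed_ranked(videos, chosen_topics):
    """Rank only within topics the user deliberately opted into."""
    return sorted(
        (v for v in videos if v["topic"] in chosen_topics),
        key=lambda v: v["avg_watch_minutes"],
        reverse=True,
    )

if __name__ == "__main__":
    catalog = [
        {"title": "Beginner strength routine", "topic": "fitness", "avg_watch_minutes": 8},
        {"title": "Extreme cutting diet", "topic": "extreme-diet", "avg_watch_minutes": 12},
        {"title": "Healthy meal prep", "topic": "nutrition", "avg_watch_minutes": 6},
    ]
    history = [{"topic": "fitness"}, {"topic": "extreme-diet"}, {"topic": "extreme-diet"}]
    print([v["title"] for v in engagement_ranked(catalog, history)])       # diet video first
    print([v["title"] for v in user_directed_ranked(catalog, {"fitness", "nutrition"})])
```

The point of the sketch is only that when the ranking signal is accumulated personal history, past exposure to a topic keeps pulling more of it in, while a user-directed design can only serve what was deliberately chosen.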

Ideas for Solving Problem #2

  • Information sharing: When I went to TrustCon this year, the number one thing I heard was that T&S professionals need better information sharing - especially between platforms. This makes perfect sense: it lowers the cost of enforcement and improves its quality. The kind of information we are talking about is emerging threats and the most effective ways of dealing with them - for example, coded language people adopt to get around filters meant to catch sexual predation on platforms with minors. There are ways the government can foster this information sharing at the agency level by, for example, hosting workshops, roundtables, and conferences geared towards T&S professionals on online safety. It would also be helpful for agencies to encourage "open source" information for T&S teams to make it easier for smaller companies (a toy sketch of what shared signals might look like appears after this list).
  • Best Practices: Related to other solutions above, a government agency could engage the industry and foster the development of best practices (as long as they are content-agnostic), and a significant departure from those best practices could be challenged as a violation of Section 5 of the FTC Act. Those best practices should include some kind of minimum for T&S investment and capabilities. I think this could be done under existing authority (like the Fortnite case), although that authority will almost certainly be challenged at some point. It might be better for Congress to affirmatively task agencies with this duty and allocate appropriate funding for them to succeed.
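
As a very rough illustration of the information sharing described above, here is a hypothetical sketch of a platform checking incoming messages against a shared, openly published list of coded terms. The terms, normalization rules, and feed are all made up; a real shared-signal program would involve vetted feeds, context, and human review rather than simple string matching.

```python
# Hypothetical sketch of cross-platform T&S signal sharing: one vetted source
# publishes known filter-evasion terms (coded language), and any platform can
# normalize incoming text and check it against that shared list. The terms and
# normalization rules below are invented, deliberately innocuous placeholders.

import re
import unicodedata

# In practice this would be fetched from a shared industry feed, not hard-coded.
SHARED_EVASION_TERMS = {"examplecodeword", "another-coded-phrase"}

def normalize(text: str) -> str:
    """Collapse common evasion tricks: accents, spacing, punctuation, leetspeak."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = text.lower()
    text = text.translate(str.maketrans("013457", "oleast"))  # rough leetspeak map
    return re.sub(r"[^a-z-]", "", text)

def flag_for_review(message: str) -> bool:
    """Return True if the message matches any shared evasion term after normalization."""
    normalized = normalize(message)
    return any(term in normalized for term in SHARED_EVASION_TERMS)

if __name__ == "__main__":
    print(flag_for_review("Ex4mple c0de word"))      # True: normalizes to 'examplecodeword'
    print(flag_for_review("totally ordinary chat"))  # False
```

The value here is less the matching logic than the shared list itself: maintaining and updating that kind of signal is expensive for any one company, which is exactly why agency-hosted coordination and "open source" resources could help smaller platforms.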

Ideas for Solving Problem #3

  • Keeping the focus on product design: Problem #3 is never going away, but the best way to minimize its impact AND lower the risk of efforts getting tossed on First Amendment grounds is to keep every public action on online safety firmly grounded in product design. That means every study, every proposed rulemaking, and every introduced bill needs to be examined first with a basic question: "Does this directly or indirectly create requirements based on speech, or suggest the adoption of practices that will impact speech?" Having a good answer to this question is important, because the industry will challenge laws and regulations on First Amendment grounds, so any laws and regulations must be able to survive those challenges.
  • Don't Undermine Section 230: Section 230 is what enables content moderation work at scale, and online safety is mostly a content moderation problem. Without Section 230, companies won't be able to experiment with different approaches to content moderation to see what works. This is obviously a problem because we want them to adopt better approaches. I mention this here because some political leaders have been threatening Section 230 specifically as part of their attempts to work the refs and get social media companies to change their content moderation policies to suit their own political goals.

Matthew Lane is a Senior Director at InSight Public Affairs.
