Many interventions hurt more than they help.
“We chase the approval of strangers on our phones,” Barack Obama said at the Democratic National Convention on Tuesday. “We build all manner of walls and fences around ourselves and then wonder why we feel so alone.” As the former president spoke, countless convention delegates held up their devices, taking photos to share with friends — or, perhaps, those strangers on social media.
The scene (not to mention the paradox of delivering such a message amid the meme-fueled Harris campaign) underscored that platforms such as Instagram and TikTok are core to many people’s lives, no matter how guilty or conflicted Americans feel about using them. The impulse to prevent another generation from getting so attached is reasonable, and it’s easy to understand why legislators and other government officials have proposed a range of policy responses to parental worries about the sheer quantity of time kids spend on these apps.
But they should proceed carefully. Overcorrecting, and denying kids access to meaningful digital social interaction as a result, wouldn’t be healthy, either.
U.S. Surgeon General Vivek H. Murthy wants warning labels on everyone’s favorite apps. Schools, including the 500,000-student Los Angeles public school system, have banned cellphones during the day. Some states want to go further, banning minors from using apps such as Instagram and TikTok, or at least requiring parental consent to use them. Congress is considering a bill that would penalize companies that don’t take “reasonable” measures to mitigate certain harms.
The catch is that researchers don’t know precisely what kinds of content are harmful to which kids, and no one should want platforms to censor young people’s speech or their access to it. Clearer, however, is that these sites are engineered to keep tweens, teens and adults alike clicking at all hours, with features such as infinite scroll, autoplay and nonstop notifications drawing them in and keeping them staring. Legislators ought to devote their efforts to curbing these addictive tactics. Going beyond that, into content control, risks damaging the very young people these lawmakers are trying to protect.
Banning cellphones during the school day is an easy call: The foreseeable result is students focusing more on learning in the classroom and on in-person interactions in hallways. Banning social media for minors who fail to obtain explicit parental consent — as lawmakers in states such as Utah, Florida and Arkansas have sought to do — is a more dubious proposition. Some online experiences benefit young people, allowing them to express themselves in new ways or form connections with new people. In particular, these sites can be a source of comfort for LGBTQ+ kids who lack supportive home environments.
Rules that require platforms to verify age — as those state social media bans explicitly did, and as other regulations requiring special protections for children often do in practice — come with real trade-offs, because verification means collecting sensitive personal data. Driver’s license and Social Security numbers are vulnerable to hacking. The same goes for biometric tools, such as face scans.
Lawmakers targeting social media’s demonstrable ills rather than social media in general are on the right track. But even this is more complicated than it might sound. Take the Kids Online Safety Act (KOSA), recently passed by the Senate and now on hold in the House. The proposal establishes a “duty of care” for companies not to expose young people to harms including sexual exploitation, drug promotion, bullying and disordered eating. Though the bill’s authors say they’re targeting the way platforms are designed, going after certain types of content at all could still open the door to politically motivated enforcement — say, if conservatives exploit the legislation to restrict posts about gender identity, or progressives use it to suppress right-of-center opinions by labeling them dangerous hate speech.
Policymakers should home in on how platforms are built, and only how they’re built: the little tricks woven into them that make them so irresistible to consumers of every age — and most of all to younger users, whose impulse control has yet to develop fully. These range from features that extend the time kids spend online, such as autoplay or infinite scroll, to “dark patterns” that deliberately mislead internet-goers into making poor choices (for example, a countdown clock on a purchase when the numbers ticking down are irrelevant to the sale). The best portion of KOSA takes aim at these very tactics.
The science on whether social media is to blame for the crisis in youth well-being remains open to interpretation and debate. But it’s clear that some categories of smartphone use are harmful to some categories of teens and tweens. And screen time supplants activities known to be beneficial — such as spending time outside, or reading and playing with family and friends. The most prudent route to addressing the internet’s ills might also end up being the most effective: Give young people fewer reasons to stare at their screens so they have more time to touch grass.