Marketing Safely in a Changing Social Media World
The Gist
- Social media feeds and their commercial value. Social media feeds are designed to attract attention, with commercial value tied to impressions and clicks.
- Mental health concerns linked to social media. Researchers are uncovering correlations between social media behavior and mental health issues, especially in youth.
- Safety measures and legislation are evolving. State and federal laws, along with platform-specific features, aim to address mental health risks and protect young users.
A main feature of a social media platform is a feed where users can discover shared imagery and commentary. The feed is designed to attract people’s attention. Social media feeds have commercial value, represented as impressions that lead to clicks to other sites, be it retail or B2B.
However, the value of garnering attention is being questioned, especially when it comes to marketing to young people. Researchers are discovering how mental health concerns are potentially linked to social media behavior.
Unlike the days when cigarette companies buried negative health reporting on their products, both the tech industry and US legislators have shone a spotlight on managing recent mental health findings. The key debates focus on how platforms should implement safeguards.
What does this mean for marketers and their social media campaigns? They must navigate evolving social media safety measures and legislation to create youth-safe content that engages both parents and teens while addressing mental health concerns. More on that later.
Now, let’s look at the safety measures being put into place, ranging from new state laws and federal programs to new features on social media platforms designed to curb behaviors that harm mental health.
What Is Driving Social Media Safety?
Parental Consent and Early Internet Practices
The biggest driver of these safety changes is the question of parental consent over children’s access. Some of the first initiatives for managing consent were established in the internet’s nascent years. Broader internet access and the earlier introduction of digital devices have since created multiple means for accessing media.
Mental Health Research Highlights
Mental health concerns linked to social media behavior are now well documented. Stanford University summarizes the most recent research findings, many of which indicate that extensive online usage correlates with anxiety. The use of social networking sites is associated with an increased risk of depression, anxiety and psychological distress, according to the Stanford report.
Social Media Usage Trends Among Teens
A better understanding of user behavior, particularly the average length of usage time, has also emerged. Pew Research Center noted in its 2023 study that nearly half of teens say they use the internet “almost constantly,” a similar result to the 2022 study and “roughly double the 24% who said this in the 2014-2015 survey.”
More specific remedial tactics have emerged this year in response to further research and behavior trends. In June, US Surgeon General Vivek Murthy wrote a New York Times opinion piece calling for a warning label on social media platforms. Murthy noted that “warning labels can increase awareness and change behavior. When asked if a warning from the surgeon general would prompt them to limit or monitor their children’s social media use, 76% of people in one recent survey of Latino parents said yes.”
Like most digital practitioners, I am a heavy user of online platforms: in my analytics business, I review websites, write on digital marketing and work with programming code from a laptop. So do many entrepreneurs and remote workers.
Yet activities with an expected business outcome differ from casual activities such as doom-scrolling and ongoing discussions. Business activity is meant to be conducted during business hours. A high volume of casual activity that reflects an uncontrollable habit, impeding other aspects of someone’s life, becomes exposure to mental health harm. Harassment from complete strangers can make younger social media users feel unsafe, especially because the internet can mask who is responding to a post or direct message.
Related Article: 10 Social Media Trends You Can’t Ignore
Australia: Making Laws for Social Media Safety
A final driver for social media safety is the rising global interest in online safety. Australia’s approach? They’re shutting down social media access for children under 16.
Australia’s Parliament approved the Online Safety Amendment (Social Media Minimum Age) Bill, a law that prevents children under 16 years of age from creating and operating a social media profile. It is the first national legislation of its kind, holding social media platforms such as TikTok, Facebook, Snapchat, Reddit, X and Instagram liable for systemic failures to prevent children from holding accounts.
The ban carries fines of up to $33 million. Britain, France and Norway are looking into similar usage restrictions, so the successes and failures of government restrictions on social media will fuel technological and political debates.
What Legislation Is Considered at the State Level?
State-Level Laws Lead the Way
Many of the first initiatives arose from local proclamations and state laws in the US. In June of this year, the New York state legislature passed two comprehensive laws covering social media safety: the SAFE For Kids Act, the first legislation in the United States restricting minors’ access to addictive social media feeds, and the New York Child Data Protection Act, which prohibits online sites from collecting minors’ personal data without consent.
Challenges and Repeals in Social Media Legislation
Even earlier than New York, in 2023 Utah lawmakers passed the Utah Social Media Regulation Act, a bill meant to guide social media usage for teens. The law was notable for its establishment of curfew usage hours and age limits.
This year the state legislature repealed the Act through the passage of two bills (SB 194 and HB 464). The bills were a response to a lawsuit filed by an internet trade association challenging the Utah Social Media Regulation Act on constitutional grounds.
California, already at the forefront of tech and data safety legislation, passed a law in which social media platforms are prohibited from deliberately delivering addictive feeds to minors without parental consent, starting in 2027. California also has an age-appropriate design code that involves stronger data protection for young consumers.
Related Article: Building Strong Customer Relationships Through Social Media
Federal Acts Targeting Youth Safety
In addition to the local legislation, Congress has been considering two federal acts. The more prominent of the two, the Kids Online Safety Act (KOSA), is designed to protect children from online harm. Passed in the Senate this past June, KOSA sets guidance for the measures social media platforms must take to mitigate harm to young people. It assigns a duty of care to the platforms to ensure that no harm comes to teen users: platforms must have their strongest safety protections automatically enabled for minors rather than burying safety features within the app.
The platforms would also be required to give parents and guardians more control over minors’ use of a platform and to limit certain features, for example by letting users disable the algorithmic recommendations that keep children endlessly scrolling.
Finally, platforms offering parental empowerment provisions would offer parents tools to identify harmful behaviors and dedicated pages on which to report harmful content.
The second, COPPA 2.0, updates a 1998 law by extending online privacy protections to anyone under the age of 17, versus the previous law’s limit of 13. COPPA 2.0 also closes loopholes that allow targeted advertising to kids and teens, and it creates an “eraser button” for parents and kids by requiring companies to let users delete their information.
Debates and Criticisms of KOSA
KOSA has borne the brunt of the debate leading up to its passage. Different groups offer different takes and recommendations depending on the version of the bill being considered.
An earlier version of the KOSA bill placed oversight with state attorneys general. Critics such as civil liberties advocates were concerned that such broad discretion would lead to the pursuit of partisan goals when suing tech companies.
The bill has since been updated to give the FTC, rather than state attorneys general, authority to sue over content. Critics remain concerned about partisan behavior, noting that even the FTC is traditionally staffed with a 3-2 majority from the president’s party.
Beyond the partisan issue, other critics feel that KOSA may undermine beneficial online behaviors, mainly finding information. Experts have warned that upholding KOSA would hinder how kids and adults search for necessary information online. The Electronic Frontier Foundation (EFF), a nonprofit that plays a significant role in protecting online civil liberties, analyzed KOSA’s provisions earlier this year. The EFF concluded that some websites, in response to KOSA, might implement age verification or simply block content on sensitive topics to avoid potential lawsuits.
The issues have created strong interest in how the final bill should be formed.
A bipartisan coalition of 32 attorneys general sent a letter to Congressional leaders urging Congress to pass KOSA before the end of this year. In the letter, the coalition emphasized the urgent need to address the growing youth mental health crisis linked to social media usage.
Instagram Teen Profile: How Social Media Platforms Increase Youth Safety Online
Instagram’s Approach to Teen Safety
Social media platforms have been implementing solutions to make engagement on their feeds safer, and current tactics vary across platforms. The most prominent feature has been Instagram’s introduction of a dedicated teen profile, which includes privacy measures that create a separate digital environment for teens. Teens can select age-appropriate topics for their Instagram recommendations, like “sports” or “pets.” Strangers cannot direct message teen users, as profiles are automatically set to “private” to minimize discovery. A sleep mode is expected in the next few months.
Instagram’s teen profile represents a sweeping overhaul of its profile guidelines. Parents gain more control over these profiles to help manage teens under the age of 18, and teens younger than 16 need their parents’ permission to be on Instagram. These settings help the platform comply with local laws like those in New York and Utah.
Challenges With Platform Safety Enforcement
However, some people have learned to create fake profiles to gain access to the feeds. Hosting dummy profiles circumvents age verification, making precise safety enforcement difficult for platforms to maintain.
Other social media platforms have lower barriers to exposure. On the social live audio app Clubhouse, for example, each person has their own profile. However, because social audio is an audio conference that treats the smartphone as a microphone, children in the room can become part of a chat room discussion. This arrangement circumvents the spirit of a profile: with a profile, a user either is or is not part of a community. Usually a parent or adult is in the room with the child, so there is no major issue.
However, that supervision can be too lax, given that verification is minimal when creating a profile.
What the Social Media Safety Movement Means for Marketers
Navigating a Dual Audience: Parents and Teens
All of these social media developments heavily influence the marketing of products and services that involve young people. The planned customer experience often involves two personas that must be engaged: the parent and the child. Family members use social media often, but not always simultaneously.
The combination of legislation and platform features means marketers must ensure that the customer experiences they build to reach young audiences do not contribute to the mental health issues these measures are meant to address. Targeted advertising that aligns ads with their intended demographics can help a message reach its audience while minimizing the need for doom-scrolling to see or hear it.
Successful Campaigns Highlight Youth-Safe Strategies
Some brands have seen success in recent years. Advertising experts lauded Nike for its “Play New” campaign, which centered on encouraging youth to try new sports without the pressure of being the perfect athlete. The campaign used real teens instead of the professional athletes who normally appear in a Nike campaign. It generated over 100 million organic views across platforms while maintaining youth safety protocols.
Marketers can use successes like Nike’s to develop practical guidelines for creating youth-safe social media content. Ad targeting and content strategy will grow more complex as 2025 rolls around: users are switching social media platforms, creating fragmented audiences, and brands face more exposure to platform concerns.
Marketers will have to be more aware of how platform safety impacts the reception of their media and message. A brand’s ability to keep its audience safe online will be an important test as the new year unfolds.