According to new research from Penn State, a framework for artificial intelligence (AI) equality is needed to help elevate the voices of communities that could otherwise be lost amid a shifting digital landscape shaped by algorithmic biases, misinformation, and targeted political influence.
Due to a lack of policies and regulations governing social media and AI in years past, political groups have increasingly used platforms like TikTok to sway voters, mainly in the Gen Z demographic, by spreading misinformation and propaganda or by targeting vulnerable groups that don’t share their political leanings.
Now, a Penn State Dickinson Law researcher has proposed an “equity by design” framework for regulating AI to help protect marginalized communities from potential harm.
In an article published on January 28 in the Duke Law & Technology Review, Daryl Lim, H. Laddie Montague Jr. Chair in Law and associate dean for research and innovation at Penn State’s Institute for Computational and Data Sciences (ICDS), emphasized the need for socially responsible AI governance that prioritizes ethics, fairness, and transparency.
Learning from Past Policy Mistakes
Since the rise of social media platforms created by major technology conglomerates, American lawmakers and policies have been slow to grasp how these networks shape public opinion, spread misinformation, and amplify extremist political views, and how users’ beliefs can contribute to the discrimination or marginalization of those who look or think differently.
In the early years of social media, between 2005 and 2008, government administrations largely viewed these platforms as tools for public engagement rather than subjects of policy regulation. However, between 2009 and 2010, the Obama administration recognized the need for social media guidelines, launching the “Open Government Initiative” to promote transparency, participation, and collaboration. As social media gained widespread influence across the United States, growing global concerns over misinformation and hate speech prompted discussions on regulatory oversight. In 2010, the Federal Trade Commission (FTC) expanded its role to address consumer privacy protections in the digital space.
By the early 2010s, bias in code became a significant topic of conversation as researchers and equality advocates noticed that marginalized communities, such as Black, Indigenous, and people of color (BIPOC), women, and other stigmatized groups, were not adequately represented in training datasets, which in turn led to biases in AI and machine learning models.
Policy changes struggled to keep pace with the rapidly expanding technological landscape: by 2016, only about 51 percent of businesses had implemented social media policies, leaving nearly half without any formal guidelines. As a result, many companies and employees were free to post, speak, and represent their brands or themselves however they saw fit, often without clear boundaries or consequences.
Facebook (now Meta) and its founder and CEO, Mark Zuckerberg, began enforcing stricter content guidelines around 2018, aiming to curb misinformation and hate speech. The push for tighter regulation intensified after the 2016 U.S. presidential election between Donald Trump and Hillary Clinton, which Trump won to become the 45th President of the United States.
How Inequalities in AI and Social Media Become Political
Fast-forward to 2024. Elon Musk and Donald Trump successfully leveraged AI-generated images and social media to sway public opinion ahead of the 2024 election. TikTok, in particular, served as a platform for connecting directly with Generation Z voters. By embracing AI-generated imagery, engaging in meme culture, and collaborating with social media influencers, Trump effectively communicated his political ideas to a demographic he had struggled to reach during his first presidential campaign.
Generation Z voters (ages 18–29) showed notable turnout patterns in the recent election: approximately 42 percent of young voters participated, a decline from the 50 percent turnout in 2020 and a return to levels consistent with 2016.
A significant shift in the role of social media and AI occurred when Elon Musk acquired Twitter (now X) in 2022. Shortly after the purchase, Musk dismissed a large portion of Twitter’s employees and later released what is now commonly referred to as the “Twitter Files,” a series of internal documents and communications made public through independent journalists, most notably Matt Taibbi and Bari Weiss. These documents revealed internal decision-making processes at Twitter, particularly regarding content moderation, censorship, and the platform’s interactions with government agencies.
Debate continues over whether Musk’s takeover of the social media platform fostered true freedom of speech with limited regulation or, conversely, created an environment where hate speech was tolerated and extremist groups found a venue to share their ideologies and connect with like-minded individuals. A study by Montclair State University found that in the first 12 hours following Musk’s acquisition, approximately 4,778 tweets contained hate speech, compared to an average of 84 per hour in the week prior. To many users and other technology companies, the surge signaled that inequality remained deeply embedded in the ethical framework of the newly rebranded platform.
Musk came to his own defense, posting on X in 2024: “Free speech is the bedrock of democracy. That’s why it’s the FIRST Amendment. Without free speech, all is lost.”
While the X controversy played out, other social media companies, including Meta, continued to implement policy guidelines regarding hate speech. However, earlier this year, before the new Trump administration began to assemble itself, Meta CEO Mark Zuckerberg told podcaster Joe Rogan that the Biden administration had not only pushed to control certain information about the coronavirus pandemic but had also actively worked to censor other content served on the platform, raising questions about that administration’s agenda.
Shortly after Zuckerberg’s podcast interview, Donald Trump was sworn in as the 47th U.S. President, flanked by billionaires and tech CEOs, including Elon Musk, Mark Zuckerberg, Amazon founder Jeff Bezos, Apple CEO Tim Cook, Google CEO Sundar Pichai, and OpenAI CEO Sam Altman. The presence of these industry leaders underscored their alignment with the new administration and the online culture Trump sought to cultivate. Among the administration’s policies was the implementation of a two-gender language framework in healthcare-related materials and political discourse, once again sidelining marginalized communities and reinforcing their exclusion.
What does this mean for equality online? Both political parties have been implicated in censoring content and shaping narratives to serve their own interests, making it clear that achieving true equality requires a balanced approach. Addressing the treatment of marginalized communities across AI and social media platforms is not a partisan issue—it is a broader societal challenge. Moving forward, academics, marginalized groups, and policymakers must collaborate to develop fair and inclusive policies that reflect the realities of everyday life across North America.
What the Future May Hold for “Equity by Design”
History has shown that when technology and policy fail to work in tandem, political powers can exploit technology to manipulate public opinion or target marginalized communities, including women, LGBTQ+ individuals, and BIPOC groups.
The Debrief reached out to Lim, the Penn State Dickinson Law researcher behind the equality framework, to ask how tech companies, President Trump (or other U.S. political leaders and parties), and government entities might collaborate to use AI and social media as propaganda tools, potentially reshaping societal values and cultural attitudes.
“The potential for AI and social media to be used as propaganda tools to reshape societal values and morals exists,” Lim said in an email, explaining that his recent research “provides a foundation for this argument by highlighting AI’s impact on governance, bias, and decision-making.”
Given that this scenario is a real possibility, experts like Lim say that establishing a bipartisan equality framework must be prioritized. This discussion should extend beyond Congress to all government agencies. Representation and language matter—not just for the current generation but for those to follow. The goal, in essence, should be to help instill faith in technological advancements without fear of manipulation, marginalization, or political influence that undermines true equality.
“The opacity of AI-driven content and limited regulatory oversight makes it feasible for governments and tech companies to coordinate in influencing societal norms over time,” Lim told The Debrief. “However, ensuring transparency, accountability, and ethical AI governance could act as safeguards against such risks.”
The framework Lim proposes in his recent article focuses on mitigating the risks of discrimination and bias, fostering public trust, and promoting innovation. Embedding equality policies throughout the AI lifecycle, Lim says, will help minimize AI biases and promote justice for all, with a focus on groups marginalized both past and present.
“Being socially responsible with AI means developing, deploying, and using AI technologies in ethical, transparent, and beneficial ways,” Lim recently said in a Q&A featured on Penn State’s website. “This ensures that AI systems respect human rights, uphold fairness, and do not perpetuate biases or discrimination.”
“This responsibility extends to accountability, privacy protection, inclusivity, and environmental considerations,” Lim added.
“It’s important because AI has a significant impact on individuals and communities. By prioritizing social responsibility, we can mitigate risks such as discrimination, biases, and privacy invasions, build public trust, and ensure that AI technologies can contribute positively to the world,” Lim said in the online Q&A. “By incorporating social responsibility into AI governance, we can foster innovation while protecting the rights and interests of all stakeholders.”
Lim argues that AI equality is not only a North American issue but a global one. As a primary example, he points to the Framework Convention on Artificial Intelligence, signed by the U.S. and the EU, which focuses on human rights, democracy, and oversight of high-risk AI applications.
“This AI treaty was a major milestone in establishing a global framework to ensure that AI systems respect human rights, democracy, and the rule of law,” Lim said. “The treaty specifies a risk-based approach, requiring more oversight of high-risk AI applications in sensitive sectors such as health care and criminal justice. The treaty also details how different areas — specifically the U.S., the EU, China, and Singapore — have different approaches to AI governance.”
Primarily, Lim advocates for proactive AI governance, emphasizing transparency, equity, and tailored regulation to align AI systems with societal values and the rule of law. His “equity by design” framework integrates justice, equity, and inclusivity throughout AI’s lifecycle rather than focusing solely on post-implementation protections. It also calls for equity audits, checks that AI systems must pass before deployment to address potential racial, gender, and geographical biases.
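Lim’s article doesn’t prescribe a specific implementation for these audits, but a minimal sketch of one common pre-deployment check, comparing a model’s rate of favorable decisions across demographic groups, might look like the following. The data, group labels, and tolerance threshold here are entirely hypothetical.

```python
# Illustrative sketch of one check an equity audit might run before
# deployment: comparing a model's positive-decision rates across
# demographic groups (sometimes called the demographic parity gap).
# All inputs below are hypothetical, not drawn from Lim's article.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return per-group positive rates and the largest gap between groups.

    predictions: iterable of 0/1 model outputs (e.g., loan approvals)
    groups: iterable of demographic labels aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical audit run over two groups of applicants.
preds  = [1, 1, 1, 1, 0, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, gap = demographic_parity_gap(preds, groups)
print(rates, gap)  # {'A': 0.8, 'B': 0.4} 0.4

if gap > 0.2:  # the tolerance is a policy choice, not a technical constant
    print("Audit flag: review model for disparate impact before deployment.")
```

A real audit of the kind Lim describes would go well beyond a single metric, but the sketch captures the basic idea: the system is measured against equity criteria before it ships, not after harm has occurred.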
As for potential solutions, Lim also says that increased diversity in hiring for AI development roles, along with third-party audits, would help mitigate unconscious biases and improve outcomes.
Fundamentally, Lim stresses that while policy changes take time, marginalized communities and advocacy groups must have a voice in shaping AI governance to ensure technology advances human rights for all.
“People that pick the data may be biased, and that may entrench inequalities, whether the bias manifests itself through racial bias, gender bias or geographical bias,” Lim said during the recent Penn State Q&A.
“A solution,” Lim added, “could be hiring a wide group of people with awareness of different biases and who can call out unconscious biases or having third parties look at how systems are implemented and provide feedback to improve outcomes.”
Lim’s recent article, “Determinants of Socially Responsible AI Governance,” was published in the Duke Law & Technology Review and can be read online.
Chrissy Newton is a PR professional and founder of VOCAB Communications. She hosts the Rebelliously Curious podcast, which can be found on The Debrief’s YouTube Channel. Follow her on X: @ChrissyNewton and at chrissynewton.com.