ravi kumar/ Flickr

What Happened

AI has made headlines recently as a potentially world-altering technology. While many opinions are floating around about the technology's benefits and risks, Republicans and Democrats agree on some key points about AI, such as its impact on people's personal lives and decision-making.

Arguably the most popular AI tool, ChatGPT, is a natural language processing tool released by OpenAI in November 2022. Unlike OpenAI's DALL-E or Google's MusicLM, which generate images and music, ChatGPT is built on an LLM, or "Large Language Model," meaning it is designed to mimic human intelligence with text-based input and output. ChatGPT quickly made headlines worldwide and started widespread conversations about artificial intelligence's place in society. Previous concerns about job displacement, suppression of speech, and the perpetuation of systemic racial issues due to AI reemerged in public discourse. As usual, political polarization has a part to play, so let's dive into the areas where we can find common ground.

Where Republicans and Democrats Agree

In late June, a new bipartisan companion bill was introduced in Congress, led by Rep. Ted Lieu (D-CA), along with Reps. Ken Buck (R-CO) and Anna Eshoo (D-CA) and Sen. Brian Schatz (D-HI), to create a commission to study developments in artificial intelligence and identify areas where regulation and government oversight could be implemented. The National AI Commission Act would establish a 20-member commission of people with experience in computer science, industry, civil society, and other fields, with 10 chosen by Democrats and 10 by Republicans. Over a two-year period, the commission would produce three reports on federal capabilities to oversee and regulate AI and suggest new approaches and frameworks for governing the technology.

Also introduced in June of this year was the No Section 230 Immunity for AI Act, sponsored by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which aims to remove Section 230 protections for generative AI, including "deepfakes," in civil claims and criminal prosecutions. The bill would also empower Americans who have been harmed by generative AI to sue AI companies in federal or state court.

Senate Majority Leader Chuck Schumer (D-NY) met with a bipartisan group of senators to work on a new framework through which the federal government can regulate AI. The group consisted of Sen. Martin Heinrich (D-NM) and Republican Sens. Mike Rounds (R-SD) and Todd Young (R-IN). The group agreed that a bipartisan plan has to be enacted as soon as possible. "Congress must move quickly. Many AI experts have pointed out that the government must have a role in how this technology enters our lives. Even leaders of the industry say they welcome regulation," said Schumer. This came after multiple hearings in May, including a Senate Judiciary Committee hearing with OpenAI CEO Sam Altman, whose company is behind ChatGPT.

On July 13, 2023, the FTC opened an investigation into OpenAI on claims that the Microsoft-backed startup has mismanaged its data collection through ChatGPT, putting its consumers at risk. The FTC also claims that ChatGPT has been putting out false information about individuals. Lina Khan, Chair of the FTC, commented, "Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market." This investigation is the first U.S. regulatory threat faced by OpenAI.

Many Democratic and Republican voters agree in areas where AI could affect deeply personal aspects of their lives. When Pew Research asked whether U.S. adults were very or somewhat concerned about AI programs knowing people's thoughts and behaviors, 79% of Republicans and 71% of Democrats said they were. When asked about AI making important life decisions for people, 80% of Republicans and 69% of Democrats were somewhat or very concerned.

Where They Disagree

The prospect of AI having such profound effects on our society has led to a fair amount of disagreement between Democrats and Republicans. According to Pew Research data from 2022, people disagree on how the government should regulate specific AI applications. For example, when asked about government regulation of driverless passenger vehicles, 29% of Democrats said the government would go too far and 69% said it would not go far enough, compared to 59% of Republicans saying the government would go too far and 39% saying it would not go far enough. When asked about government regulation of facial recognition used by law enforcement, the split was similar: 40% of Republicans and 62% of Democrats answered that the government would not go far enough, while 59% of Republicans and 36% of Democrats answered that it would go too far.

Voices on the right are concerned about AI's effect on the job market, with trucking being one area. Trucking is currently the most common job for people in the workforce with a high school education: 3.5 million people are employed as truckers in the United States, with 7.95 million total working in the transportation field in some way. With full automation, the U.S. for-hire trucking industry could save between $85 billion and $125 billion a year. Another area of concern for the right is the potential for AI to be used by tech companies (which tend to lean left) to suppress free speech, as ChatGPT has been accused of having political bias.

Voices on the left are worried that racial disparities and inequities could be exacerbated by AI; one example is marginalized groups being denied loans because of AI screening systems that rely on court records. These biases within AI systems have led the ACLU to call on the Biden administration "to take concrete steps to bring civil rights and equity to the forefront of its AI and technology policies, and to actively work to address the systemic harms of these technologies."

A Warning from Experts

With such a rapid pace of development in AI technology, it's easy to think that we might be diving into this society-altering technology a bit too fast. This concern is exactly what sparked an open letter from the Future of Life Institute calling for a six-month pause on the development of any AI systems more powerful than GPT-4, the latest model behind ChatGPT. The letter garnered signatures from some big names in the tech space, such as Tesla CEO Elon Musk, 2020 presidential candidate Andrew Yang, and Tristan Harris, Executive Director of the Center for Humane Technology and star of the 2020 Netflix documentary "The Social Dilemma."

All of these new AI developments have started new debates. How will this rapid growth in technology change our society? What should the government’s role be in regulating this technology that could make or break a brighter future? What responsibilities need to be placed on the companies developing AI? 

Why It Matters

The advent of AI marks an important moment in our history. This technology could lead us into an unimaginably bright future, with present applications leading to increased farming yields, improved early disease detection, and more efficient energy grids. It could also be the end of civilization as we know it if we allow bad actors to use this technology to divide us and manipulate our society for their personal gain. We will need to have our wits about us, and our elected officials will need to be able to work together and act in the interests of the American people. We have an opportunity to find common ground and build a solid foundation of understanding towards AI and each other to propel the United States and the rest of the world into a better tomorrow. 


Carsen Brunn is the Bridging and Content Intern at AllSides. He has a Center Bias.

Reviewed by Clare Ashcraft (Center), Bridging and Bias Assistant, Julie Mastrine (Lean Right), Director of Marketing and Media Bias Ratings, and Joseph Ratliff (Lean Left), Daily News Editor.