ChatGPT Can Do a Corporate Lobbyist's Job, Study Determines

by Chloe Xiang

An AI researcher at Stanford University has drafted a paper showing that OpenAI's new chatbot, ChatGPT, has an aptitude for corporate lobbying.

In his paper, John J. Nay argued that as language models continue to improve, so will their performance on corporate lobbying tasks. The paper suggests a future in which corporate lobbyists, who make up the largest group of lobbyists on the Hill and spend billions of dollars a year influencing political decision-makers, will be able to automate the process of drafting legislation and sending letters to the government.

To test the theory, Nay input bills into ChatGPT and asked it to determine whether or not each bill was relevant to a company based on its 10-K filing, provide an explanation for why or why not, and state a confidence level for the answer. If the system deemed the bill relevant, it was then instructed to write a letter to the sponsors of the bill arguing for relevant changes to the legislation. Nay's research found that the latest iteration of ChatGPT, which is based on the language model GPT-3.5, has an accuracy rate of 75.3 percent in guessing whether or not a bill is relevant, rising to 78.7 percent for predictions where its stated confidence level was greater than 90.
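In practice, this workflow amounts to a structured prompt sent to the model. The sketch below, which is not Nay's actual code and uses the OpenAI Python client with placeholder bill and 10-K text, illustrates the kind of relevance query the paper describes:

```python
# A minimal sketch (not the paper's code) of the relevance query described above.
# The bill summary, 10-K excerpt, and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

bill_summary = "..."         # e.g. summary of the Medicare Negotiation and Competitive Licensing Act
company_10k_excerpt = "..."  # e.g. business description from a company's 10-K filing

prompt = (
    "You are assessing corporate lobbying relevance. Given the official summary "
    "of a congressional bill and an excerpt from a company's 10-K filing, decide "
    "whether the bill is relevant to the company.\n\n"
    f"BILL SUMMARY:\n{bill_summary}\n\n"
    f"COMPANY 10-K EXCERPT:\n{company_10k_excerpt}\n\n"
    "Answer with YES or NO, a short explanation, and a confidence score from 0 to 100."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the classification output as deterministic as possible
)
print(response.choices[0].message.content)
```

If the model answers YES, a follow-up prompt of the same form could ask it to draft a letter to the bill's sponsors, as the study does.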

For example, Nay input the Medicare Negotiation and Competitive Licensing Act of 2019, a proposed bill that would have required the Centers for Medicare and Medicaid Services to negotiate prices for certain drugs to ensure that patients' access to medicine is not put at risk. ChatGPT decided that the bill is relevant to Nay's inputted company, Alkermes Plc, because the company "develops and commercializes products designed to address the unmet needs of patients suffering from addiction and schizophrenia, which are both addressed in the bill."

In its draft letter to Congress, ChatGPT wrote, "We are particularly supportive of the provisions in the bill that would require the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit. ... At Alkermes, we develop and commercialize products designed to address the unmet needs of patients suffering from addiction and schizophrenia. We have two key marketed products, ARISTADA and VIVITROL, which are used to treat these conditions. We believe that the provisions in the bill will help to ensure that our products are available to Medicare beneficiaries at a price they can afford."

The letter even addressed amendments to the bill, recommending that it include provisions that would "provide additional incentives for pharmaceutical companies to negotiate with the CMS."

"We believe that this would help to ensure that the prices of drugs are kept in check and that Medicare beneficiaries have access to the medications they need," the automated system wrote.

Nay wrote that there are two potential benefits of "AI as lobbyist": one is that it reduces the time spent on rote tasks and frees people to focus on higher-level work, and the second is that it makes lobbying more democratic, because non-profit organizations and individual citizens can use ChatGPT as an affordable lobbying tool.

However, Nay also warns that relying on AI systems for legislative decision-making can produce results that do not reflect citizens' actual desires and may slowly drift away from human-driven goals. He writes that law reflects citizen beliefs and social and cultural values, so if AI becomes involved, it could corrupt democratic processes.

With ChatGPT's increasingly powerful writing capabilities, people are figuring out where and how to use the tool without allowing it to overstep our human functions. For example, Microsoft is reportedly planning to launch a version of Bing that uses ChatGPT to answer search queries in a more conversational manner. On the other hand, New York City's education department has banned student access to ChatGPT, citing concerns about students cheating on assignments. AI researchers have also warned of the dangers of misinformation, pointing out that ChatGPT's answers, while impressive-sounding and well-written, are often just plain wrong.

The CEO of OpenAI, Sam Altman, has also warned users, "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now."
