American University

The AI Task Force: Navigating AI Regulation

Journal contribution, posted on 2023-09-07, authored by Dan Zacharski

The release of ChatGPT in late November 2022 marked an explosion in the popularity of artificial intelligence. The chatbot displayed the potential of AI to be used by everyday people for a variety of tasks, such as writing emails, brainstorming party themes, and summarizing readings. ChatGPT’s creator, OpenAI, has been working on this technology for nearly eight years. That being said, ChatGPT is far from perfect. The company has tried to make clear that the chatbot has many flaws, including relaying incorrect facts and giving biased or harmful responses [1]. As this novel technology continues to develop, many concerned researchers, lawmakers, and netizens advocate for regulation. Considering the worst-case scenarios, these calls to limit AI research have gained support.

Recently in Congress, Senator Michael Bennet of Colorado introduced “A bill to establish a task force on organizational structure for artificial intelligence governance and oversight.” Potentially consisting of cabinet members and officials from a handful of executive agencies, the group would work to classify and combat the dangers of AI’s shortcomings [2]. Given the novelty of the technology, this task force would have no shortage of concepts to examine while learning about AI.

In March, over 1,000 tech leaders and researchers signed an open letter warning about the shortcomings and threats posed by rapid AI development. The letter calls for work on AI to halt while academics and lawmakers gain a better understanding of how the technology works. Current fears include threats to national security, job displacement, and biased language models [3]. There are also concerns about unexpected threats that could stem from quickly improving AI capabilities. Since ChatGPT’s release, the chatbot has easily spread disinformation, stating potentially misleading facts with unwarranted confidence.

Last week, the White House invited leaders from the AI industry to a series of meetings about the future of artificial intelligence. Officials from Anthropic, Microsoft, Google, and OpenAI met with Vice President Kamala Harris to discuss the “ethical, moral, and legal responsibility…” that private companies owe to the American people when working on AI [4]. President Biden has also said that CEOs and the private sector should work to ensure their products are safe for the public, while companies have looked to Washington for guidance and regulation.

While these worries are well-founded, some argue that halting research could be just as dangerous as unregulated AI progress. Sal Khan, a popular educator, stated in his recent TED Talk that ceasing experimentation would not deter bad actors from testing harmful applications of the technology [5]. As AI innovation continues, the duty of regulation should be shared by the government and private companies. Without input from both entities, the possibility of substandard laws and procedures greatly increases.


Publisher

American University (Washington, D.C.); Juris Mentem Law Review

Notes

This article is brought to you for free and open access by the Juris Mentem Law Review. This article has been accepted for inclusion in the Juris Mentem Digital Collection. The Digital Collection is edited by Juris Mentem Staff but is not peer-reviewed by university faculty. For more information, visit https://www.american.edu/spa/jlc/juris-mentem.cfm. Questions can be directed to jurismentem@american.edu.

Journal

Juris Mentem Law Review
