The debate on artificial intelligence reached an inflection point on March 22, when an open letter signed by more than 2,800 people was released calling for a six-month pause on “the training of AI systems more powerful than GPT-4.” Signers included a mix of business, technology and political leaders, such as Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and Forward Party co-chair Andrew Yang.
Artificial intelligence has become a hot topic as modern computers and machines grow capable of quickly learning patterns, enabling them to write accurate text, convincingly manipulate photos and videos and create breathtaking art. Given the rate of technological advancement, with each year bringing faster and more robust computing, many feel that our society is poised for fundamental change.
The letter recommends that powerful AI systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” It advocates for oversight of artificial intelligence that would ensure “well justified” confidence that such risks can be managed. According to signatories, these risks include job automation, misinformation bots, the obsolescence of human intelligence and a potential loss of control over civilization.
Although concerns abound, artificial intelligence is being used in Alaska. UAA Computer Science professor Shawn Butler has employed machine learning to combat the spread of COVID-19 misinformation online. According to a media release on UAA’s website, Butler’s research has resulted in a program that identifies such misinformation with 80% accuracy – a rate that Butler expects will rise as the model is given more data to learn from.
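For readers curious what a misinformation-detection program of this kind looks like in practice, the sketch below shows a generic supervised text classifier trained on labeled examples and scored on held-out data. It is purely illustrative: the example sentences, features and model choice are placeholders, not details of Butler’s actual system.

```python
# Illustrative sketch only: a generic supervised text classifier of the kind
# described above, not Butler's actual research system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = misinformation, 0 = not misinformation.
texts = [
    "Vaccines contain microchips that track you",
    "Masks cause oxygen deprivation in healthy adults",
    "Drinking bleach cures the virus",
    "The virus was invented to control the population",
    "Clinical trials showed the vaccine reduces severe illness",
    "Health officials recommend washing hands frequently",
    "Hospitals report rising case counts this week",
    "The CDC updated its guidance on booster doses",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Hold out a portion of the data to estimate accuracy on unseen text.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.25, random_state=0
)

# TF-IDF word features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Accuracy on the held-out examples; with more labeled data, the estimate
# becomes more reliable and the classifier typically improves.
print("accuracy:", accuracy_score(test_labels, model.predict(test_texts)))
```

The pattern mirrors the claim in the paragraph above: accuracy is measured on examples the model has not seen, and adding more labeled training data is the usual route to pushing that number higher.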
Some hope that artificial intelligence, if used constructively, might lessen the political polarization fueled by misinformation. A 2020 Stanford University research paper titled “Designing AI for All: Addressing Polarization and Political Anger” notes that AI-powered language processing models might someday “impartially highlight commonalities across articles [by different news outlets]” and “clarify” issues for people, regardless of where they fall on the political spectrum.
The Stanford researchers cited a nationwide survey of 10,000 adults conducted by the nonpartisan Public Religion Research Institute, which found that 91% of Americans believe the country is divided over politics. The Stanford study concludes that artificial intelligence could “increase tolerance of polarization” and reduce the dangers posed by a polarized society.
Some educators are concerned about the potential for students to use artificial intelligence to cheat on assignments. ChatGPT, a language model produced by OpenAI, can be easily accessed by students and used to write anything from short sentences to multi-paragraph essays. The Anchorage School Board updated its academic honesty policy on Feb. 7 to include language that prohibits students from claiming “products generated by Artificial Intelligence as their own.” The move is just one of many across the nation intended to curb artificial intelligence-driven plagiarism.
Whether artificial intelligence will ultimately transform society remains to be seen, but even in its current, limited state it has already created small-scale change and forced a debate that is unlikely to settle anytime soon.