The Northern Light conducted a survey across UAA’s main campus, asking respondents for their opinions on a variety of AI-related issues. The results were mixed, with some students feeling very positive toward AI, some feeling very negative and many falling somewhere in between.
The survey allowed respondents to remain anonymous, though they had the option to provide their name and contact information if they chose.
The Northern Light received 23 responses to the survey. All respondents identified as students, though one identified as both student and staff. This is not meant to be a scientific survey or represent the feelings of the entire student body of UAA.
Students represented a wide variety of degree programs including English, anthropology, computer science, mechanical engineering, marketing and history, to name a few.
More than half of all respondents reported having used AI programs before.
Excluding “other” responses, 69.6% of students reported using generative AI chatbots like ChatGPT or Google Bard, 52.2% reported using a generative AI image generator like DALL-E and 56.5% reported using a built-in generative AI feature in an existing product, such as Google Generative Search or Microsoft Copilot.
When it comes to how often students use AI, 27% of students said they do not use generative AI products or services, while just as many said they use them at least once a week.
23% reported using AI on rare occasions, 9% said they use it only once a month, and another 9% said they use it daily.
54.5% of students said that their instructor, employer, or organization has a policy around the use of generative AI. 36.4% were unsure. Only 9.1% of students said there was no policy in place.
While some professors ban AI altogether, students reported that others allow it as a starting point or as a way to generate ideas. One student wrote that in one class it is acceptable to use AI to proofread and help with research, but the student is responsible for any inaccuracies the AI might introduce.
Some professors do allow AI more widely in the classroom. One student wrote, “There are set places in assignments to use GPT and guides for how to use it to be a better and more efficient student.”
Three students self-reported that they had used AI on an assignment without their instructor’s knowledge or approval, while 18 students self-reported that they had not.
Many students reported using generative AI to help with education-related tasks. The most popular use for generative AI tools was brainstorming ideas, followed by looking up information and studying.
When it comes to how students feel about the use of AI in education, the results were mixed. 36% of students reported mixed feelings toward AI, both positive and negative. 23% of students reported feeling very negative and concerned. Only 9% – or two students – reported feeling very optimistic about AI in education.
“It does take away the satisfaction of completing an assignment completely on your own. You may not learn as much while using AI to do the assignment. It could be more beneficial as a study tool [than] anything else,” one student wrote.
Another student wrote that it was ironic that universities often have strong anti-AI policies, while they may also use AI-based services such as plagiarism detection software.
“This is a problem the university quite literally helped create,” wrote the student. “And now they act like education is under attack and they are just so vulnerable to these large language models.”
When it comes to AI use outside of education, students were most concerned about the use of AI in politics, with 60.9% reporting feeling very negative and concerned.
Students were also concerned about AI-generated online and social media content, with a majority feeling either very negative and concerned or a little bit negative.
59.1% of respondents also reported feeling very concerned about AI misinformation being spread online.
One student wrote, “One thing that people forget is that AI text generators, when asked certain questions, can produce extremely incorrect answers with strong confidence. This can mislead the user if the user does not exercise caution.”
65.2% of students reported being very concerned about photorealistic deepfakes – AI-manipulated videos that realistically impersonate another person.
“I think [AI] is very impressive and demonstrates the potential for great things, but the costs, such as the potential for accurate deepfakes, outweigh these benefits,” wrote one student.
More than 60% of respondents reported being either “very concerned” or “concerned” about bias in AI, while slightly more than half reported being either “very concerned” or “concerned” about copyright issues with AI.
“I think it can be used in helpful and useful ways, but there are aspects of it that could be detrimental to society, such as AI-generated art stealing from the work of human artists,” wrote one student.
While some respondents said that AI is a “net positive” and others said it’s “garbage,” many students acknowledged both the possible benefits and disadvantages in their responses.
One student wrote, “It could offer immediate help and explanation when there is no one else to ask. On the other hand, connecting with other people is a necessity.”
“It is very helpful for solving equations, but it is not great at morals,” wrote another.
The Northern Light is continuing to report on other AI-related stories, all of which can be found online.