New AI tools are being introduced as study tools for students. Do they do more harm than good?


Once upon a time, educators worried about the dangers of CliffsNotes, study guides that presented great works of literature in a series of bullet points that many students used as a substitute for reading.

Today, that concern seems quaint.

Suddenly, new consumer AI tools have hit the market that can take any piece of text, audio, or video and deliver the same kind of simplified summary. And these summaries are no longer just pleasant bullet points: students can now have tools like Google’s NotebookLM turn their course notes into a podcast, where sunny-sounding AI voices joke and riff on the key points. Most of these tools are free and do their job in seconds with a single click.

Naturally, all of this has some educators worried, as they watch students offload the hard work of synthesizing information to AI at a rate never before possible.

But the overall picture is more complex, especially as these tools become more common and their use becomes the norm in business and other settings beyond the classroom.

And these tools provide a particular lifeline for neurodivergent students, who suddenly have access to services that can help them get organized and support their reading comprehension, education experts say.

“There is no one-size-fits-all answer,” says Alexis Peirce Caudell, a lecturer in computer science at Indiana University in Bloomington, who recently ran an assignment in which many students shared their experiences with, and concerns about, AI tools. “Biology students are going to use it one way, chemistry students are going to use it another way. My students all use it in different ways.”

It’s not as simple as assuming that all students are cheaters, the instructor points out.

“Some students were concerned about the pressure to use the tools: if all their peers were doing it, they felt they had to, even if they thought it was detrimental to their authentic learning,” she says. Students ask themselves questions like, “Does this help me get through this specific assignment or this specific test, because I’m trying to get through five courses and internship applications?” – but at the expense of learning.

All of this adds new challenges for schools and colleges as they try to set limits and policies for the use of AI in their classrooms.

Need for “friction”

It seems like almost every week, if not every day, tech companies announce new features that students are adopting in their studies.

For example, last week Apple released Apple Intelligence features for iPhones, one of which can rewrite any piece of text in different tones, such as casual or professional. And last month, OpenAI, the maker of ChatGPT, released a feature called Canvas that includes slider bars letting users instantly change the reading level of a text.

Marc Watkins, a professor of writing and rhetoric at the University of Mississippi, worries that students are lured by the time-saving promises of these tools and may not realize that using them can mean skipping the actual work needed to internalize and remember the material.

“From a teaching and learning perspective, it’s quite concerning to me,” he said. “Because we want our students to have a little difficulty, to have a little friction, because that’s important for their learning.”

And he says the new features make it harder for teachers to encourage students to use AI in helpful ways – like teaching them to craft prompts that change the writing level of a text: “It takes away that last desirable level of difficulty, when they can just press a button and get a final version, and also get feedback on that final version.”

Even professors and colleges that have adopted AI policies may need to rethink them in light of these new types of capabilities.

As two professors put it in a recent opinion article, “Your AI policy is already outdated.”

“A student who reads an article you uploaded, but can’t remember a key point, uses an AI assistant to summarize it or remind them where they read it. Has that person used AI when there was a ban in the class?” ask the authors, Zach Justus, director of faculty development at California State University, Chico, and Nik Janos, a professor of sociology. They note that popular tools like Adobe Acrobat now have “AI assistant” features that can summarize documents at the press of a button. “Even when we are evaluating our colleagues in tenure and promotion cases,” the professors write, “do you have to promise not to press the button when you are reading through hundreds of pages of student evaluations of teaching?”

Rather than endlessly drafting and redrafting AI policies, the professors argue, educators should develop broad frameworks for what constitutes acceptable help from chatbots.

But Watkins is calling on the makers of AI tools to do more to mitigate the misuse of their systems in academic settings, or, as he put it when EdSurge spoke with him, to “ensure that this tool that is so widely used by students (is) actually effective for their learning and not just a tool to offload it.”

Uneven accuracy

These new AI tools raise a host of new challenges beyond those that were in play when printed CliffsNotes were the study tool du jour.

The first is that AI summarization tools don’t always provide accurate information, thanks to a phenomenon of large language models known as “hallucination,” in which chatbots guess at facts but present them to users as if they were certain.

When Bonni Stachowiak first tried the podcast feature on Google’s NotebookLM, for example, she says she was blown away by how realistic the robot voices sounded and how well they seemed to summarize the documents she had fed the tool. Stachowiak is the host of the long-running podcast Teaching in Higher Ed and dean of teaching and learning at Vanguard University of Southern California, and she regularly experiments with new AI tools in her teaching.

But as she kept trying the tool, uploading documents on complex topics she knew well, she noticed occasional errors or misunderstandings. “It just flattens it – it misses all of the nuance,” she says. “It feels so intimate, because it’s a voice and audio is such an intimate medium. But as soon as it’s something you know a lot about, it’s going to fall flat.”

Even so, she says she has found NotebookLM’s podcasting feature useful for understanding and communicating bureaucratic issues at her university, such as turning part of the faculty handbook into a podcast summary. When she checked it with colleagues who were familiar with the policies, she says they thought the result was “perfectly good.” “It’s very effective at making two-dimensional bureaucracy more accessible,” she says.

Indiana University’s Peirce Caudell says her students have also raised ethical issues surrounding the use of AI tools.

“Some say they are really concerned about the environmental costs of generative AI and its use,” she says, noting that ChatGPT and other AI models require large amounts of computing power and electricity.

Others, she adds, worry about the amount of data users end up providing to AI companies, especially when students use free versions of the tools.

“We’re not having those conversations,” she says. “We’re not having discussions about what it means to actively resist the use of generative AI.”

Even so, the instructor sees positive impacts for students, such as when they use a tool to help them create flashcards for studying.

And she heard about a student with ADHD who had always found reading long texts “overwhelming,” but who used ChatGPT “to get over the hurdle of that initial engagement with the reading, and then would check their understanding with the use of ChatGPT.”

And Stachowiak says she has heard of other AI tools that students with intellectual disabilities are using, such as one that helps users break large tasks down into smaller, more manageable subtasks.

“It’s not cheating,” she emphasizes. “It’s breaking things down and estimating how long something is going to take. That is not something that comes naturally to a lot of people.”
