Now AI can write students’ essays for them, will everyone become a cheater? | Rob Reich

Parents and teachers around the world are welcoming students back to the classroom. But unbeknownst to them, an unexpected and insidious academic threat is on the scene: a revolution in artificial intelligence has created powerful new automatic writing tools. These are machines optimized for cheating on school and university papers, a potential siren song for students that is difficult, if not downright impossible, to catch.

Of course, cheaters have always existed, and there is an eternal and familiar cat-and-mouse dynamic between students and teachers. But where the cheater once had to pay someone to write an essay for them, or download an essay from the web that was easily detectable by anti-plagiarism software, new AI language-generation technologies make it easy to produce high-quality essays.

The breakthrough technology is a new kind of machine learning system called a large language model. Give the model a prompt, hit enter, and you get back full paragraphs of unique text. These models are capable of producing all kinds of output – essays, blog posts, poetry, op-eds, lyrics, and even computer code.

Initially developed by artificial intelligence researchers just a few years ago, these models have been treated with caution and concern. OpenAI, the first company to develop such models, restricted their external use and did not release the source code of its most recent model out of fear of possible abuse. OpenAI now has a comprehensive policy focused on permitted uses and content moderation.

But as the race to commercialize the technology has kicked off, those responsible precautions have not been adopted across the industry. In the past six months, easy-to-use commercial versions of these powerful AI tools have proliferated, many of them with minimal limits or restrictions.

One company’s stated mission is to use cutting-edge artificial intelligence technology to make writing painless. Another released a smartphone app with a sample prompt for a frowning high school student: “Write an article on the themes of Macbeth.” We won’t name any of these companies here – no need to make it easier for cheaters – but they are easy to find and often cost nothing to use, at least for now. For a high school student, a well-written, unique English essay on Hamlet or a short argument on the causes of World War I is just a few clicks away.

While it’s important for parents and teachers to know about these new tools for cheating, there is not much they can do about them. It is nearly impossible to prevent children from accessing these new technologies, and schools will be outmatched when it comes to detecting their use. Nor is this a problem that lends itself to government regulation. While the government is already intervening (albeit slowly) to address the potential misuse of AI in various domains – for example, in hiring or facial recognition – there is far less understanding of language models and how their potential harms can be addressed.

“A unique, well-written English essay on Hamlet is just a few clicks away.” Photo: Max Nash/AP

In this situation, the solution is to get technology companies and the AI developer community to embrace an ethic of responsibility. Unlike in law or medicine, there are no widely accepted standards in technology for what counts as responsible behavior. There are few legal requirements for beneficial uses of technology. In law and medicine, standards are the product of deliberate decisions by practitioners to adopt a form of self-regulation. In this case, that would mean companies establishing a shared framework for the responsible development, deployment, or release of language models to mitigate their harmful effects, especially in the hands of adversarial users.

What could companies do to promote socially beneficial uses and deter or prevent overtly negative uses, such as using a text generator to cheat in school?

There are a number of obvious possibilities. One would be for all text generated by commercially available language models to be placed in an independent repository that allows for plagiarism detection. A second would be age restrictions and age-verification systems to make clear that students should not access the software. Finally, and more ambitiously, leading AI developers could establish an independent review board that authorizes whether and how to release language models, prioritizing access for independent researchers who can help assess risks and suggest mitigation strategies, rather than speeding toward commercialization.

After all, because language models can be adapted to so many downstream applications, no single company can foresee all of their potential risks (or benefits) on its own. Years ago, software companies realized the need to thoroughly test their products for technical problems before release – a process now known in the industry as quality assurance. It is high time that tech companies realized their products need to go through a social assurance process before they hit the market, in order to anticipate and mitigate the societal problems that may result.

In an environment where technology outpaces democracy, we need to develop an ethic of responsibility at the technological frontier. Powerful tech companies cannot treat the ethical and social implications of their products as an afterthought. If they simply rush to occupy the market and apologize later if necessary – a story we have become all too familiar with in recent years – society pays the price for others’ lack of foresight.

  • Rob Reich is a professor of political science at Stanford University. His colleagues, Mehran Sahami and Jeremy Weinstein, co-authored this article. Together they are the authors of System Error: Where Big Tech Went Wrong and How We Can Reboot
