Grammarly, ChatGPT and Us: The New Rules of Writing

Image Credit: Pexels

Johnny Lumley evaluates the benefits and risks of people’s reliance on generative AI for writing.

I am sure that everyone reading this who is currently at university has some experience with platforms that use AI to aid a creative process, tailor the tone of a message to its intended audience, or improve the clarity and effectiveness of a piece of writing. These AI-driven writing assistants have significantly transformed the way people write, and the platforms behind them promise ever-improving models. As AI becomes more integrated into our daily lives, it is worth considering exactly what consequences these tools may bring to the process of writing, and how they have changed our relationship with writing thus far. 

One of the primary services these writing assistants provide is dramatically reducing the work involved in the more menial aspects of writing: drafting, brainstorming, researching and summarising information, and generating possible counterarguments. They can be invaluable tools, especially in outlining the shortcomings in the style or content of a piece of text, which can greatly improve the skills of developing writers, particularly given how easy these platforms are to use. Skills that once took years to develop can now be gained far more quickly, which holds huge potential for the next generation of writers and storytellers. 

This sits alongside the opportunities such technology can provide to non-native English speakers and those with disabilities. For those with learning disabilities, for example, these models can be vital in correcting spelling, grammar and punctuation errors, or can serve as a valuable focus tool for summarising key concepts and ideas. For non-native English speakers, these AI assistants can suggest more natural phrasing and adjust a text's tone to sound more academic, professional or personal, depending on the user's intent. Such corrections, along with the justifications provided for them, can help individuals learn English in a more natural, context-driven manner. 

These are all undeniably positive use cases of the technology, promoting inclusivity and equity. Grammarly, one of the most prominent platforms for AI writing assistance, captures this in its mission statement: to allow users to achieve more through effective communication. However, to say that the effect these writing assistants are having is purely positive would not be entirely accurate either. 

I suspect that most people reading this article are guilty of using platforms such as Grammarly or ChatGPT's AI writing assistance to generate, or at the very least heavily aid, a menial task that we have deemed undeserving of our intellectual effort. And while it may be disingenuous, it is hardly a net loss to the history of literary contribution that it is no longer necessary to write a formal, stuffy email or a self-congratulatory CV cover letter feigning interest in both the company and the potential position. This may truly be the biggest contribution this technology has made to the average person's life: removing menial tasks that, while necessary, don't really require creative contribution. However, the danger of such technology may lie in its increasing integration into our day-to-day lives. 

These platforms are only going to become more intuitive and more embedded in everyday use, which may tempt students, academics and authors alike into over-reliance on them during the writing process. This raises the question of ownership: the technology is built on a program's access to massive amounts of data and generates text from that reserve of information. How, then, should we attribute ownership of a piece of work made in conjunction with an AI writing assistant? This is an issue plaguing universities, with many struggling to find effective policies to curb the unethical use of AI in students' work amid concerns about plagiarism. UCD, as an institution, still lacks strict guidelines on what constitutes ethical use of AI. 

It is also important to consider how over-reliance on such technology may affect the creative capabilities of future writers. Leaning on these models for grammar, structure and phrasing can skip a crucial stage in the development of a writer's critical thinking, leaving only a superficial understanding of key concepts and ideas. It also risks the homogenisation of style, as the ways a writer might express themselves and their position are forgone in favour of whatever the AI model deems most efficient. For all the convenience and undeniable value these tools provide, relying on them ignores the fact that in a creative process there is value not just in the final product, but in the process itself. 

We find ourselves at an interesting juncture in our relationship with AI. As demonstrated, the benefit of these writing assistants is evident: they have the potential to open new opportunities for a huge number of people to communicate more effectively through their work, and may make writing as a whole more accessible. However, these same tools also pose a legitimate, existential threat to the integrity of writing and the creative process as a whole. Perhaps the very problems these tools were made to address were fundamental to the process itself, no matter how menial. But regardless of how much we can philosophise about the moral relationship between AI and writing, these tools exist and will only continue to improve and become more prevalent, making it our collective responsibility to foster an environment that encourages the ethical use of this new technology.