Write like a good person with Goodwrite, the new writing app that uses deep learning AI to make your writing smarter, safer and fairer. Learn more.


What is Goodwrite, really?

Goodwrite is a satirical web writing app that tries to draw attention to the intellectual questions and problems that are raised by AI-powered writing correction.

Going beyond spell-checking, AI-powered suggestions now target word choice and phrasing itself, as notoriously seen in Google Docs in April 2022 with its new “inclusive writing” feature.

Goodwrite is a satirical attempt to demonstrate that such suggestions could be problematic for a number of reasons, which are touched upon below.

What do you have against AI-powered “inclusive warnings”?

Having an instinct for more inclusive writing is by and large a good thing. What is worrisome is the attempt to enforce inclusivity via software that polices personal expression. Quoting Vice's coverage of the Google Docs feature:

“Thinking and writing outside of binary terms like ‘mother’ and ‘father’ can be useful, but some people are mothers, and the person writing about them should know that. […] Trying to shoehorn self-awareness, sensitivity, and careful editing into people's writing using machine learning algorithms—already deeply flawed, frequently unintelligent pieces of technology—is misguided.”

This raises a number of questions.

Goodwrite tries to show that while spell-checking and grammar correction are obviously useful, AI-powered correction of personal expression, especially in software for personal writing, is, however well-intentioned, a step too far, and could one day harden into an intrusive, somewhat totalitarian norm in software.

What's wrong with being more inclusive? Are you against inclusive efforts more generally?

There is absolutely nothing wrong with striving for more inclusive norms and language. Many parts of the world today suffer deeply from inequality and a lack of inclusion. We have nothing against inclusion efforts in general, or even against a certain level of “politically correct” writing or speech, especially in public or professional contexts where it can be vital.

What we are criticizing here is limited to attempts to enforce that inclusion through software, which people could be using for an unknowable range of purposes, including, for example, their private personal diaries. Should an algorithm steer how people express their thoughts, especially in private writing, and is that really morally justifiable? For the reasons above, we believe that it's intrusive, insincere, potentially dangerous, and unlikely to address the root societal causes of exclusion and inequality.

What are you suggesting be done regarding “inclusive warnings”?

It's simple: writing and publishing software should never implement AI that attempts to control writers and punish them for the algorithmically judged morality of their style of self-expression. Issues of inclusivity and equality should be addressed through social means instead.

I typed an egregiously offensive sentence into Goodwrite, and not a single word was flagged. Why?

That's part of the satire. Google Docs suffered the same problem in its inclusive warnings feature. Goodwrite intentionally flags offensive (as well as not-really-offensive) words while leaving truly egregious vocabulary untouched, and it intentionally issues warnings that insinuate things about the writer's moral character.

Who made this?

Goodwrite Inc. doesn't exist. Goodwrite.app was created in roughly five hours by Nadim Kobeissi, whom you can follow on Twitter.