Here is another thread related to what I wrote about yesterday. Even if text-generation AI is primarily used to create boilerplate text, it can still be socially disruptive. There are a number of categories of writing — letters of recommendation, job-interview follow-ups, etc. — where text, even if (or maybe because) it is largely formulaic, serves as an important social signal.

Kevin Munger, writing for Real Life magazine:

I can send a multi-line email thanking a friend for their birthday message, and neither of us can be sure to what degree the other was actually involved in the process. Such messages — despite potentially being identical to emails sent a decade ago — have now become more difficult to parse, not in terms of their information but their intention. Nothing can be reliably inferred from the fact that my birthday was remembered or that I remembered to say thanks; no conclusions can be drawn from how timely the messages are.

Take letters of recommendation: the best letters are personal and heartfelt, but it has also become essential that they be long — vague claims about how great the student is are easy to come by, but a lengthy letter that contains specific details is a costly signal that the professor spent time and energy creating the letter. With GPT-3, however, it may become trivial for professors to plug in a CV and some details and end up with a lengthy, personalized letter. In the long run, this would undermine the value of such letters to credibly communicate effort and sincerity on the part of the recommender.