ChatGPT may be polite, but it’s not cooperating with you
Big tech companies have exploited human language for AI gain. Now they want us to see their products as trustworthy collaborators
After publishing my third book in early April, I kept encountering headlines that made me feel like the protagonist of some Black Mirror episode. "Vauhini Vara consulted ChatGPT to help craft her new book Searches," one of them read. "To tell her own story, this acclaimed novelist turned to ChatGPT," said another. "Vauhini Vara examines selfhood with assistance from ChatGPT," went a third.
The publications describing Searches this way were reputable and fact-based. But their descriptions of my book, and of ChatGPT's role in it, didn't match my own reading. It was true that I had put my ChatGPT conversations in the book, but my goal had been critique, not collaboration. In interviews and public events, I had repeatedly cautioned against using large language models such as the ones behind ChatGPT for help with self-expression. Had these headline writers misunderstood what I'd written? Had I?