This article is, in general, an argument against the idea of using science fiction to think critically about AI.
Instead, I ask whether science fiction is sometimes not only an inadequate context for such critical thinking, but an especially bad one.
I was mainly drawn to the quote below and to the issues it raises: retraining after removing material a system has already ingested during training, the costs of doing so, and the problem of applying human traits to machine processes.
At the time of writing, there is a lively and important discourse around what rights creators should have in relation to the scraping and use of our works for the training of ML models. This discourse tends to demonstrate that the distinction between training data and model is not widely and deeply understood. For example, to definitively remove one short paragraph from GPT-4 would effectively cost hundreds of millions of dollars, insofar as the model would need to be retrained from scratch on the corrected training data.
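A toy sketch can make the point concrete (this is my illustration, not the article's; it uses least-squares regression as a stand-in for a neural network, and the data is random). The fitted weights entangle every training example at once, so there is no per-example record inside the model to delete; exactly removing one example's influence means refitting on everything else, which for an LLM is the full retraining run the quote prices in the hundreds of millions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "corpus": 1000 training examples, 5 features each.
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = X @ true_w + rng.normal(scale=0.1, size=1000)

# "Train": the fitted weights depend on every example jointly.
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# There is no stored copy of example 42 inside w_full to erase.
# To *exactly* remove its influence, refit on the remaining data.
mask = np.ones(1000, dtype=bool)
mask[42] = False
w_retrained, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)

# Small but nonzero everywhere: deleting one example shifts all weights.
print(np.abs(w_full - w_retrained).max())
```

Research on "machine unlearning" explores cheaper approximations, but exact removal still reduces, as the quote says, to retraining on the corrected corpus.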
Appreciation of how texts are (or are not) represented in LLMs could inform a keener appreciation of how the world is (or is not) represented in LLMs, and help us to be aware of, and to manage, our tendency to anthropomorphize.
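One small, hedged illustration of what "represented" means here (my example, using OpenAI's open-source tiktoken tokenizer; the sample sentence is invented): a text reaches the model only as a sequence of integer token IDs, and training smears their effect across billions of shared weights. Nothing resembling a per-document file is ever stored, which is why the verbatim text cannot simply be located and removed.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-class models

text = "A short paragraph a creator might want removed from a model."
ids = enc.encode(text)

print(ids)              # a list of integer token IDs -- all the model ever sees
print(enc.decode(ids))  # round-trips to the original string
```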