On the possibilities and dangers of AI-generated fiction
I'm currently working on a piece about the limitations of AI visual art. It seems to me that DALL-E and its peers are much more restricted in their abilities than the AI story generators are, which only means we will have to be more judicious and careful in our consumption of news and information . . .
One can also imagine a bleak future where stories are regulated to stay beneath a quantifiable level of impact, regardless of whether they are human-originated, simply because it's no longer possible to tell the source of a story.
The idea of stories as technologies aligns with Ursula Franklin's notion of technology as "the way things are done around here," or simply, technology as practice. She pinched the idea from Kenneth Boulding, who pinched it from ..... Franklin, U. M. (2004). The Real World of Technology (Kindle ed.). Anansi.
Question: if stories are technology, then what would be the difference between a "story developer/analyst", a "story scientist", and a "story engineer"?
Speculation: 1. Archive of Our Own or FanFiction (copy/derivative), Wikia/Fandom or TVTropes (dissective), deviantArt or WebToons (innovative). 2. I would also predict that this will follow a 70-20-10 distribution. 3. The 10x storytellers (10x the productivity and 25x the rigor between the top 4% and the bottom 4%) will be in demand in the near future.
I like the broad context and implications of this piece. We are absurdly vulnerable, given how our minds really want everything to be part of a story. I ran across some vignettes about malevolent storytelling outcomes at the AI Vignettes Project. That inspired me to write a long-winded, dark story of my own (Artificial Persuasion) wherein persuasive AIs pursue their unbounded goals and we don't even know what hit us. On the other hand, I've been trying to get GPT-3 to write an intro for a friend's photography book, and it's very hard to get anything out of it that is neither irrelevant nor trite. But, as you point out, we haven't (presumably, as far as we know =:-0 ) trained the models for max(persuasion). And somebody will probably do that, either to gain power or just to see what will happen.