21 Comments
Apr 13, 2023 · Liked by Roger’s Bacon

This is one of the most interesting commentaries I've seen on the subject. And I've read many. My largest concern has been how so many people seem to give LLM AIs credit and expectations for far more than they are capable of. They revere and fear them as if they were human and don't seem able to comprehend the differences.

As you've pointed out, the differences are marked and important ones.

I have far less fear of what the AIs will do than of what humans will do - in general, and in relation to the AIs' abilities and potential consequences.

We shall see... As in all of life, we occupy a ringside seat to watch the great unknown of what's to come. This will be an interesting show.

author

Indeed it will be... well said :)

Apr 13, 2023 · Liked by Roger’s Bacon

Curiosity will likely kill this know-it-all Cat 🐈.

Unless we can fashion an Optimus Prime-type AGI to save our hides.

And if they've read all of human history, our glorious and gory rise to the top as alpha predators, then AGIs would have to be really stupid to spare us the rod.

To me it's the first 30 minutes of the movie, where well-dressed scientists give detailed lectures and safari tours of Jurassic Park while the egg hatches in front of clueless and amused guests who'll be doing a hell of a lot of running after the interval.


You know ‘Evil’ Noah Harari doesn’t have breakfast without consulting Aladdin, Watson, and Deep Blue... I’d rather spend time with TJ Coles ;)

Thanks for this! Good stuff


Neat article. I have seen so much about these LLMs that I decided to test them out for myself. I asked GPT-4 to write me a story and answer some questions about the story. The whole thing is here: https://comfortwithtruth.substack.com/p/chatgpt-writes-me-a-story-and-answers-a8f

As I sat there watching the cursor blink as it searched for the next word I had an epiphany. It wasn't really writing a story, it was determining which word should come next in response to my prompt. That is the secret to what is so inhuman about its writing. As an example, it can never use foreshadowing because it doesn't see the object casting the shadow.

How our conversation ended:

Me: 'It is impossible for other people to determine a person's sincerity or insincerity, but the author of a story has the power to determine the truth about a character. As the author of this story, determine if Jake's apology was sincere.'

GPT-4: 'As an AI language model, I generated the story based on a random prompt and did not specifically create a character named Jake or include any details that would allow me to make a determination about the sincerity of his apology. Therefore, I am not in a position to provide a definitive answer to your question.

In general, the sincerity of an apology can be influenced by a variety of factors, including the specific words used, tone of voice, body language, and the context of the situation. Since I do not have access to these details in the story, I cannot provide an informed opinion on the sincerity of Jake's apology.'

There are no characters. There is no story. Because there is no author. There is no Artificial Intelligence, only Simulated Intelligence. And the current path of development offers no prospect of that ever changing. Just a program where we have obscured the logic so that we feel less lonely. Advances in AI are only advances in self-deception.

Apr 18, 2023 · Liked by Roger’s Bacon

AI can absolutely use foreshadowing if you get it to follow an iterative process.

First you get it to make the characters, setting, a general synopsis and theme. THEN you get it to produce an outline for the story. THEN you tell it to insert foreshadowing into the outline. THEN you get it to write it a chunk at a time (making sure the characters, theme etc. are being fed to it with each chunk so it doesn’t appear to “forget” what it’s doing.)

It’s a LOT more powerful than you give it credit for. Keep experimenting with GPT4, it’s really impressive what it can do.


Can you show us an example? I am skeptical.


You just give it a series of prompts and build the story out iteratively. You could completely automate this, but to do it manually, open up a ChatGPT 4 window and do:

1. Give me five popular themes for novels.

(Choose one.)

2. Give me five story ideas around (THEME).

(Choose one.)

3. I like idea number 3! Please expand it to a synopsis in 3 acts and give me information on the main characters. Make sure the character is suited to the theme we are exploring.

(Copy and paste this bit out; context windows aren’t huge.)

4. Here are the characters and the synopsis for Act 1 of the story (paste those back in.) Expand it to 8 chapters. Give me five sentences for each chapter containing all the relevant information. Bearing in mind our theme, incorporate foreshadowing to be used later in the story. Please mark the foreshadowing with “foreshadowing.”

(Repeat for Acts 2 and 3.)

(Copy everything out and start again.)

5. Here is the synopsis and character list for my novel. Here is the synopsis for Act 1. Here is the detailed information for Chapter 1. Write Chapter 1 in the style of William Faulkner with rich descriptive language and interesting imagery.

(And… repeat.)

For the actual writing bit, it’s a little tricky to get it to give good, useful prose instead of a summary, but with some finessing it can be done.

If you want to see a more automated version of this in action, go onto YouTube and search for one of the videos demonstrating “Story Engine by Sudowrite”. You can see someone using AI to iteratively generate a story from a very rough idea, to a detailed outline, to prose equal to or better than much of what is popular.
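For anyone curious, the manual prompt chain above could be automated in a few lines. This is only a minimal, hypothetical sketch: `call_llm` is a stand-in for whatever chat-completion API you use, and the prompt wording and helper names (`build_story`, `pick`) are illustrative, not from any particular tool.

```python
# Minimal sketch of automating the theme -> idea -> synopsis -> outline -> prose chain.
# `call_llm` is a placeholder for any chat-completion API call (assumption, not a real library).

def build_story(call_llm, pick=lambda options: options[0]):
    """Run the iterative story-generation chain described above."""
    themes = call_llm("Give me five popular themes for novels.")
    theme = pick(themes.splitlines())

    ideas = call_llm(f"Give me five story ideas around {theme}.")
    idea = pick(ideas.splitlines())

    synopsis = call_llm(
        f"Expand this idea into a synopsis in 3 acts, with main characters "
        f"suited to the theme '{theme}': {idea}"
    )

    acts = []
    for act in (1, 2, 3):
        # Ask for an 8-chapter outline and explicitly request marked foreshadowing.
        outline = call_llm(
            f"Characters and synopsis: {synopsis}\n"
            f"Expand Act {act} into 8 chapters, five sentences each. "
            f"Incorporate foreshadowing and mark it with 'foreshadowing'."
        )
        # Re-feed the synopsis with every chunk so the model doesn't "forget".
        acts.append(call_llm(
            f"Synopsis: {synopsis}\nChapter outline: {outline}\n"
            f"Write these chapters with rich descriptive language."
        ))
    return "\n\n".join(acts)
```

Because the synopsis is pasted back in at every step, the limited context window matters much less, which is the whole trick behind the foreshadowing claim.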

Apr 19, 2023 · edited Apr 19, 2023 · Liked by Roger’s Bacon

I still stand by my point. At each prompt it simply responds to the string that it is given. If you carry out your process and ask it, 'what did character A mean when he said this?' or, 'what is the connection between this event and that event?' the response will be nonsense. By carefully crafting prompts you can put words on a page that look or feel to us like a story, but to GPT there is no story.

Actually, your own example proves it better than I ever could. You instruct it to write foreshadowing for events that haven't been written yet, to show a shadow for something that doesn't exist. Foreshadowing is to show the connection between what is happening at point A and at point B. To GPT there is no point A, no point B, and no connection between them. It is all an elaborate forgery, not a forgery of an individual author but a forgery of all that is real...

Apr 19, 2023 · Liked by Roger’s Bacon

Hmm.

When you say “By carefully crafting prompts you can put words on a page that look or feel to us like a story, but to GPT there is no story.” I think your issue is with how it’s doing it? But the results are the results. It doesn’t matter how the cake is made if customers are happy to eat it. The same “forgery” happens with human written work; writers will pop back to the beginning of a book to slip some foreshadowing in later.

The fact the software doesn’t comprehend the whole story at once is largely irrelevant? Humans forget huge chunks of a story too. I’ve written a lot of books and I’m frequently amazed at what I wrote when I go back and read them. I’ve slipped in ‘artificial’ foreshadowing at later times, and I’ve only been able to keep track of the whole structure by referring to my outlines. I can get an AI to do the same thing. You can get it to keep the “outline” in mind.

And with your first point, “what did character A mean when he said this?” That would depend on the context. If it was referencing something that happened outside of the context window, then it would indeed just be guessing. But if it had the story outline as well, it would probably get it right. A forgetful human would suffer the same problem.

I think (correct me if I’m wrong) that you think there’s something deeper to story; that it has some kind of… I don’t know… soul? Heart? that needs a human to comprehend it in order to create it. But I don’t believe this is necessarily the case.

About a year ago I was very confident that we were at least a decade away from AI creating whole novels. But things happened a lot faster than I was expecting. Less than a year later, I know people publishing novels entirely created by AI! The fact the AI doesn’t “understand” the stories is moot, to my mind. Humans do a lot of things without understanding what they’re doing as well. (Walking, running, jumping, etc.)


'It doesn’t matter how the cake is made if customers are happy to eat it.' - This seems rather shortsighted. The Romans used to sweeten their wine with lead. Yum yum, and the effects of heavy metal poisoning are slow and subtle. The whole idea of AI is to deceive ourselves; that is the heart of the Turing Test, after all. We are not creating intelligence but simulating it.

'Writers will pop back to the beginning of a book to slip some foreshadowing in later.' - This isn't the forgery; this is exactly what GPT can't and doesn't do. That's the right way to do it: the story evolves and grows in the writer's mind such that the later events exert a ripple effect, changing the earlier events. There is an actual 'something' in the writer's mind that is 'the story'. The proof of this is that the writer knows that the words may or may not tell the story well, or accurately. When the author changes the words, he feels that they must be changed in a way that makes them more faithful to 'the story'. He is trying to communicate something to the reader which is not words on a page.

The story has an independent existence. If I ask you to think of the story of Hercules, it is unlikely that you are thinking of a particular telling of the story; perhaps some images that you have seen pop into your mind, but neither the images nor the words that you have read that tell the story are the story itself. And in judging any telling of the story, you judge the words and the story separately. You might find one telling has particularly apt words and is well written, both technically and artistically, but criticise it because it does not match the Hercules story that you feel to be most real, on whatever grounds you make that judgment call.

A great example of this is The Hunchback of Notre Dame. The story of this outcast, this 'half-made' (Quasimodo) man, his complicated relationship with Frollo, and his unrequited, even unperceived, love for Esmeralda, with all of its interweavings, is fascinating. But Victor Hugo is a terrible storyteller. Most of the book is almost unreadable minutiae about the cathedral and the streets of Paris. To Hugo, the story was actually about the building. The actual name is Notre Dame de Paris; the characters and story were just something that happened at a terrifically interesting building, to him. But, having received the story, I can disagree with the author, because it has an objective reality. I can find elements of the story jarring and incongruous, because the author has failed to make a particular element match the rest of the story. I can say 'Quasimodo hates Frollo' even though Hugo portrays him as loving Frollo. I can rewrite the story with a happy ending, as several people have, and it remains the same story.

Anyway, my point is that words and stories are different and separate things. If they aren't, then the writing is flat and stupid like an AI's. And no increase in RAM or streamlining of algorithms or dumping more training data in will change that.

author

There is a lot it can't do (e.g. write any fiction worth reading), but yeah, it should have no problem using foreshadowing in some banal manner.


What is “worth reading” varies very much from person to person. Much of what is popular is certainly not worth reading to me… but is to a great many other people.

And I can tell you that, right now, there are books mostly written by AI on the Amazon bestseller lists. I suspect neither you nor I would enjoy them, but other people… well… they’re buying and reading them!


This is a good point. My children watch way too much YouTube; they are small and sometimes watch these weird morality fairy-tale things, and I was stunned by a strange, flat stupidity in many of the videos. I can get you an example, but I promise you won't enjoy it. I can't prove it, but I strongly suspect that they are being written by AI.


It is indeed trivial to get AI to write a short children’s story, and then you can get a pretty decent voice to ‘tell’ the story using ElevenLabs’ excellent AI voices.

That probably is exactly what’s going on! And no, I would not enjoy them haha.

author

haha true


Love the title and concept 👌

May 20, 2023 · Liked by Roger’s Bacon

Conversely, our present "scientifical" imagination has reached its limits, its paradoxical conclusion, its built-in self-destruct switch, whereby its modus vivendi is the cause of its own demise. But not all thought/fantasy is so inclined.


Is it weird that I experienced hope from Jon Cutchin’s disdain for LLMs’ capabilities? Thinking of them as a parlor trick is comforting to me. And it’s the odds-on favorite for the correct take.

author

Not at all ;)


“In creating LLMs, we’ve essentially taken a buggy prototype, scaled it up, and rushed it to market.”

Well, when you put it that way...
