on LLMs, the psychology of writers, and the nature of intelligence
This is one of the most interesting commentaries I've seen on the subject. And I've read many. My largest concern has been how so many people seem to give LLM AIs credit for, and expect of them, far more than they are capable of. They revere and fear them as if they were human and don't seem able to comprehend the differences.
As you've pointed out, the differences are marked and important.
I have far less fear of what the AIs will do than of what humans will do - in general, and in relation to the AIs' abilities and potential consequences.
We shall see... As in all of life, we occupy a ringside seat to watch the great unknown of what's to come. This will be an interesting show.
Curiosity will likely kill this know-it-all Cat 🐈.
Unless we can fashion an Optimus Prime-type AGI to save our hides.
And if they've read all of human history and our glorious & gory rise to the top as the alpha predators, then AGIs would have to be really stupid to spare us the rod.
To me it's like the first 30 minutes of the movie, where well-dressed scientists give detailed lectures and safari tours of Jurassic Park while the egg is hatching in front of clueless and amused guests who'll be doing a hell of a lot of running after the interval.
You know 'Evil' Noah Harari doesn't have breakfast without consulting Aladdin, Watson, and Deep Blue... I'd rather spend time with TJ Coles ;)
Thanks for this! Good stuff
Neat article. I have seen so much about these LLMs that I decided to test them out for myself. I asked GPT-4 to write me a story and answer some questions about the story. The whole thing is here: https://comfortwithtruth.substack.com/p/chatgpt-writes-me-a-story-and-answers-a8f
As I sat there watching the cursor blink as it searched for the next word, I had an epiphany. It wasn't really writing a story; it was determining which word should come next in response to my prompt. That is the secret to what is so inhuman about its writing. As an example, it can never use foreshadowing, because it doesn't see the object casting the shadow.
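The "which word comes next" loop described above can be sketched with a toy model - a hand-made probability table standing in for the real network. Everything here (the table, the function, the tokens) is illustrative, not anything from GPT-4 or any actual LLM:

```python
import random

# Toy stand-in for a language model: maps the previous word to candidate
# next words with probabilities. A real model conditions on the whole
# prompt and prior text; this table is purely illustrative.
NEXT_WORD = {
    "<start>": [("once", 0.6), ("the", 0.4)],
    "once":    [("upon", 1.0)],
    "upon":    [("a", 1.0)],
    "a":       [("time", 0.9), ("hill", 0.1)],
    "time":    [("<end>", 1.0)],
    "hill":    [("<end>", 1.0)],
    "the":     [("end", 1.0)],
    "end":     [("<end>", 1.0)],
}

def generate(seed=None, greedy=True):
    """Emit words one at a time, each chosen only from the previous word.

    Nothing here plans ahead: each word is picked with no knowledge of
    words still to come, which is why such a generator cannot
    deliberately foreshadow anything.
    """
    rng = random.Random(seed)
    word, out = "<start>", []
    while True:
        candidates = NEXT_WORD[word]
        if greedy:
            # Always take the most probable continuation.
            word = max(candidates, key=lambda wp: wp[1])[0]
        else:
            # Sample a continuation in proportion to its probability.
            words, probs = zip(*candidates)
            word = rng.choices(words, weights=probs)[0]
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate())  # greedy path: once upon a time
```

Each step looks only backward at what has been emitted so far; there is no representation of where the "story" is going, which is the point of the epiphany above.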
How our conversation ended:
Me: 'It is impossible for other people to determine a person's sincerity or insincerity. but the author of a story has the power to determine the truth about a character. as the author of this story determine if jake's apology was sincere'
GPT-4: 'As an AI language model, I generated the story based on a random prompt and did not specifically create a character named Jake or include any details that would allow me to make a determination about the sincerity of his apology. Therefore, I am not in a position to provide a definitive answer to your question.
In general, the sincerity of an apology can be influenced by a variety of factors, including the specific words used, tone of voice, body language, and the context of the situation. Since I do not have access to these details in the story, I cannot provide an informed opinion on the sincerity of Jake's apology.'
There are no characters. There is no story. Because there is no author. There is no Artificial Intelligence, only Simulated Intelligence. And the current path of development offers no prospect of that ever changing. Just a program where we have obscured the logic so that we feel less lonely. Advances in AI are only advances in self-deception.
Love the title and concept 👌
Conversely, our present "scientifical" imagination has reached its limits, its paradoxical conclusion, its self-destruct built-in switch, whereby its modus vivendi is the cause of its own demise. But not all thought/fantasy is so inclined.