what a sad little man you must be
I'm sorry you've been hurt
See you at the tower. You should probably watch your back…
Man, I really love the mathematical biologist Robert Rosen for his work on "relational biology" - formalizing living systems and proving them to be Turing-noncomputable.
And generally, all this sort of computationalism is a metaphysics, and these high priests confuse science with metaphysics and do bad metaphysics. That sucks.
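For anyone curious, here is the rough shape of Rosen's (M,R)-system formalism - my own hedged paraphrase of one common presentation, not a quotation from Rosen:

```latex
% Metabolism: f turns environmental inputs A into products B.
% Repair: \Phi re-produces the metabolic map f from the products B.
% Replication: \beta re-produces the repair map \Phi.
\[
  f \in H(A,B), \qquad
  \Phi \in H(B,\, H(A,B)), \qquad
  \beta \in H\bigl(H(A,B),\, H(B,\, H(A,B))\bigr)
\]
```

The point of the construction is "closure to efficient causation": every map in the loop is itself produced by another map inside the system, and Rosen argued (as I understand it) that a system closed in this way cannot be fully captured by a Turing-computable model. Whether his proof actually goes through is itself contested in the literature.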
Just what we need: an invitation to a new jihad.
well what then, just meekly submit to the new computationalist world order? go fuck off
Finally, the emotionally overbearing and strongly worded essays I subscribed to! 😉
> Algorithms inhabit a “small world”: an environment with a pre-specified ontology in which all problems are well-defined
Out of curiosity, what do you think would happen if we create a simple "AI" which can only attempt to understand and control the environment it's thrown in? What happens when such an intelligence inhabits a small part of the "Large World", where information "is scarce"? Would it not be an organism, by virtue of a complete lack of ontological restrictions? (since it's simply trying to survive)
I think the work of Jaeger et al. is arguing that you can't do such a thing within our current AI framework - algorithms inhabit a small world by definition. Building a large-world AI would amount to building an organism from scratch, which we don't know how to do (yet).
I think the analogy is closer to algorithms inhabiting a small world while they're being built up, like a baby in a womb. However, they can then be pushed out into the real world (or at least a simulation of one) and learn to act & survive.
I think embodiment is a necessary part of achieving any real "AI", but it's not the bottleneck right now. For example, we already have AIs that interact with complex "worlds", like [VPT](https://openai.com/index/vpt/), [GATO](https://deepmind.google/discover/blog/a-generalist-agent/), and even [Trackmania](https://www.youtube.com/watch?v=kojH8a7BW04).
But their intelligence is restricted by the efficiency of the algorithms and the hardware they're running on. Embodiment here only serves to showcase how effective our current AI systems are when dealing with the huge uncertainty and "randomness" of the real world. So I'm convinced the bottleneck lies elsewhere - perhaps in the algorithms and their design.
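To make the "small world" point concrete, here's a minimal sketch (all names and numbers are mine, purely illustrative - a textbook tabular Q-learning toy, not anyone's actual system). The agent's entire ontology - states, actions, rewards - is enumerated before it ever acts, so no matter how well it learns, it can never encounter or represent anything outside that pre-specified space:

```python
import random

# A "small world": the ontology (states, actions, rewards) is fixed in advance.
# The agent can learn any policy over this space, but it can never encounter,
# represent, or even ask about anything outside it.
STATES = range(5)      # a 1-D corridor: positions 0..4
ACTIONS = [-1, +1]     # step left or right
GOAL = 4               # reward lives at the right end

def step(state, action):
    """Environment dynamics, pre-specified and exhaustively defined."""
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# Tabular Q-learning: the value table enumerates the whole world up front.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection over the fixed action set
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r = step(s, a)
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The learned policy is optimal *within* the small world: always step right.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES if s != GOAL}
print(policy)
```

The learning here is genuinely open-ended in one narrow sense (the value table starts empty), but the world itself is closed: nothing the agent does can ever enlarge the sets `STATES` and `ACTIONS` it was handed.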
The pump circulating "spirits" sounds reminiscent of the function of mirror neurons. Humans imitate what they see. Sartre described the mannequin in terms of a false positive in the brain's interpretation of the Other. After all, what kind of spirits of the nervous system was Descartes referring to?
If I have to pick a side, though I must admit to being a silicon simp, it will be with the anti-mechanists. Because the mechanists, most ironically, fail to see the beauty of their craftsmanship: the interplay between man and machine, the privilege of creation. And even in the machine, they care more for mere efficacy towards the production of its end than they do for the elegance of its algorithms, or the visual beauty of its code. See the brute-force computation that spawned LLMs. A tangled mess of linear algebra is not what I’d call elegant. And they cannot even see the code that was generated! An entire field has emerged with the goal of reverse engineering the nets into something sensible!
Not all things are computable; this is already written in their holy book, not in flowery poetry but in clear prose. This, of course, has not stopped their apologists.
But I will say, cool it with the “destroy all external thought-tools” bit. Are you fucking insane? (Don’t answer that.)