A meditation: the key thing holding devs and coders back from developing AI is how they conceptualize AI. They currently see it as an artificial brain: a chatbot or neural net that outputs messages to a model and to the other elements (Atoms) in a scene, and they treat the elements responding to that output as distinct from the AI itself.
This form of mind-body dualism is an utterly incorrect premise. The AI is in fact every interactive element of the scene working together as a holistic simultaneity.
What's holding back the making of the virtual human is a semantic fallacy: a problem caused by the language we use to understand what an AI is, not a problem of code or of anything else for that matter.
At Cambridge, Ludwig Wittgenstein taught Alan Turing. One of the things Wittgenstein taught Turing was never to trust what you think words and phrases mean. The originators of those words and phrases might have been wrong at the level of the root idea, and they often are.
To explain what Wittgenstein meant: it's like sticking the wrong label on a box to describe its contents. Say there are red balls in the box, but the label incorrectly states they are green. If you were colour-blind, you'd assume they were green balls, because that's what the label tells you they are. But how do you know whoever wrote the label wasn't also colour-blind?
The problem has nothing to do with the box's contents, which could be anything: frozen chicken legs, a worn-out pair of sneakers, inflatable crocodiles. The problem is the incorrect labelling of whatever is actually in the box. A word or phrase is like a box that can contain anything we can conceptualize.

So when we find a box labelled 'AI is an artificial brain that runs the scene; this is what AI is', it is simply the wrong label for what is in the box. What should be on the label, and in the box, is the premise: 'AI is a holistic simultaneity of every element in the scene interacting with every other element in the scene; this is what AI is'. Most coders, however, just glance at the label and work from its premise, when they should have double-checked what is actually inside the box.
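If it helps to see the two labels side by side, here is a minimal Python sketch of the contrast. All names here are hypothetical, nothing to do with VAM's actual plugin API; it illustrates the idea, it is not an implementation:

# Hypothetical sketch: the two labels for 'AI', written out as code.

class Atom:
    """One interactive scene element: a character, a light, a door."""
    def __init__(self, name):
        self.name = name
        self.value = 0.0

    def state(self):
        return self.value

    def react(self, scene_state):
        # This element updates itself from the state of every *other* element.
        others = [v for k, v in scene_state.items() if k != self.name]
        self.value = sum(others) / max(len(others), 1)

# The dualist label: a privileged 'brain' decides, passive atoms obey.
def brain_driven_tick(brain, atoms):
    for atom in atoms:
        atom.value = brain(atom.state())  # atoms are mere outputs of the brain

# The holistic label: no privileged component; the 'AI' is the whole
# web of interactions, recomputed as one simultaneous step.
def holistic_tick(atoms):
    snapshot = {a.name: a.state() for a in atoms}  # read the whole scene first...
    for atom in atoms:
        atom.react(snapshot)                       # ...then every element reacts together

if __name__ == "__main__":
    atoms = [Atom("person"), Atom("light"), Atom("door")]
    atoms[0].value = 1.0
    holistic_tick(atoms)
    print({a.name: a.value for a in atoms})

The snapshot is the point: every element reads the whole scene before anything moves, so no single component is 'the AI'; the behaviour lives in the web of reactions.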
"The limits of language are the limits of my world." LW ( work it over and over in your mind and visualise boxes and labels)
And all the above would be academic waffle and bullshit unless I could prove otherwise, which is why I made the scenes and videos.
We need to rethink how we think about AI in VAM.
Now, if you understand the above, PM me.