After reading The Emotion Machine we had time to discuss, and argue about, some of its contents. We eventually came to a shared view, adding our own thoughts here and there. Here is what we made of it.
What's the Emotion Machine in terms of pieces of machinery? Minsky defines a six-layer architecture, baptized "The 6 Machine", theoretically capable of a consistent simulation of the human brain and its resulting behaviour (including so-called "emotions"), operating together with embedded innate, instinctive and behavioural systems.
Here's a list of the Six Layers (enumerated bottom-up; a minimal sketch in code follows the list):
- Instinctive Reactions
- Learned Reactions
- Deliberative Thinking
- Reflective Thinking
- Self-Reflective Thinking
- Self-conscious Emotions
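To make the stack concrete, here is a minimal sketch of the six layers as an ordered structure in Python. The layer names are Minsky's; the representation itself is our own assumption, not anything prescribed by the book.

```python
# The six layers of Minsky's model, bottom-up. The IntEnum ordering lets
# us compare layers: a "higher" layer has a larger value.
from enum import IntEnum

class Layer(IntEnum):
    INSTINCTIVE_REACTIONS = 1
    LEARNED_REACTIONS = 2
    DELIBERATIVE_THINKING = 3
    REFLECTIVE_THINKING = 4
    SELF_REFLECTIVE_THINKING = 5
    SELF_CONSCIOUS_EMOTIONS = 6

# Lower animals arguably implement only the bottom of the stack:
ANIMAL_LAYERS = [layer for layer in Layer if layer <= Layer.LEARNED_REACTIONS]
```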
Human beings share the first levels of this stack with lower animals (at least the first two; in specific cases arguably the first three).
Along with these six layers Minsky defines two kinds of agents, which operate at every level of the stack: Critics and Selectors.
Critics are basically feature detectors: they recognize combinations of features in given problems. These problems can come from the external environment or be passed from layer to layer, e.g. from one level of thought to another. Since the machine we are talking about is goal-oriented, we look at problems as obstacles to the achievement of a given goal. It's natural to look at these combinations of features we call problems as patterns (some of them simple, some not), so we can refer to the Critics as pattern detectors, which suggests they could be designed by following the current knowledge in the field of Pattern Recognition.
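As a toy illustration of Critics as pattern detectors, here is a sketch in which a Critic is just a named predicate over a set of observed features. The feature-set encoding is our assumption, chosen only to show the idea.

```python
# A Critic recognizes a combination of features (a pattern) in a problem.
# Here a problem is just a set of feature labels, and a Critic fires when
# its trigger pattern is a subset of the observed features.

class Critic:
    def __init__(self, name, trigger_features):
        self.name = name
        self.trigger = frozenset(trigger_features)

    def matches(self, observed_features):
        """Fire when every feature of the trigger pattern is present."""
        return self.trigger <= set(observed_features)

stuck_wheel = Critic("stuck-wheel", {"bicycle", "wheel", "does-not-turn"})
print(stuck_wheel.matches({"bicycle", "wheel", "does-not-turn", "rain"}))  # True
```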
Talking about goals, there is an important distinction we think needs to be made: low-level goals such as survival (which implies satisfying basic animal needs) can be formalized as goals only because of our reflective capabilities. Take an animal that has just the first two levels of the stack, instinctive and learned reactions: it doesn't really have any goal; it is simply designed in a way that ensures maximum probability of survival. Its so-called goals are something we can observe and formalize only thanks to our higher ability to create formal models. It's like saying the animal was designed (by evolution) to achieve a goal, survival, but beyond that it is just an organic machine that behaves in the only way it can. Being animals, we do have the first two layers of the stack and all that comes with them, so for low-level instincts such as survival or breeding it doesn't make sense to define models; it only makes sense to design machinery that behaves in the desired way, with a set of embedded If(Condition)->Do(Reaction) rules.

The difference between most animals and human beings is that we have the capability to create our own new goals. We actually do it all the time because of our nature (by nature we mean the set of our instincts, all of our embedded if-then-do rules), and we are arguably driven by this goal-setting loop. So it makes sense to define higher-level goals, but only if it means we are somehow able, within certain boundaries, to program ourselves through the interaction of the higher levels of the stack.

Another important point about goals is that higher layers can override the reactions of lower ones for the same input pattern, so that an instinctive reaction can be overridden by a learned one (for example, we can learn how to behave in a socially acceptable way). This is not just a human feature, but a feature of every animal that has more than the first (instinctive) layer. A toy sketch of such rules, and of the override, follows.
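The If(Condition)->Do(Reaction) rules and the override between layers could be sketched like this; the rule tables and the dispatch function below are entirely our own hypothetical encoding.

```python
# Embedded If(Condition)->Do(Reaction) rules, one table per layer.
# For a given input pattern, the highest layer that has a matching rule
# wins: a learned reaction overrides an instinctive one.

RULES = {
    "instinctive": {"loud-noise": "startle-and-freeze"},
    "learned":     {"loud-noise": "stay-calm-it-is-just-fireworks"},
}

LAYER_ORDER = ["instinctive", "learned"]  # bottom-up

def react(input_pattern):
    reaction = None
    for layer in LAYER_ORDER:  # scan bottom-up; higher layers override
        reaction = RULES[layer].get(input_pattern, reaction)
    return reaction

print(react("loud-noise"))  # stay-calm-it-is-just-fireworks
```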
Talking about Selectors, they're defined as agents activated by the Critics; their main duty is to map and activate ways to think in order to solve a given problem recognized by a Critic (maybe you're trying to fix a bicycle, or maybe one of your lower layers got stuck trying to solve something else).
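In the same toy setting, a Selector could simply map a firing Critic to a way to think. The dictionary-style mapping is an assumption of ours, not Minsky's formalization.

```python
# A Selector is activated by a Critic and turns on a way to think,
# i.e. a particular set of mental resources (see below).

class Selector:
    def __init__(self, critic_name, way_to_think):
        self.critic_name = critic_name
        self.way_to_think = way_to_think   # e.g. "trial-and-error"

    def activate(self, fired_critics):
        """Return a way to think if our triggering Critic has fired."""
        if self.critic_name in fired_critics:
            return self.way_to_think
        return None

fix_bicycle = Selector("stuck-wheel", "trial-and-error")
print(fix_bicycle.activate({"stuck-wheel"}))  # trial-and-error
```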
So what is a way to think? Minsky defines ways to think as combinations of active resources which, following the chain, are the pieces of machinery that compose our brain and regulate our reactions and behaviour through interaction with other resources. For example, if you're scared of something it means you are using the set of resources labeled under fear, because some Critic spotted something (a given input pattern) that activated a Selector for those resources. So, in this case, you'll be mainly driven by those resources.
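Following that definition, a way to think can be sketched as the set of resources that are currently switched on. The fear example below, with its particular resource names, is purely illustrative.

```python
# A way to think = a combination of active resources. Activating "fear"
# means switching on the resources labeled under it and letting them
# drive behaviour until another way to think takes over.

WAYS_TO_THINK = {
    "fear":      {"raise-attention", "prepare-to-flee", "suppress-curiosity"},
    "curiosity": {"explore", "ask-questions", "lower-caution"},
}

active_resources = set()

def activate_way_to_think(name):
    """Critics spotted a pattern, a Selector picked this way to think."""
    active_resources.clear()
    active_resources.update(WAYS_TO_THINK[name])

activate_way_to_think("fear")
print(active_resources)  # behaviour is now mainly driven by these resources
```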
Ways of thinking and resources certainly deserve much more space than this, even at a very high level. We'll come back to them in one of the next posts here on Road Trip to Strong AI.
As usual, any comment would be highly appreciated.