Sunday, December 14, 2008

Downloading images from your brain

Researchers at the ATR Computational Neuroscience Laboratories in Japan have managed to reconstruct still images directly from the firing neural patterns in people's brains.

Read more about this in the original article.

Quite a milestone for neuroscience.


Sunday, November 23, 2008

AI going for a "homerun with bases loaded"

IBM has announced it will lead a US government-funded collaboration with 5 universities to make electronic circuits that mimic brains.

Dharmendra Modha (IBM Almaden Research Center) puts it like this:

"We are going not just for a homerun, but for a homerun with the bases loaded"

For details see the original BBC source.

Sounds like a day to remember.


Tuesday, August 12, 2008

And I wish I could see like everybody can

...How I wish that I could be like any other man.

These words from the song "Nature's Dance" (Ayreon, The Final Experiment) have tormented my mind for a long time.
Even if the song probably refers to a blind man, it makes no difference, as that's exactly how I felt for a long time since my childhood.


It's difficult to explain what I felt for many years (it will be easier to understand for those who have consciously experienced the same thing), but I'll try. I was about 10 years old when I realized it for the first time. My sight was changing. I was not seeing the things around me in the same way as before. I couldn't tell exactly when it had happened; I could only remember what I used to experience when looking around me as a child. It was astonishment, it was excitement, it was curiosity: I was "really" looking at the world around me. All of these experiences were gradually fading away, but I realized it only once they had completely disappeared. From that day on, looking around me was not the same. I could see things perfectly, my mind kept repeating "look, this is my wardrobe, that's my window, that's the sun, but I can't sense it!"; my brain was not being stimulated in the same way. My way of experiencing the world around me had changed so much that I started feeling blind. I had received a Catholic education which at that time still had its influence on me, and I do remember praying to God for a long time before going to sleep, asking him to give me back my sight. No God ever answered my prayers, and that might have been my first God delusion.

So what happened during my childhood? What terrible experience left an innocent child like me with such a big torment to bear? I wish I had known the answer at the time; it would have saved some prayers at least. In the first years of my life I didn't yet have a model of the world. When we look at things for the first time our brain builds a model of them. We use these models for our whole life, we adapt and change them; they are the basis of what we call imagination. We close our eyes thinking of a sheep and there it is, floating in a distributed neural configuration. These models are used when we make a prediction (conscious or not) and when we take a decision, for instance. In every dimension of what we call thinking we make use of them. By the time I was 10 I had probably finished building a fairly complete model of the world that surrounded me. Once such models are built, the process of looking at something is a mere look-up of that model based on a visual pattern. When, in the middle of a conversation, we find out that one of these models doesn't match reality, we say "Oh really? I always thought that John was his brother", which shows a high correlation between what we call thinking and the interactions with these models (and also shows that John maybe wasn't his brother). For Buddhists there is no real thing and everything we sense is a mind projection. Others say that we don't live in a real world, but that each one of us just lives within his own model. However you want to look at it, it is certain that our brain builds and constantly updates this model.
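To make the idea of models and look-ups slightly more concrete, here is a minimal, purely illustrative Python sketch. Every name in it (WorldModel, perceive and so on) is mine, not taken from any of the authors mentioned here, and a dictionary look-up is of course only the simplest possible stand-in for whatever the brain really does.

# A toy world model: looking at something is a look-up of the model,
# and a mismatch between prediction and reality triggers an update
# ("Oh really? I always thought that John was his brother").
class WorldModel:
    def __init__(self):
        self.expectations = {}            # pattern -> what we expect it to be

    def predict(self, pattern):
        """Look up what the model currently expects for this pattern."""
        return self.expectations.get(pattern)

    def update(self, pattern, outcome):
        """Revise the model when reality disagrees with the prediction."""
        self.expectations[pattern] = outcome


def perceive(model, pattern, outcome):
    prediction = model.predict(pattern)
    if prediction != outcome:
        model.update(pattern, outcome)    # surprise: adapt the model
    return prediction


model = WorldModel()
perceive(model, "John and Mark", "friends")   # nothing predicted yet
print(model.predict("John and Mark"))         # now the model says "friends"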

Steve Grand thinks of the brain as a predictive machine that, based on this active model of the outside world, is capable of producing predictive judgements that can drive proactive and anticipatory behaviour.
Marvin Minsky speaks of our ability to create new ways to represent information. The question of how we can create models of anything we might meet is indeed an intriguing one. But are we sure we can do it without any limit? If I ask you to think about an elephant with two heads, I'm sure you can do it without too much effort. But can you really think of something which does not belong to this Middle World, as Richard Dawkins defines it? Can we have such a model of a quantum force field, for instance? How are models connected to each other? These questions require much more space and time to be analyzed, and I'll come back to them at a later time. Wish me a good night now.

Monday, August 11, 2008

Day 2: No more days


We have been very busy with our real jobs (those we use to pay the bills) and Day 1 is now just a confused memory. The road trip did not stop though, and I've personally been thinking about many aspects of what we would like to achieve here. The Day x format, though, is probably one of the main reasons this blog has not been updated lately. Having an enumeration supposes that you are facing the different topics in a logical order; it supposes that you know where you are and where you need to go next to reach your destination. Needless to say we don't have the road map drawn yet, so following an enumeration would not make much sense at this stage. From now on we will just lay down our ideas, thoughts and progress as they arise. We can still apply a sorting algorithm closer to the destination if needed.

Thursday, February 14, 2008

Day 1: Critics, Selectors and Resources

After reading The Emotion Machine we had time to discuss, and argue, about some of its contents. We eventually came to a common vision, adding our own thoughts here and there. Here you can read about it.


What's the Emotion Machine in terms of pieces of machinery? Minsky defines a six-layer architecture, baptized The 6 Machine, theoretically capable of a consistent simulation of the human brain and its consequent behaviour (including the so-called "emotions"), operating together with embedded innate, instinctive and behavioural systems.


Here's a list of the Six Layers (bottom-to-top enumeration):


  • Instinctive Reactions
  • Learned Reactions
  • Deliberative Thinking
  • Reflective Thinking
  • Self-Reflective Thinking
  • Self-Conscious Emotions

Human beings share with lower animals the first levels of this stack (at least the first two, in specific cases arguably the first three).
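Just to fix the picture, the stack can be written down as a plain ordered enumeration, with lower animals owning only a prefix of it. This is only our own rough encoding for illustration, not Minsky's formalization:

# The six layers, bottom to top; lower animals own only a prefix of the list.
SIX_LAYERS = [
    "Instinctive Reactions",
    "Learned Reactions",
    "Deliberative Thinking",
    "Reflective Thinking",
    "Self-Reflective Thinking",
    "Self-Conscious Emotions",
]

def layers_of(depth):
    """Layers available to a creature owning the first `depth` levels
    of the stack (e.g. 2 for most animals, 6 for human beings)."""
    return SIX_LAYERS[:depth]

print(layers_of(2))   # ['Instinctive Reactions', 'Learned Reactions']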

Along with these six layers Minsky defines two kinds of agents, which operate at every level of the stack: Critics and Selectors.

Critics are basically feature detectors: they recognize combinations of features in given problems. These problems can come from the external environment or be passed from layer to layer, e.g. from one level of thought to another. The machine we are talking about is a goal-oriented one, so we look at problems as obstacles to the achievement of a given goal. It's easy to look at these combinations of features that we call problems as patterns (some of them simple, some not), so that we can refer to the Critics as pattern detectors, which lets us think they could be designed by following the current knowledge in the field of Pattern Recognition.
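Reading Critics as pattern detectors, a toy version could look like the sketch below. It is only an illustration of our reading, not code from the book, and matching sets of features is just the simplest possible stand-in for a real pattern-recognition technique:

# Toy Critic: it fires when a known combination of features (a pattern)
# is present in the description of the current problem.
class Critic:
    def __init__(self, name, required_features):
        self.name = name
        self.required_features = set(required_features)

    def matches(self, problem_features):
        """The pattern is detected when all required features are present."""
        return self.required_features <= set(problem_features)

# Example: a Critic that recognizes "I'm stuck repeating the same action".
stuck_critic = Critic("stuck in a loop", {"no progress", "repeated action"})
print(stuck_critic.matches({"no progress", "repeated action", "tired"}))   # True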

Talking about goals, there is an important distinction we think needs to be made: low-level goals such as survival (which implies satisfaction of basic animal needs) can be formalized as goals only because of our reflective capabilities. For example, an animal which has just the first two levels of the stack (instinctive and learned reactions) doesn't really have any goal; it's just designed in a way that ensures maximum probability of survival, i.e. its so-called goals are something we can observe and formalize only because of our higher ability to create formal models. It's like saying the given animal was designed (by evolution) to achieve a goal, survival, but besides that it is just an organic machine that behaves the only possible way it could. Being animals, we do have the first two layers of the stack and all that comes with them, so for low-level instincts such as survival or breeding it doesn't make sense to define models; it just makes sense to design machineries which behave in the desired way, with a set of embedded If(Condition)->Do(Reaction) rules. The difference between most animals and human beings is that we have the capability to create our own new goals. We actually do it all the time because of our nature (when we refer to nature we'll be referring to the set of our instincts, all of our embedded if-then-do rules) and we are arguably driven by this goal-setting loop, so it makes sense to define higher-level goals, but only if it means we are somehow able, within certain boundaries, to program ourselves through the interaction of the higher levels of the stack. Another important point about goals is that higher layers can override lower ones' reactions for the same input pattern, so that instinctive reactions can be overridden by learned ones (and in our case we can, for example, learn how to behave in a socially acceptable way). This is not just a human feature, but a feature of every animal that has more than the first (instinctive) layer.
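The embedded If(Condition)->Do(Reaction) rules, and the way higher layers can override lower ones for the same input pattern, can be caricatured like this (again a hypothetical sketch of our reading, with made-up rules):

# Toy rule stack: each layer holds If(condition) -> Do(reaction) rules.
# For the same input pattern the highest layer with a matching rule wins,
# so a learned reaction can override an instinctive one.
INSTINCTIVE = {"stranger approaches": "flee"}
LEARNED     = {"stranger approaches": "shake hands"}   # socially acceptable

LAYERS = [INSTINCTIVE, LEARNED]          # ordered bottom to top

def react(pattern):
    reaction = None
    for layer in LAYERS:                 # walk the stack bottom-up
        if pattern in layer:
            reaction = layer[pattern]    # higher layers override lower ones
    return reaction

print(react("stranger approaches"))      # shake hands, not flee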

Talking about Selectors, they're defined as agents activated by the Critics; their main duty is to map and activate ways to think in order to solve a given problem recognized by a Critic (maybe you're trying to fix a bicycle, or maybe one of your lower layers got stuck trying to solve something else).

So what is a way to think? Minsky defines ways to think as combinations of active resources which, following the chain, are the pieces of machinery that compose our brain and regulate our reactions and behaviour through interaction with other resources. For example, if you're scared of something it means you are using the set of resources labeled under fear, because some Critic spotted something (a given input pattern) that made it activate a Selector for those resources. So, in this case, you'll be mainly driven by those resources.
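Putting the pieces together, the fear example could be sketched roughly like this; the names (Selector, maybe_activate, the resource labels) are ours and purely illustrative:

# Toy Selector: when its triggering Critic fires (here just a predicate
# over the problem features), it switches on a labeled set of resources,
# a "way to think", which then drives behaviour.
class Selector:
    def __init__(self, critic, resources):
        self.critic = critic             # predicate: problem features -> bool
        self.resources = resources       # the resources this Selector activates

    def maybe_activate(self, problem_features, active_resources):
        if self.critic(problem_features):
            return active_resources | self.resources
        return active_resources

# Example: a "danger" Critic activates the resources labeled under fear.
danger_critic = lambda features: {"large", "fast", "approaching"} <= features
fear_selector = Selector(danger_critic, {"raise alertness", "prepare to flee"})

active = fear_selector.maybe_activate({"large", "fast", "approaching"}, set())
print(active)   # {'raise alertness', 'prepare to flee'}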

Ways to think and resources certainly need much more space than this, even at a very high level. We'll come back to them in one of the next posts here on Road Trip to Strong AI.

As usual, any comments would be highly appreciated.

Saturday, February 9, 2008

Day 0: The Emotion Machine

"I hope this book will be useful to everyone who seeks ideas about how human minds might work, or who wants suggestions about better ways to think, or who aims toward building smarter machines"

This is exactly how a book explaining how the brain could possibly work should start. Thanks to Marvin Minsky, that is exactly the way The Emotion Machine (Simon & Schuster, 2006) begins.
Marvin Minsky is a Professor of Electrical Engineering and Computer Science at MIT.
Professor Minsky is one of the pioneers of intelligence-based robotics.

The Emotion Machine is the Day zero of our road trip.

Through its nine chapters the book presents the Professor's theories, trying to break through the wall where previous theories have failed.
Topics like love, pain, consciousness and common sense are tackled and demystified. The Professor shows a possible path to follow to push the research beyond its current limits, suggesting new models of how to represent the brain and how our thinking might actually work (see also The Society of Mind). Many ingredients are presented in more or less detail, but the outcome is clear and there is enough new material to support many experiments in the coming decade.

During our road trip we will propose a way to fill in the remaining gaps of the theory, providing the underlying engineering required by a concrete implementation.

Day zero ends with a personal acknowledgment to Professor Minsky, to the man who awakened our dreams.

Friday, February 1, 2008

Do androids dream of electric sheep ?

"I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate. All those moments will be lost in time, like tears in rain. Time to die."

Many of you will recognize the last words of Roy Batty, the most evolved replicant, prodigal son of Eldon Tyrell. It was 25 years ago that Ridley Scott gave birth to Blade Runner. But it was 41 years ago, in 1966, that Philip Dick was writing the sci-fi novel "Do Androids Dream of Electric Sheep?" that would inspire the movie 16 years later.
The story in the first edition of the book takes place in 1992, but this has been pushed forward to 2021 in later editions. Philip Dick's universe is still pure sci-fi in 2007. How many years do we have to wait? How many more times will we have to push forward the year in which Deckard will be hunting the skin-jobs? Will this ever become reality? I don't have a single doubt that it will. Don't worry, I'm in good company.

From today on, in this blog, my colleagues and I will make public the theories, results and research of the last year of what was internally called the A.I. Project. The goal of the project was to create what scientists usually call a strong AI. Starting with primitive forms of life, the project aims to recreate a brain that can intelligently drive a body in a simulated or real environment. We are not talking about "find the shortest path" or "avoid the obstacles" kinds of things. We are speaking of a form of artificial life that, inserted in an uncontrolled environment, will act and think like a real being.

Many of you at this stage would start showing skepticism; we don't blame you. We are very skeptical too in many of our overnight sessions on Skype. But we go on. Deep down we know that we can do it. We are gonna make it. From now on we will start sharing our progress, ideas and results.
It starts now, it's our road trip. Please fasten your seat belts, the journey is going to be long and difficult.