Augmented Spatial Pooling and Spatial Pooling for Greyscale Images

I am trying to better understand spatial pooling and want to code up a working version of it on my own.  I found two works (shown below) by Professor John Thornton in Australia to be quite useful in this regard.  I hope to implement these in time.

I had trouble keeping up with my overall schedule this week, but I plan to finish it all up by tomorrow.


Stressful Week

It was a long and stressful work week, and I could not focus on NuPIC, which was annoying.  But the contractors on Upwork told me that my Mac's system Python has been removed, and that is why NuPIC is not working properly.  So I basically need to back up and restore my laptop.  I am not looking forward to doing this, but I won't get a chance to work on it till the weekend.

Still Struggling with the NuPIC Error

Written by Chirag on Sunday, August 9, 2015 (10:07pm)

I am still struggling to install NuPIC on my laptop.  I had gotten it to work three times before, and now something just seems off.  I have tried everything from reinstalling Python to reinstalling Xcode, relinking and unlinking.  Below is the error I am getting:

ld: library not found for -lpython2.7

clang: error: linker command failed with exit code 1 (use -v to see invocation)

error: command 'clang++' failed with exit status 1

I am so desperate for help now that I have put up a project on upwork.com to resolve this.  The trouble is that I am not that well versed in command-line syntax on my Mac laptop.  I have tried everything I can possibly think of, but even my experienced friends are unable to resolve it.  So I have to get some help.
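Before handing it over completely, there is one sanity check I can run from Python itself (just a diagnostic sketch on my part, not a known fix): ask Python where its own library directory is, since the linker is failing to find libpython2.7 and may simply not be looking in the right place.

# Diagnostic sketch: where does this Python expect its library to live?
import sysconfig
print(sysconfig.get_config_var('LIBDIR'))     # directory that should hold libpython2.7
print(sysconfig.get_config_var('LDLIBRARY'))  # the library file name itself

If that directory exists and actually contains the library, pointing the build at it (for example via an -L linker flag) might be enough; if the file is missing, that would confirm the system Python really has been removed.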

On the more positive side, I have been keeping up with my reading schedule. I even got one fan for my blog.


Spatial Pooler and NuPIC Error

I am trying to reinstall NuPIC today and get the sine-wave example to run, so I haven't had much time to deliver anything.

As far as NuPIC goes, I found this overview video on YouTube by Rahul pretty good in terms of explaining the implementation details of the Spatial and Temporal Poolers.  I think once you have grasped the white paper, it's good to go over this.

My goal is to implement the encoder, spatial pooler, temporal pooler, and CLA classifier.
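For planning purposes, here is how I picture those four pieces chaining together on each timestep.  This is only a sketch with placeholder names I made up, not NuPIC's actual API:

# Placeholder sketch of the CLA pipeline I plan to build (not NuPIC's API).
def cla_step(encoder, spatial_pooler, temporal_pooler, classifier, raw_value):
    bits = encoder.encode(raw_value)          # raw input -> binary pattern
    columns = spatial_pooler.compute(bits)    # pattern -> sparse active columns
    cells = temporal_pooler.compute(columns)  # columns -> cells with temporal context
    return classifier.infer(cells)            # cell activity -> predicted next value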

I am getting this error while installing NuPIC:

clang: error: invalid deployment target for -stdlib=libc++ (requires OS X 10.7 or later). 

I have OS X 10.10.1, so I am already updated... not sure why I am getting this error.
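One thing I can at least check from Python (my guess, and only a guess, is that the build is picking up a stale deployment target rather than my actual OS version):

# Compare the OS version Python sees with any deployment target the build inherits.
import os
import platform
import sysconfig
print(platform.mac_ver()[0])                                 # actual version, e.g. 10.10.1
print(os.environ.get('MACOSX_DEPLOYMENT_TARGET'))            # None unless set in my shell
print(sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET'))  # target baked into this Python

If either of the last two prints something older than 10.7, that would explain clang's complaint even on an up-to-date OS.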


On to the Spatial Pooler

Written by Chirag on Sunday, July 26, 2015 (around 6:30pm)

This week, I am working on the Spatial Pooler from Numenta's Cortical Learning Algorithm (CLA) white paper.  Instead of the typical CLA reading, I will focus on implementing and testing my own version of the CLA Spatial Pooler (page 34 has the pseudocode), so that I can understand it better.

CLA has two key input-pooling components. Effectively, in Jeff Hawkins' words, pooling is the mapping of a set of inputs (visual, auditory, smell, sensorimotor) onto a single output pattern. There are two basic forms of pooling.

1) Spatial Pooler: "Spatial Pooling" maps two or more patterns together based on bit overlap. If two patterns share a sufficient number of bits, they are mapped onto a common output pattern (a toy sketch of this follows below).

2) Temporal Pooler: "Temporal Pooling" maps two or more patterns together based on temporal proximity. If two patterns occur adjacent in time, they are likely to have a common cause in the world.
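To make the bit-overlap idea concrete, here is a toy sketch of my own (a drastic simplification, nothing like Numenta's real Spatial Pooler), treating a pattern as a set of active bit positions:

# Toy sketch of pooling by bit overlap (my simplification, not Numenta's SP).
def overlap(pattern1, pattern2):
    # patterns are sets of active bit indices; overlap = shared active bits
    return len(pattern1 & pattern2)

THRESHOLD = 3  # how many shared bits count as "sufficient" (arbitrary here)
a = {1, 4, 9, 12, 20}
b = {1, 4, 9, 12, 33}  # shares four bits with a
c = {2, 5, 7, 30, 40}  # shares none with a
print(overlap(a, b) >= THRESHOLD)  # True:  a and b map to a common output
print(overlap(a, c) >= THRESHOLD)  # False: a and c keep separate outputs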

Quite honestly, given my limited Python skills, I am finding it very difficult to code up the Spatial Pooler.  I am thinking it will take me at least six months, but I think this will be worth the deep dive.


Grinding It Out!

Written by Chirag on Sunday, July 19, 2015 at 6:30pm

I ground through my weekly schedule.  One coincidental fact to add: the recursion book I am reading quoted Douglas Hofstadter's GEB.  This gives me further confidence that recursion might be a useful tool for building intelligent machines.

Here is a basic thought process on when to use recursion: a problem must have three distinct properties.  These are directly quoted from Thinking Recursively by Eric S. Roberts:

* It must be possible to decompose the original problem into simpler instances of the same problem.

* Once each of these simpler subproblems has been solved, it must be possible to combine these solutions to produce a solution to the original problem.

* As the large problem is broken down into successively less complex ones, those subproblems must eventually become so simple that they can be solved without further subdivision.

At the end of the first chapter, there were three problems to solve.  The last (and the difficult one) asked you to find a lightweight (counterfeit) coin among 16 coins: if you had a balance you could use to compare two groups of coins, how many trials would it take to find the counterfeit?  The standard answer is four trials, but you can do better than that; the answer here is three.  I have uploaded a recursive solution (Recursion1\divideandconquer.py) to my GitHub account.
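For anyone curious, the idea behind the three-trial answer looks roughly like this (a sketch of the divide-and-conquer approach; not necessarily identical to the file on my GitHub): split the coins into thirds, weigh two of the thirds against each other, and recurse into whichever group must contain the light coin.

# Sketch: find the lighter coin by weighing thirds of the pile against each other.
def find_light(coins):
    # coins is a list of (index, weight) pairs containing exactly one lighter coin
    if len(coins) == 1:
        return coins[0][0]
    k = (len(coins) + 2) // 3  # ceil(n / 3)
    left, right, rest = coins[:k], coins[k:2 * k], coins[2 * k:]
    left_weight = sum(w for _, w in left)    # one balance trial compares
    right_weight = sum(w for _, w in right)  # these two group weights
    if left_weight < right_weight:
        return find_light(left)
    if right_weight < left_weight:
        return find_light(right)
    return find_light(rest)  # the pans balanced, so the fake is in the rest

weights = [1.0] * 16
weights[11] = 0.9  # plant the counterfeit
print(find_light(list(enumerate(weights))))  # prints 11, using three weighings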

I also ordered Rajesh Rao's book Bayesian Brain; this will be on the next reading schedule. I also thought more about reaching out to Neuroscience PhD students. I really want to understand what I could gain from attending a Neuroscience PhD program, so I have reached out to the NJIT, UC San Diego, and UT Austin Neuroscience programs.

Strange Loops

Written by Chirag on Sunday, July 12, 2015 at about 2:35pm

Over the past week, I have been more aggressive in trying to keep up with my schedule.   I have added one more book to my weekly reading list and have defined a key question that needs to be answered by science.

1) I have added Thinking Recursively by Eric S. Roberts.

2) The key question is: why does our brain think it is alive?  To me it is endlessly fascinating that you can put together a bunch of chemicals (as in what's in our brain), and if they're put together exactly right, you create a system that thinks it is alive.  How does this happen?  To me, the only person who has even tried to ask this question and come up with a solution is Douglas Hofstadter in his book GEB. More on this a bit later!

Recursion and Strange Loops

Recursion: Separately, my view is that Strange Loops (Douglas Hofstadter's theory) and recursion are quite related.  Also, as I read more about HTM, I don't think the brain uses Bayes' Theorem directly.  The resemblance is coincidental: the brain is a memory system, so it is constantly using priors to make future predictions.  That gives us the feeling that it is using Bayesian inference to come up with the most likely outcome.

It's worth emphasizing that the algorithms of Vicarious (the AI company) are called Recursive Cortical Networks (RCNs). As per Wikipedia, an RCN is a visual perception system that interprets the contents of photographs and videos in a manner similar to humans. The system is powered by a balanced approach that takes sensory data, mathematics, and biological plausibility into consideration. On October 22, 2013, Vicarious announced its AI was reliably able to solve modern CAPTCHAs, with character recognition rates of 90% or better.

Also, I think recursion by itself is a pretty cool tool to have in your arsenal, as it is a super awesome problem-solving technique.  I have coded my first recursion example in Python. Yes, I am getting a little more comfortable with Python, and I have pushed it out to my GitHub account here: https://github.com/g402chi
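I won't paste the GitHub file here, but a first recursion example in the same spirit is the classic factorial, which shows the base case and the self-call in miniature:

# Classic first recursion: a base case plus a call to a smaller instance of itself.
def factorial(n):
    if n <= 1:
        return 1  # base case: simple enough to answer directly
    return n * factorial(n - 1)  # solve the smaller problem, then combine

print(factorial(5))  # 120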

Strange Loops: I am reading GEB by Douglas Hofstadter, which lays out his theory of Strange Loops. Examples of Strange Loops are Catch-22s, or "which came first, the chicken or the egg?" I like to use examples first because that is usually the easiest way to get a point across.

As per Wikipedia: "A strange loop arises when, by moving only upwards or downwards through a hierarchical system, one finds oneself back where one started.

Strange loops may involve self-reference and paradox. The concept of a strange loop was proposed and extensively discussed by Douglas Hofstadter in Gödel, Escher, Bach, and is further elaborated in Hofstadter’s book I Am a Strange Loop, published in 2007.

A tangled hierarchy is a hierarchical consciousness system in which a strange loop appears. In short, a strange loop is a paradoxical level-crossing feedback loop”

I realize Douglas Hofstadter, in his book GEB, is absolutely trying to answer the right question: how does the brain of any given animal think it is alive?  This is the key, tremendously important question that modern science should be trying to answer.  My conjecture is that we cannot build truly intelligent machines until we answer it. My guess is that if nature is able to build a brain that thinks it is alive, it must be possible for us to do it too.  We need to figure this out. I am not even talking about a human brain; take a C. elegans brain or an ant brain, however small you would like it to be. All these creatures are self-aware and are acting on their own behalf only.

As per Douglas Hofstadter,

“the psychological self arises out of a similar kind of paradox. We are not born with an ‘I’ – the ego emerges only gradually as experience shapes our dense web of active symbols into a tapestry rich and complex enough to begin twisting back upon itself. According to this view the psychological ‘I’ is a narrative fiction, something created only from intake of symbolic data and its own ability to create stories about itself from that data. The consequence is that a perspective (a mind) is a culmination of a unique pattern of symbolic activity in our nervous systems, which suggests that the pattern of symbolic activity that makes identity, that constitutes subjectivity, can be replicated within the brains of others, and perhaps even in artificial brains.”

Should I Get a Neuroscience PhD?

Written by Chirag on July 5, 2015 (Sunday) at about 7:15pm

Last week, I was in the UK (London, Scotland) and the Netherlands. I found it difficult to stick with my AI/Neuroscience learning schedule because of the stress of travel, so it took me two weeks to get through my routine of five neuroscience-related tasks and three casual neuroscience readings per week.   Overall, I am finding Sparse Distributed Memory by Pentti Kanerva a really difficult read. I suspect I will have to come back to it and reread it four to five times.

Curiously, while in London, I was inspired to learn more about DeepMind founder Demis Hassabis.  As most AI people know, DeepMind was acquired by Google. Demis Hassabis is a genius (a computer science, chess, and gaming prodigy). He has been able to contribute to the field of AI by spending significant time and effort learning about Neuroscience: prior to DeepMind, he spent eight years, to be exact, getting a neuroscience PhD and doing related research work.  According to several YouTube videos, he takes a systems-neuroscience approach to solving General Intelligence problems.  In his videos, I was glad to hear his emphasis on understanding our brain better and using what is known about the brain to build general intelligence.

He also highlighted that solving intelligence is the most important and difficult problem, and that it could take up to 20 years to build human-level AI.  All of this motivates me, because I feel it's worth spending my time on this problem. It was interesting that in one of the interviews, DeepMind noted that its software was unable to play strategy games because it didn't have the proper imagination/planning functions, which have to be based on memory.  I immediately thought of Jeff Hawkins' memory-based framework. I strongly feel that Numenta has the right approach and will be able to do cooler things, such as planning and strategizing, because it is based on memory and past experiences.

Reading more about Demis Hassabis made me wonder whether I should go get a Neuroscience PhD.  On my trip, I really enjoyed London, and I thought it might not be a bad idea to pursue a PhD in Neuroscience at University College London, where Demis Hassabis got his start. UCL's PhD program is only four years and is cheaper than ones in the States.

But my intuition is telling me to stay the course of self-learning AI/Neuroscience and not fall into the trap of getting a fancy PhD.  It also tells me that the modern education system is a sham and that everything can be learned by rigorously applying oneself. However, I do concede that getting a PhD in Neuroscience (if I were fortunate enough to get into a school) would put me in touch with like-minded people and would lend some credibility to my work over time.

The most interesting part of my learning this week was Numenta's HTM algorithm (Jeff Hawkins' work).  So far, I have learned the overview of their learning algorithms in three steps (a made-up sketch of my own follows after the list).

1) Form a sparse distributed representation of the input

2) Form a representation of the input in the context of previous input

3) Form a prediction based on the current input in the context of previous inputs.
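As a placeholder for myself, this is how I currently imagine those three steps looking in code.  These names are entirely made up by me; I haven't reached the implementation details yet:

# Made-up sketch of the three steps (placeholder names, not Numenta's code).
def htm_step(spatial_pooler, temporal_memory, encoded_input, previous_state):
    sdr = spatial_pooler.compute(encoded_input)           # 1) sparse representation
    state = temporal_memory.compute(sdr, previous_state)  # 2) input in context
    return temporal_memory.predict(state), state          # 3) prediction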

I am hoping to get to the algorithm learning part soon.

Learning Bayes’ Theorem

Written by Chirag on June 21, 2015 at 8:00PM Eastern Time

This is week three of my personal goal of learning/implementing all that is brain- and neuroscience-related.  It's going pretty well.  I have been quite diligent in keeping up with my nine self-assigned tasks (see the previous post's table).  This week, what stuck out to me is Bayes' Theorem.  It is used quite heavily in neuroscience modeling. In fact, a book I am reading, "Probabilistic Models of the Brain," noted this:

“There is now substantial evidence showing that humans are good Bayesian observers.”

Also, a post by Vicarious cofounder Dileep George on his blog references a paper that uses Bayesian inference.  I have printed out the paper and it's on my reading list.  But the key question here again is: what the heck is Bayesian inference?

So from the above statement and other papers I have been reading, it has become pretty clear that I must read all about Thomas Bayes and his Theorem.

In graduate school, and most probably in undergrad statistics classes, I learned about Bayes' Theorem.  Prior to my readings, I had a vague memory of it being something related to conditional probabilities.  I think it's best to drop an example right about now; we all learn from examples, and I am not sure why people start with abstract thinking.

Bayes' Theorem basically gives you what is called the posterior probability; it is a way of reversing conditional probability.  If you've got the probability of a symptom given a disease, how can you get the reverse: given the symptom, what is the likelihood you have the disease?  The reverse probabilities are almost always more useful.

Apply Bayes' Theorem to P(A|B) and you get P(B|A) = P(A|B) * P(B) / P(A).

One of the most famous mathematicians, Carl Jacobi, used to repeat to himself "invert, always invert," as in: solve problems backwards.  Bayes' Theorem lets you invert, so it's a useful tool.

Here is a seriously made-up example...

Probability of Heartburn given that you’re an Investment Banker is 90%

Probability of Heartburn given that you’re a Techie is 1%

Now, what is the probability that you're an investment banker given that you have heartburn? (This is the posterior probability.)

or Probability(Investment Banker|Heartburn)

Well, it turns out you can figure that out by knowing the prior probability that you're an investment banker and the percentage of the population that has heartburn.

Let's say 3% of the population has heartburn.

Let’s say 1% of the population is investment bankers.

Probability(Investment Banker|Heartburn) = Probability(Heartburn|Investment Banker) * Probability(Investment Banker) / Probability(Heartburn)

= 90% * 1% / 3% = 30%

This means that if you have heartburn, there is a 30% chance that you're an investment banker.  I know; in this made-up world, it would suck to be an investment banker.

Now let's see if we can figure out Probability(Techie|Heartburn).

We need some additional information: what fraction of the population is techies?  In this utopian world, 30% of the population is techies.

Probability(Techie|Heartburn) = Probability(Heartburn|Techie) * Probability(Techie) / Probability(Heartburn)

= 1% * 30% / 3%

= 10% chance you're a techie, if you've got heartburn.
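Since the arithmetic is identical both times, here it is as a tiny Python function (using only my made-up numbers from above):

# Bayes' Theorem: P(career|symptom) = P(symptom|career) * P(career) / P(symptom)
def posterior(p_symptom_given_career, p_career, p_symptom):
    return p_symptom_given_career * p_career / p_symptom

p_heartburn = 0.03  # 3% of the population has heartburn
print(posterior(0.90, 0.01, p_heartburn))  # banker: 0.90 * 0.01 / 0.03 = 0.30
print(posterior(0.01, 0.30, p_heartburn))  # techie: 0.01 * 0.30 / 0.03 = 0.10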

So basically, you can use the above kind of trickery to figure out what someone's career might be.  Not really; this is just an example. In the real world, one can use this kind of inference technique to figure out, for example, the probability that you have a certain disease based on the symptoms you show.  Two diseases may have similar symptoms, but one is extremely rare.  In that case, the correct inference may be, with a high degree of certainty, that the person has the more common disease and not the super-rare deadly one.  We draw these kinds of conclusions in our everyday analysis without realizing it.

Take the heartburn example: bankers are notorious for working long hours and regularly eating late at night from a young age.  My doctor recently told me that almost all young people who come to his office with heartburn are investment bankers from a certain firm.  So if you're a young person with heartburn, the doctor might infer that you're an investment banker, even though investment bankers aren't a large share of the population.

I am sure I will come up with examples more relevant to brains as we go along, but I just wanted to give you all a flavor of what is to come.