Chugging Along

I have been chugging along the AI learning curve. I am liking the University of Washington's Machine Learning Certificate Track very much. The professors are good, and so is the format of the lessons: you follow along and do hands-on exercises. I think all programming courses should be like that.


I think Ben Franklin once said:

Tell me and I forget. Teach me and I remember. Involve me and I learn.

Separately, I learned about Davide Maltoni's HTM research paper. He's a professor at a university in Italy. He used HTM for handwriting recognition and found that it outperformed other machine learning techniques. So that's cool. I am in the process of reading through the paper now.

As I am chugging along, I must tell you, viewers of this blog, about my favorite things. My favorite thing in the world right now is a protein powder called Vega Protein and Greens.

Chirag


Spatial Pooler and NuPIC Error

I am trying to reinstall NuPIC today and get the sine-wave example to run, so I haven't had much time to deliver anything.

As far as NuPIC goes, I found this overview video on YouTube by Rahul pretty good in terms of explaining the implementation details of the Spatial and Temporal Poolers. I think once you have grasped the white paper, it's good to go over this.

My goal is to implement an encoder, a spatial pooler, a temporal pooler, and a CLA classifier.
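To get myself started, here is a rough sketch of what I have in mind for the encoder piece. The names and parameters (encode_scalar, n_bits, w_active, and so on) are just my own placeholders for illustration, not NuPIC's actual ScalarEncoder API:

    import numpy as np

    def encode_scalar(value, min_val=-1.0, max_val=1.0, n_bits=100, w_active=11):
        """Map a scalar to a binary array with a contiguous run of w_active 1s.
        Nearby values share most of their active bits, which is the property
        the spatial pooler relies on."""
        value = min(max(value, min_val), max_val)           # clip to the range
        fraction = (value - min_val) / (max_val - min_val)  # position in [0, 1]
        start = int(round(fraction * (n_bits - w_active)))  # left edge of the run
        sdr = np.zeros(n_bits, dtype=np.int8)
        sdr[start:start + w_active] = 1
        return sdr

    # Two nearby sine-wave samples produce heavily overlapping encodings.
    a = encode_scalar(np.sin(0.10))
    b = encode_scalar(np.sin(0.12))
    print("overlap:", int(np.dot(a, b)))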

I am getting this error while installing NuPIC:

clang: error: invalid deployment target for -stdlib=libc++ (requires OS X 10.7 or later). 

I have OS X 10.10.1, so I am well past that. Not sure why I am getting this error.
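I haven't fixed it yet. My guess (and it is just a guess) is that the deployment target clang is being handed comes from my Python build rather than from the OS itself, so as a quick diagnostic I can check what my Python reports:

    import platform
    import sysconfig

    # What does my Python build think the deployment target is?
    # (Extension builds like NuPIC's often inherit this value.)
    print("OS X version:", platform.mac_ver()[0])
    print("Python build's MACOSX_DEPLOYMENT_TARGET:",
          sysconfig.get_config_var("MACOSX_DEPLOYMENT_TARGET"))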


Should I get a Neuroscience PhD?

Written by Chirag on July 5, 2015 (Sunday) at about 7:15pm

Last week, I was in the UK (London and Scotland) and the Netherlands. I found it difficult to stick with my AI/neuroscience learning schedule because of the stress of travel, so it took me two weeks to get through my routine of five neuroscience-related tasks and three casual neuroscience readings per week. Overall, I am finding Sparse Distributed Memory by Pentti Kanerva a really difficult read. I suspect I will have to come back to it and reread it four to five times.

Curiously, while in London, I was inspired to learn more about DeepMind founder Demis Hassabis. As most AI people know, DeepMind was acquired by Google. Demis Hassabis is a genius (a computer science, chess, and gaming prodigy). He has been able to make contributions to the field of AI by spending significant time and effort learning about neuroscience. Prior to DeepMind, he spent eight years, to be exact, getting a neuroscience PhD and doing related research work. According to several YouTube videos, he takes a systems-neuroscience approach to solving general intelligence problems. In his videos, I was glad to hear his emphasis on understanding the brain better and using what is known about it to build general intelligence.

He also highlighted that solving intelligence is the most important and difficult problem, and that it could take up to 20 years to build human-level AI. All of which motivates me, because I feel it's worth spending my time on this problem. It was interesting that in one of the interviews, DeepMind noted that its software was unable to play strategy games because it didn't have proper imagination/planning functions built on memory. I immediately thought of Jeff Hawkins' memory-based framework. I strongly feel that Numenta has the right approach and will be able to do cooler things, such as planning and strategizing, since it is based on memory and past experience.

Reading more about Demis Hassabis made me wonder whether I should go get a neuroscience PhD. On my trip to London, I really enjoyed the city, and I thought it might not be a bad idea to get a PhD in neuroscience from University College London, where Demis Hassabis got his start. UCL's PhD program is only four years and is cheaper than the ones in the States.

But my intuition is telling me to stay the course of self-learning AI/neuroscience and not fall into the trap of getting a fancy PhD. It also tells me that the modern education system is a sham and that everything can be learned by rigorously applying oneself. However, I do concede that getting a PhD in neuroscience (if I were fortunate enough to get into a school) would put me in touch with like-minded people and would lend some credibility to my work over time.

This week, the most interesting part of my learning was Numenta's HTM algorithm (Jeff Hawkins' work). So far, I have learned an overview of their learning algorithm in three steps:

1) Form a sparse distributed representation of the input

2) Form a representation of the input in the context of previous input

3) Form a prediction based on the current input in the context of previous inputs.

I am hoping to get to the algorithm learning part soon.
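In the meantime, to make those three steps concrete for myself, I put together the toy sketch below. It is my own drastic simplification (fixed random connections, a top-k winner rule, and a plain dictionary standing in for sequence memory), not Numenta's actual Spatial Pooler or Temporal Pooler algorithms:

    import numpy as np

    rng = np.random.RandomState(0)
    N_INPUT, N_COLUMNS, N_ACTIVE = 100, 256, 10

    # Each column looks at a fixed random subset of the input bits.
    connections = rng.rand(N_COLUMNS, N_INPUT) < 0.2

    def spatial_pool(input_bits):
        """Step 1: form a sparse representation by keeping only the
        N_ACTIVE columns with the largest overlap with the input."""
        overlaps = connections.dot(input_bits)
        winners = np.argsort(overlaps)[-N_ACTIVE:]
        return frozenset(winners.tolist())

    # Steps 2 and 3, very crudely: remember which sparse pattern followed
    # which (context = previous input), then predict by looking up what
    # followed the current pattern last time.
    transitions = {}

    def learn_and_predict(prev_sdr, current_sdr):
        if prev_sdr is not None:
            transitions[prev_sdr] = current_sdr   # step 2: record the transition
        return transitions.get(current_sdr)       # step 3: prediction for the next step

    # Feed a short repeating sequence; predictions appear on the second pass.
    sequence = [(rng.rand(N_INPUT) < 0.1).astype(int) for _ in range(4)]
    prev = None
    for step in range(12):
        sdr = spatial_pool(sequence[step % 4])
        prediction = learn_and_predict(prev, sdr)
        print("step", step, "prediction available:", prediction is not None)
        prev = sdr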