Intelligence equals Zipping Information

Just been chugging along through Coursera courses and my weekly readings.  This week, I came across the Memory Networks paper from FB's AI research group.  I thought it was interesting, particularly because they are having some success with it.  The model effectively stores information in a memory array and retrieves it to make predictions. To me this adds further evidence that brains work via a memory system, and this approach from the FB folks is very similar to what Jeff Hawkins is proposing.
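Purely to fix the idea in my own head, here is a tiny sketch of that store-then-retrieve pattern.  This is not the paper's actual architecture (which learns embeddings and scoring functions); it is just a memory array plus a relevance-scored lookup, with hand-made vectors.

# Toy "memory" of fact vectors plus a dot-product lookup -- a bare-bones stand-in
# for the store/score/retrieve loop described in the Memory Networks paper.
import numpy as np

memory = []  # each entry: (vector representation, the fact it encodes)

def store(vector, fact):
    memory.append((np.asarray(vector, dtype=float), fact))

def retrieve(query):
    """Return the stored fact whose vector scores highest against the query."""
    scores = [float(np.dot(vec, query)) for vec, _ in memory]
    return memory[int(np.argmax(scores))][1]

# Hand-made 3-d vectors standing in for learned embeddings.
store([1.0, 0.0, 0.0], "John is in the kitchen")
store([0.0, 1.0, 0.0], "Mary picked up the ball")
print(retrieve([0.9, 0.1, 0.0]))  # -> "John is in the kitchen"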

The second thing I discovered this week is what the so-called artificial intelligence methods (such as logistic classifiers fitted with gradient descent) are really doing.  In our brains we make predictions via our memory system, possibly using sparse codes.  Sparse codes are effectively a way of zipping and storing information.  In the current state-of-the-art artificial intelligence methods, the derived/fitted regression parameters are effectively a mathematical way of zipping information.  Thus the point of this entire "building intelligence" exercise is to store information in ways that help you make accurate predictions.
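To make that concrete for myself, here is a toy sketch (my own made-up example, not from any paper): two thousand labelled points get "zipped" into just three fitted numbers, and those three numbers are all you then need to predict.

import numpy as np

rng = np.random.default_rng(0)

# 2,000 two-dimensional points, labelled by which side of a line they fall on.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)

# Logistic classifier fitted by plain gradient descent on the log loss.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# The 2,000 points are now "zipped" into three numbers that still predict well.
preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print("fitted parameters:", w, b)
print("training accuracy:", np.mean(preds == y))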

Coursera Courses

I have signed up for two Coursera courses to up my knowledge of modern machine learning techniques.  I wanted to do this to get more hands-on experience with data manipulation in Python.  One of the courses is a machine learning track for which you can get a certificate, so I paid for it.  Not that I care for a certificate, but it's useful as motivation.  I think it will help me with NuPIC, as I find my limited hands-on Python experience to be the biggest roadblock.  I would like to take a robotics course as well so I can start incorporating that with NuPIC.

Coursera: Courses

  1. Machine Learning by Andrew Ng: https://www.coursera.org/learn/machine-learning
  2. Machine Learning Certificate by University of Washington: https://www.coursera.org/course/machlearning

Separately, I thought the last chapter of Mr. Kanerva's Sparse Distributed Memory book was quite enlightening.  Kanerva basically summarizes the roles of the senses (including encoding), memory and motor manipulation in building an autonomous machine.  I would recommend that anyone interested in a NuPIC-type intelligence framework read at least the summary section of Kanerva's Sparse Distributed Memory book.

 

Coding up Augmented Spatial Pooler

As of now, I am focusing all my efforts on coding up the Augmented Spatial Pooler as defined in this paper (still working on it as of 20 September 2015):

http://www.ict.griffith.edu.au/~johnt/publications/IJMLC2012.pdf

We have put a binary version of the Spatial Pooler here:

Our Spatial Pooler implementation on Github

We have also put an Augmented Spatial Pooler version of the code here:

Our Augmented Spatial Pooler implementation on Github

Still Struggling with the NuPIC error

Written by Chirag on Sunday, August 9, 2015 (10:07pm)

I am still struggling to install NuPIC on my laptop.  I had gotten it to work three times before, and now something just seems off.  I have tried everything from reinstalling Python and Xcode to relinking and unlinking.  Below is the error I am getting:

ld: library not found for -lpython2.7

clang: error: linker command failed with exit code 1 (use -v to see invocation)

error: command 'clang++' failed with exit status 1

I am so desperate for help now that I have put up a project on upwork.com to resolve this.  The trouble is that I am not that well versed in command-line syntax on my Mac laptop.  I have tried everything I can possibly think of, but even my experienced friends are unable to resolve it.  So I have to get some help.

On the more positive side, I have been keeping up with my reading schedule. I even got one fan for my blog.

 

Spatial Pooler and NuPIC Error

I am trying to reinstall NuPIC today and get the sine-wave example to run, so I haven't had much time to deliver anything else.

As far as NuPIC goes, I found this overview video on YouTube by Rahul pretty good in terms of explaining the implementation details for the Spatial and Temporal Poolers.  I think once you have grasped the white paper, it's good to go over this.

My goal is to implement the encoder, spatial pooler, temporal pooler and CLA classifier.
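As a warm-up for the encoder piece, here is a rough sketch of the general idea as I understand it (my own simplification with made-up sizes, not NuPIC's actual scalar encoder): a scalar becomes a long binary array with a small block of active bits whose position tracks the value, so nearby values share bits.

import numpy as np

def encode_scalar(value, min_val=0.0, max_val=100.0, n_bits=64, n_active=8):
    """Return an n_bits-long binary array with n_active consecutive 1s."""
    value = min(max(value, min_val), max_val)            # clamp into range
    fraction = (value - min_val) / (max_val - min_val)   # 0.0 .. 1.0
    start = int(round(fraction * (n_bits - n_active)))   # left edge of the active block
    bits = np.zeros(n_bits, dtype=np.int8)
    bits[start:start + n_active] = 1
    return bits

# Nearby values share many active bits; distant values share few or none.
a, b, c = encode_scalar(20), encode_scalar(22), encode_scalar(80)
print("overlap(20, 22):", int(np.dot(a, b)))  # high overlap
print("overlap(20, 80):", int(np.dot(a, c)))  # no overlap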

I am getting this error while installing NuPIC:

clang: error: invalid deployment target for -stdlib=libc++ (requires OS X 10.7 or later). 

I have OS X 10.10.1, so I am already past that requirement.  Not sure why I am getting this error.

 

On to the Spatial Pooler

Written by Chirag on Sunday, July 26, 2015 (around 6:30pm)

This week I am working on the Spatial Pooler from Numenta's Cortical Learning Algorithm white paper.  Instead of my typical CLA (Cortical Learning Algorithm) reading, I will focus on implementing and testing my own version of the CLA Spatial Pooler (page 34 has the pseudocode) so that I can understand it better.

The CLA has two key input-pooling components. Effectively, in Jeff Hawkins' words, pooling is the mapping of a set of inputs (visual, auditory, smell, sensorimotor) onto a single output pattern. There are two basic forms of pooling.

1) Spatial Pooler: "Spatial Pooling" maps two or more patterns together based on bit overlap. If two patterns share a sufficient number of bits, they are mapped onto a common output pattern (a toy sketch of this overlap step follows this list).

2) Temporal Pooler: "Temporal Pooling" maps two or more patterns together based on temporal proximity. If two patterns occur adjacent in time, they are likely to have a common cause in the world.
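Here is that toy sketch of just the overlap-and-winner-selection step (my own simplification with made-up column counts; no boosting, no learning, nothing like the full page-34 algorithm):

import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_columns = 100, 50

# Each column is connected to a random subset of the input bits.
connected = (rng.random((n_columns, n_inputs)) < 0.2).astype(int)

def spatial_pool(input_bits, n_winners=5):
    """Map an input bit pattern onto the set of best-overlapping columns."""
    overlaps = connected @ input_bits                 # ON input bits seen by each column
    return set(np.argsort(overlaps)[-n_winners:])     # indices of the winning columns

# Two inputs that share most of their ON bits land on largely the same columns.
x = (rng.random(n_inputs) < 0.1).astype(int)
y = x.copy()
y[np.flatnonzero(x)[0]] = 0                           # turn one ON bit off
print("shared winning columns:", len(spatial_pool(x) & spatial_pool(y)), "out of 5")

The real Spatial Pooler also learns, adjusting its connections (synapse permanences) as inputs arrive, and that is the part I expect to struggle with.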

Quite honestly, given my limited Python skills, I am finding it very difficult to code up the Spatial Pooler.  I am thinking it will take me at least six months, but I think the deep dive will be worth it.

 

My Plan for Building a Brain

Written by Chirag on Sunday, June 14, 2015

I have formulated a Comprehensive Plan 1.0 to get me really deep into the weeds of learning about artificial intelligence. I have a very ambitious goal of developing something brain-like, even if it's only 100 x 100 neurons :-).  I am trying to develop the algorithm on my own so I can manipulate it and understand it easily.  The ultimate goal would be to build some product with it.  I am very set on creating a product that is physical and interacts with the physical world.

I will most likely follow Jeff Hawkins' HTM algorithms or Cireneikual's version (a blogger).  I am purposefully avoiding academic degree programs because I think they will slow me down.  Also, school tends to take all the fun out of your hobbies.  I have learned over time (through experience) that anyone can become an expert at anything provided they spend enough time on the material.

Before I present my plan to the world, it's worth emphasizing why I am doing this.  I started picking up AI as a hobby because I was just plain interested in brains and how intelligence works.  I left the software industry eight years ago (I am currently a bond analyst) and I have always wondered why we have not made meaningful progress in AI.  I want to contribute to building intelligent machines, not just machines mimicking intelligent behavior.  I think companies like Vicarious and Numenta are at the forefront of creating intelligent machines.  If you follow their trails (books, papers, people) and read Ray Kurzweil's The Singularity is Near (2005), it becomes pretty clear that technology based on cortical principles will be the future of intelligent machines, and that it will affect all matters which concern humanity and this universe.

Approach and Plan to Build a Brain:

I am a very slow learner, and my approach to learning is tailored to my learning capabilities.  My main advantage is persistence, or drive as they call it. Basically, I learn through repetition and in very small chunks. My day job requires a lot of hours, and it's impossible to put in any significant time during business days.  I read brain- and AI-related literature during my commute to NYC (40 minutes each way).  On Sundays, I do the heavy lifting towards my goal.

My day job is somewhat unrelated.  I have a medium-term plan of someday committing fully to building "Intelligent Machines". My background is a bachelor's in computer engineering and a master's in computational finance. Over the past eight years, I haven't really done much coding (I don't count VBA as coding). So it's been a real struggle to get up and running on Jeff Hawkins' NuPIC algorithms (it's all Python based).

Here is my current plan/schedule.  Each item in my plan addresses one of my weaknesses.  As of June 14, 2015 (Sunday), this is what I believe will push me into understanding all that is NuPIC.  I want to code my own version of the HTM (Jeff Hawkins) algorithm as a starting point. After that, I want to run a lot of examples and really get good at using it.

Here is my weekly activity schedule.  Apologies, I will add the links to the books below in the coming days. (Added Thinking Recursively on July 12, 2015 at 3:52pm.)

Goal: Build a Brain

  1. Review all things learned (business-day commute). This is a critical step for me. I like to keep what I have learned in a repeating loop, so that it becomes a handy tool in my daily learning. I was able to significantly improve my engineering GPA through this approach (starting junior year). Constant repetition also meant that I never had to study for exams, which meant I had no stress. I liked that feeling. I want to keep it and not be overwhelmed by my hobby. I have already created a set of notes/learnings for this week; once they become more coherent, I will post them online.
  2. Read 5 pages per week of Godel, Escher, Bach (GEB) (business-day commute). In this book, I am hoping D. Hofstadter answers the key question: how can brain chemicals turn into a living/thinking system? Why does a brain think it is alive when its subcomponents are not alive? Modern so-called artificial intelligence mimics intelligent behavior, but it is not intelligent or alive. I am hoping GEB answers this question for me.
  3. Read 5 pages per week of The Future of the Brain (business-day commute). This is a 2014 book. I am hoping it will give me a sense of where the neuroscience and AI fields are going, who the big people in the field are, and what their views are.
  4. Read the Numenta white paper, 5 pages per week, and take digital notes (weekend).
  5. Read 5 pages per week on Sparse Distributed Representations by Kanerva, and take digital notes (weekend). It has been shown that brains store only a few details of the current state of the world. Jeff Hawkins' HTM uses Sparse Distributed Representations (SDRs) to reflect this attribute. I want to understand inside and out what SDRs are and how they are used. Pentti Kanerva has a book on it, so you know I will be reading it, coding it, rereading it and recoding it.
  6. Read 5 pages per week of a book on Bayesian inference by a British computational neuroscientist, and take digital notes (weekend). You gotta read this!! Related resources: a Berkeley paper on Bayes' theorem, Bayesian Probability Theory, and http://greenteapress.com/thinkbayes/html/index.html, which is straight-up Python examples with explanations; you know I am gonna read this! I want to get really good at understanding and applying Bayes' inference, maximum likelihood estimators and Markov chains. There is a lot of evidence that humans are Bayes' rule observers (I learned this from the book below). A tiny worked Bayes example follows this list.
  7. Read Probabilistic Models of the Brain (weekend).
  8. Practice Python coding for at least one hour, or read a Python data science book (week or weekend). Use Github and look at AI examples. Right now I am using Codecademy and am reading Data Science from Scratch.
  9. Read Thinking Recursively by Eric S. Roberts, 5 pages per week (weekend/business day).
  10. Update the blog (weekend). I want to do this step because I want to attract like-minded people and get a useful dialogue going. You know, get the crowdsourcing going a bit. Set off a global meme on "building a brain", in a crowdsourced way.
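Here is that tiny worked Bayes example (the numbers are made up, purely to illustrate the prior-to-posterior update that the "Bayes' rule observer" idea refers to):

# Update a prior belief about rain after seeing clouds.
prior_rain = 0.3                      # P(rain) before looking outside
p_clouds_given_rain = 0.9             # P(clouds | rain)
p_clouds_given_no_rain = 0.4          # P(clouds | no rain)

# P(clouds) by the law of total probability.
p_clouds = (p_clouds_given_rain * prior_rain
            + p_clouds_given_no_rain * (1 - prior_rain))

# Bayes' rule: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
posterior_rain = p_clouds_given_rain * prior_rain / p_clouds
print(f"P(rain | clouds) = {posterior_rain:.2f}")   # belief rises from 0.30 to 0.49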

The Beginning!

Written by Chirag on Sunday, June 7, 2015.

Hello,

My name is Chirag. I have been intrigued by Jeff Hawkins' 2004 book On Intelligence.

I believe this approach will be the future of artificial intelligence. My goal is to learn everything about the underlying methods of Jeff Hawkins' approach.  Jeff Hawkins has a company called Numenta, which has released an open source version of its algorithms. The open source community is called NuPIC. The mission of this project is to build and support a community interested in machine learning and machine intelligence based on modeling the neocortex and the principles upon which it works.

In particular, Numenta's open source online learning system is based on Hierarchical Temporal Memory (HTM).  The underlying algorithm uses Sparse Distributed Representations as well.  To learn more about Sparse Distributed Representations, I am reading Pentti Kanerva's book Sparse Distributed Memory.

As of June 7, 2015 (Sunday), I am reading and learning from Kanerva's book.  The book is very well written.  It first goes through the basic math and statistics needed to understand the rest of the book.

After reading the book, I hope to implement my own version of the algorithm in Python.  I know very little Python. Fortunately, someone has a basic version of the algorithm noted here.
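To check my own understanding before attempting a proper implementation, here is a very rough sketch of the core read/write idea (toy sizes and my own simplification; definitely not Kanerva's full model or the implementation linked above):

import numpy as np

rng = np.random.default_rng(1)
n_bits, n_locations, radius = 256, 2000, 112

# Fixed random "hard locations"; each one keeps a counter per bit.
hard_addresses = rng.integers(0, 2, size=(n_locations, n_bits))
counters = np.zeros((n_locations, n_bits), dtype=int)

def nearby(address):
    """Locations whose address is within the Hamming-distance radius."""
    return np.count_nonzero(hard_addresses != address, axis=1) <= radius

def write(address, data):
    counters[nearby(address)] += 2 * data - 1        # +1 for a 1 bit, -1 for a 0 bit

def read(address):
    return (counters[nearby(address)].sum(axis=0) > 0).astype(int)

# Store a pattern at its own address, then recall it from a noisy cue.
pattern = rng.integers(0, 2, size=n_bits)
write(pattern, pattern)
cue = pattern.copy()
flipped = rng.choice(n_bits, size=20, replace=False)
cue[flipped] = 1 - cue[flipped]                      # corrupt 20 of the 256 bits
print("bits recovered:", int(np.sum(read(cue) == pattern)), "of", n_bits)

With these toy settings the corrupted cue should still activate enough of the same hard locations to recover the stored pattern, which is the property that makes this memory model so appealing to me.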

Happy Artificial Intelligence Learning!