My bet is on Chomsky

My bet is that Chomsky is right in the Chomsky vs Norvig debate, and Geoffrey Hinton has just poured some amazing fuel on the fire.

If you don’t know what the Chomsky vs Norvig debate is, it started with Chomsky deriding AI researchers for relying heavily on opaque probabilistic models for speech and vision instead of doing “real” science to figure out how the brain actually works. Norvig provided a rebuttal in which he claimed that the very success of these probabilistic models “is evidence (but not proof) of a scientifically successful model.”

And if you have witnessed the huge success that Deep Learning has had in processing text and images, you might agree that Norvig is right: there must be something deeply right about these probabilistic models. However, most researchers in the know will tell you that Deep Learning is highly problematic because it requires a huge amount of data to train a good system. I have believed that because the way these systems are trained is so different from how the brain learns, they simply cannot be evidence of a scientifically correct model. These Deep Learning systems are also easily fooled once trained.

Then comes Geoffrey Hinton, famous for helping ignite the Deep Learning revolution, with a new model he calls the Capsule model, which is based on how he thinks the brain processes images. His model uses unsupervised learning to extract low-level features into a linear manifold, and from there training with labels requires very few examples. He has essentially come up with a system which he says “does inverse computer graphics”, where pixels are turned into objects and their poses. The initial results of the new system are incredibly exciting, and Wired did a nice write-up about it.

What is exciting to me is that Geoffrey is doing real science. He has come up with a theory about how the brain must do image processing and created a computer model to validate that theory, bringing a lot of old ideas back into the spotlight. Not only did he come up with a computer model, but one that could possibly blow away the existing probabilistic models by requiring orders of magnitude fewer examples for the same or better performance. This is the kind of science Chomsky says needs to happen, and I believe Mr Hinton has just shifted the debate.

Rejection Sampling with Perl 6

Perl 6 provides a way to create lazy lists using the gather/take keywords. What I wanted to do was create an infinite list of samples from a known distribution of values. A simple way to sample from a known distribution is Rejection Sampling, and doing this in Perl 6 is super easy.


sub sample(%distribution) {
  gather {
    loop {
      # Pick a random key/value Pair from the distribution.
      my $v = %distribution.pick;
      # Accept the key with probability equal to its weight,
      # otherwise reject it and try again.
      take $v.key if rand <= $v.value;
    }
  }
}

This function creates a Seq, and you grab values from it lazily. Here is a simple example assigning 100 samples from a distribution to an array.


my %distribution = a => 0.3, b => 0.4, c => 0.1, d => 0.2;
my @samples = sample(%distribution)[0..^100];
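
As a quick sanity check, you can tally a larger draw with a Bag and compare the observed frequencies against the weights. Here is a minimal sketch along those lines:


# Tally 10,000 samples and compare observed frequencies
# against the weights we sampled from.
my $n     = 10_000;
my $tally = sample(%distribution)[^$n].Bag;
for %distribution.keys.sort -> $k {
  say "$k: observed {$tally{$k} / $n}, expected {%distribution{$k}}";
}

For a large enough draw, each observed frequency should land close to its weight.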

Perl 6 is super fun. It has taken all the cool features from the other languages and ignored the bad stuff; in this case, lazy lists from Haskell.
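
To see the laziness at work: nothing inside the gather runs until a value is demanded, so pulling a single element from the infinite Seq computes exactly one accepted sample. A tiny sketch:


# Only one accepted sample is ever computed here, even though
# the Seq that sample returns is conceptually infinite.
my $stream = sample(%distribution);
say $stream[0];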

Read Seqs, Drugs, and Rock’n Roll to learn more about Sequences in Perl 6.

Total Recall

I learned to program computers before I had the internet. While technically I am a Millennial, my experience of learning to program was completely different from that of most younger Millennial programmers. Programming became an obsession for me as a child when I accidentally pressed the wrong button on my keyboard while playing Gorilla.BAS and saw screens of hieroglyphs cross my eyes. It was written in QBasic, and I didn’t know that Bill Gates wrote it with Neil Konzen. I didn’t even know who Bill Gates was.

I asked my best friend Dave what it was. He had gone to a computer summer camp that year, and I thought he might have some idea. He said “oh, that’s code”. We spent some time together and he showed me what a variable was, what an “if” statement was, and how to get input and print text. My first program was a choose-your-own-adventure story. I was hooked!

My only resource during this time was the library. My mother would take me and I would get stacks of books. My software development process looked like this.

  1. Make a guess about how something worked.
  2. Write some code.
  3. Run the code.
  4. If it failed, look up what the function does in the reference.
  5. Repeat.

The process of looking up how things worked in a reference was so tedious that I was forced to guess before expending that serious effort. Eventually I remembered how things worked and rarely had to search a reference to get something done.

This process of trying to remember and guessing is called Active Recall. It is an incredibly efficient way of developing long-term memory, which is why, as time went on, I rarely had to consult a reference. This was especially fortunate for me since my references were library books and I would have to return them! Not only did it improve long-term memory, it also forced me to be endlessly curious in solving my problems.

This is a completely different experience from what younger programmers have. With Google at your fingertips, there seems to be no reason to try and guess. With StackOverflow, the distance between having a problem and finding an answer is very short. Looking things up is not tedious anymore.

Their development process looks like this.

  1. Open up a browser tab and type in some text related to what you are doing.
  2. Click on the StackOverflow link and read if someone else is trying to accomplish the same thing.
  3. Look at the answers.
  4. Copy and paste the solution and change it so that your code compiles.
  5. Repeat.

This new way is called Passive Review, which, as you can guess, is not very efficient at cultivating long-term memory. There is little reason left to guess and develop that muscle. There is also very little reason to be creative and come up with a solution when you can simply search for one very quickly.

The June 2017 issue of the Communications of the ACM has a brilliant article called The Debugging Mind-Set, which links Active Recall with the cultivation of the Incremental Theory of intelligence and Passive Review with the Entity Theory.

People with the Entity Theory mindset believe intelligence and ability are fixed at birth: you either have it or you don’t. People with the Incremental Theory mindset believe intelligence and ability are not fixed and can improve with effort.

I squarely have the Incremental Theory mindset, and I believe it promotes growth, creativity, and above all what I like to call “The Fighting Spirit”, which I believe is central to the hacker mindset. I also believe the way I learned to program had a huge influence on my having this mindset to begin with. I had to constantly guess and stay curious, trying things before looking up solutions in tedious references and manuals.

The question I would love to ask you, dear reader, is this: is this new era of software development, where you no longer have to remember things but instead use a different set of skills to find, rather than develop, solutions to problems, promoting a generation of Entity Theorists? And has the hacker spirit died with it?