The Reality Making Machines

This short essay is for computer people like me, meaning people who make software. If you don’t make software, you are going to find this very boring. OK, now for the boring fun stuff.

It is impossible to talk about what is really out there (I am talking about what we call reality) without evoking our notions, or models, of what we believe is out there. It is meaningless to talk about what is real and what isn’t, because we cannot escape model making, whether it is the models our senses create and feed to the brain, or the models our brain innately constructs. So I will state here that there is no such thing as real and not real; everything is real in one sense or another. When you think about the letters you are reading here, they are real in the sense that your brain has a model of them. They are as real as a unicorn is real, or the American Revolution, or the color red.

It turns out that what is really out there has seemed mystical and incomprehensible ever since Isaac Newton defined the mystical force of gravity, which countered the mechanical view of the world. Scientists in the 20th century further buried the reactionary mechanical view of Descartes and Galileo with the even more mystical nature of quantum mechanics, where influence can appear between things without any mechanical contact. This “spooky action at a distance” just is: we can describe it and model it, but it is entirely counter-intuitive compared with the other experiences in our lives. Think about it: whenever you raise your hand, you move the moon! How spooky.

Computer people tend to have a mechanical view of the world but a ghostly view of software as a nonphysical thing which resides in the body of a machine that is strictly deterministic and mechanical. The way software is built today reflects this mechanical view, with the resurgence of functional programming, static languages, and even design strategies like Test Driven Development.

Philosophically, we tend to treat what happens in the computer as a fiction and what happens outside the machine as the real world. Computer people view their job as modeling the real world inside of software instead of the more exquisite role of using software to create new realities for the users.

What many programmers don’t seem to grasp is that when a person uses a program’s user interface, the save button is as real to them as the mouse button they are pressing or their notion of the words they are reading. The person has a model of what they expect the software to do, the same way they have a model of how the mouse button works or a model of what the words they read mean. Jean Piaget called this operational thinking, which starts to develop around seven years of age. If you have ever seen the real anger and stress that software provokes in your loved ones when it acts in ways they don’t expect, then you quickly realize that the software we make isn’t a joke.

Computers are reality making machines, and computer programmers are the gods of the little world they create for users to explore. Anyone who has played or created video games understands this reasonably well, yet it seems that software developers and video game developers have their own separate cultures.

Computers as reality making machines are getting especially interesting because of the AI revolution that is happening. At first, there was considerable excitement about Big Data, and significant investments were made in tools to process the petabytes of data created every day. Then came the excitement about AI and techniques like Deep Learning, which became feasible once there was enough processing power to train machines on the vast data that’s collected.

These machines are now capable of automatically creating new models from their file and network based senses. This means they can automatically create new realities and new experiences for users.

Eddington said in the early 20th century that everything is meter reading. We live our lives reading the meters of our fingers, lips, nose, ears, and mouth. Scientists have constructed large, expensive meters like the LHC. And computers are now reading meters in the form of files and network traffic.

It is about time that computer people bury the mechanical view of the world, just as physicists did at the beginning of the 20th century, over 100 years ago.

Computer people need to accept that software is reality construction, not reality modeling, and that they build dynamic systems, not static ones.

Maybe it’s time to throw out the bad ideas and go back to a more organic and mystical understanding, so we don’t forget the real purpose of these magnificent machines: the purpose, so well envisioned by Douglas Engelbart, of embracing and extending our human potential beyond anything we have ever dreamed possible.


My bet is on Chomsky

My bet is that Chomsky is right in the Chomsky vs Norvig debate, and Geoffrey Hinton has just added some amazing fuel to the fire.

If you don’t know what the Chomsky vs Norvig debate is, it started with Chomsky deriding AI researchers for relying heavily on opaque probabilistic models for speech and vision instead of doing “real” science to figure out how the brain actually works. Norvig provided a rebuttal in which he claimed that because the probabilistic models are so successful, their success “is evidence (but not proof) of a scientifically successful model.”

And if you have witnessed the huge success that Deep Learning has had in processing text and images, you might agree that Norvig is right, that there must be something deeply right about these probabilistic models. However, most researchers in the know will tell you that Deep Learning is highly problematic because it requires a huge amount of data to train a good system. I have believed that because the way these systems are trained is so different from how the brain learns, they simply cannot be evidence of a scientifically correct model. These Deep Learning systems are also easily fooled once trained.

Then comes Geoffrey Hinton, who is famous for helping ignite the Deep Learning revolution, with a new model he calls the Capsule model, which is based on how he thinks the brain processes images. His model uses unsupervised learning to extract low-level features into a linear manifold, and from there training with labels requires very few examples. He has essentially come up with a system which he says “does inverse computer graphics”, where pixels are turned into objects and their poses. The initial results of the new system are incredibly exciting, and Wired did a nice write-up about it.

What is exciting to me is that Geoffrey is doing real science. He has come up with a theory about how the brain must do image processing and created a computer model to validate that theory. He brings a lot of old ideas back into the spotlight. Not only did he come up with a computer model, but one that could possibly blow away any of the existing probabilistic models by requiring orders of magnitude fewer examples for the same or better performance. This is the kind of science Chomsky says needs to happen, and I believe Mr Hinton has just shifted the debate.

Rejection Sampling with Perl 6

Perl 6 provides a way to create lazy lists using the gather/take keywords. What I wanted to do was create an infinite list of samples from a known distribution of values. A simple way to sample from a known distribution is Rejection Sampling: pick a candidate uniformly at random and accept it with probability equal to its weight, which makes each key come out in proportion to its probability (as long as the weights are at most 1). Doing this in Perl 6 is super easy.


# Returns a lazy, infinite Seq of samples from %distribution,
# where each key's value is its probability.
sub sample(%distribution) {
  gather {
    loop {
      my $v = %distribution.pick;        # pick a candidate Pair uniformly at random
      take $v.key if rand <= $v.value;   # accept it with probability equal to its weight
    }
  }
}

This function returns a Seq, and you grab values from it in a lazy way. Here is a simple example assigning 100 samples from a distribution to an array.


my %distribution = a => 0.3, b => 0.4, c => 0.1, d => 0.2;
my @samples = sample(%distribution)[0..^100];
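
If you want to convince yourself the sampler behaves, you can count how often each key shows up; with enough samples the counts should be roughly proportional to the weights. Here is a quick sketch (the 10,000 sample count is an arbitrary choice):


my @many = sample(%distribution)[0..^10_000];
say @many.Bag;   # counts for a, b, c, d should be roughly proportional to 0.3 : 0.4 : 0.1 : 0.2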

Perl 6 is super fun. It has taken all the cool features from all the other languages and ignored the bad stuff. In this case, lazy lists from Haskell.
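
A tiny sketch of that laziness, separate from the sampler above: the sequence operator builds an infinite Fibonacci Seq and only computes the elements you actually ask for.


say (0, 1, * + * ... *)[^10];   # (0 1 1 2 3 5 8 13 21 34), taken from an infinite lazy Seq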

Read Seqs, Drugs, and Rock’n Roll to learn more about Sequences in Perl 6.