Bar Charts With Julia

I was writing a new article for Merit and wanted to make some charts and graphs. I used to use GNU Octave for my data analysis needs, but I recently saw a tweet from Grady Booch about trying Julia, which has a MATLAB-esque syntax. The resemblance to MATLAB is only superficial, though, because Julia is type safe and uses LLVM to compile your code, which gets you some pretty nice performance.
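You can see this for yourself from the REPL; here is a quick, minimal check (the function f is just a throwaway example of mine):

using InteractiveUtils  # provides @code_llvm; loaded automatically in the REPL

f(x) = 2x + 1
@code_llvm f(3)   # prints the LLVM IR Julia generates for f(::Int)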

Julia has many nice graphing libraries, but I couldn’t find any good tutorials on how to generate a bar chart with Julia. I decided to use Gadfly for charts because it seemed to have good documentation and nice looking charts. I ended up making a bar chart that looks like this.

[Figure: the finished bar chart, “Won More Than 100 MRT” (100MRT.png)]

I think the end result is really nice, and it was easy to optimize it to look good on a mobile device. The code for the chart above is:

using Gadfly
using DataFrames
using Colors
using CSV

Gadfly.push_theme(:dark)

function merit_colors(n)
    cs = distinguishable_colors(
        n,
        [colorant"#eca25c", colorant"#00a3cd"], # seed colors
        lchoices=Float64[58, 45, 72.5, 90],     # lightness choices
        transform=c -> deuteranopic(c, 0.1),    # color transform
        cchoices=Float64[20, 40],               # chroma choices
        hchoices=[75, 51, 35, 120, 180, 210, 270, 310]) # hue choices
    convert(Vector{Color}, cs)
end

d100 = DataFrame(
    Winners = [57, 193],
    Maxes = [250, 250],
    Ls = ["57", "193"],
    Algorithm=["PoG1","PoG2"])

p100 = plot(
    d100,
    x=:Algorithm,
    y=:Winners,
    ymax=:Maxes,
    color=:Algorithm,
    style(bar_spacing=1cm),
    label=:Ls,
    Geom.bar(position=:dodge),
    Geom.label(position=:above),
    Scale.color_discrete(merit_colors),
    Guide.xlabel(nothing),
    Scale.y_continuous(format=:plain),
    Guide.title("Won More Than 100 MRT"),
    Guide.xticks(ticks=nothing))

draw(PNG("100MRT.png",4inch, 3inch, dpi=300), p100)
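For the mobile-friendly version I mentioned, one easy option (my own variant, not part of the original chart code) is to render to SVG instead of PNG, since vector output stays sharp at any screen size:

draw(SVG("100MRT.svg", 4inch, 3inch), p100)   # no dpi needed; SVG scales cleanly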

I’m not going to add too much commentary to the code. The interesting bit is the merit_colors function, which selects the Merit blue as the first color. I used another column called Ls to control the labels of the bars in the chart, because using the Winners column directly didn’t work due to type conversion errors. The other tricky thing to learn was how to modify the spacing between the bars, which ended up being a property of the theme via the bar_spacing parameter.
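If you’d rather not keep the Ls column by hand, one workaround (my own sketch, assuming a reasonably recent DataFrames.jl) is to derive the label strings from the Winners column, since Geom.label wants text rather than numbers:

d100.Ls = string.(d100.Winners)   # broadcasts string over [57, 193], giving ["57", "193"]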

Other than figuring out the colors, labels, and bar_spacing, it was pretty straightforward. I hope this helps others trying to generate bar charts using Julia.

The Reality Making Machines

This short essay is for computer people like me, meaning people who make software. If you don’t make software, you are going to find this very boring. OK, now for the boring fun stuff.

It is impossible to talk about what is really out there (I am talking about what we call reality) without evoking our notions, or models, of what we believe is out there. It is meaningless to talk about what is real and what isn’t, because we cannot escape model making, whether it is the models our senses create and feed to the brain or the models our brain innately constructs. So I will state here that there is no such thing as real and not real; everything is real in one sense or another. The letters you are reading here are real in the sense that your brain has a model of them. They are as real as a unicorn is real, or the American Revolution, or the color red.

It turns out that what is really out there has seemed mystical and incomprehensible ever since Isaac Newton defined the mystical force of gravity, which countered the mechanical view of the world. Scientists in the 20th century further buried the reactionary mechanical view of Descartes and Galileo with the even more mystical nature of quantum mechanics, where influence can appear between things without any mechanical contact. This “spooky action at a distance” just is; we can describe it and model it, but it is entirely counter-intuitive compared with the other experiences in our lives. Think of it: whenever you raise your hand, you move the moon! How spooky.

Computer people tend to have a mechanical view of the world but a ghostly view of software, as a nonphysical thing which resides in the body of a machine that is strictly deterministic and mechanical. The way software is built today reflects this mechanical view, with the resurgence of functional programming, static languages, and even design strategies like Test Driven Development.

Philosophically, we tend to separate what happens in the computer as fiction and what happens outside the machine as the real world. Computer people view their job as modeling the real world inside of software, instead of the more exquisite role of using software to create new realities for their users.

What many programmers don’t seem to grasp is that when a person uses a program’s user interface, the save button is as real to them as the mouse button they are pressing or their notion of the words they are reading. The person has a model of what they expect the software to do, the same way they have a model of how the mouse button works or of what the words they read mean. Jean Piaget called this operational thinking, which starts to develop around seven years of age. If you have ever seen the real anger and stress that software provokes in your loved ones when it acts in ways they don’t expect, then you quickly realize that the software we make isn’t a joke.

Computers are reality making machines, and computer programmers are the gods of the little worlds they create for users to explore. Anyone who has played or created video games understands this reasonably well, yet software developers and video game developers seem to have their own separate cultures.

Computers as reality making machines are getting especially interesting because of the AI revolution that is happening. At first, there was considerable excitement about Big Data, and significant investments were made in tools to process the petabytes of data created every day. Then came the excitement about AI and techniques like Deep Learning, which became feasible once there was enough processing power to train machines on the vast data that’s collected.

These machines are now capable of automatically creating new models which come from their file and network-based senses. This means they are automatically able to create new realities and new experiences for users.

Eddington in the early 20th century said everything is meter reading. We live our lives reading the meters from our fingers, lips, nose, ears, and mouth. Scientists have constructed large expensive meters like the LHC. And computers are now reading meters in the form of files and network traffic.

It is about time that computer people bury the mechanical view of the world, just as the physicists did at the beginning of the 20th century, over 100 years ago.

Computer people need to accept that software is reality construction, not reality modeling, and that they build dynamic systems, not static ones.

Maybe it’s time to throw out the bad ideas and go back to a more organic and mystical understanding, so we don’t forget the real purpose of these magnificent machines, a purpose so well envisioned by Douglas Engelbart: to embrace and extend our human potential beyond anything we have ever dreamed possible.

My bet is on Chomsky

My bet is that Chomsky is right in the Chomsky vs Norvig debate, and Geoffrey Hinton has just added some amazing fuel to the fire.

If you don’t know what the Chomsky vs Norvig debate is, it started with Chomsky deriding AI researchers for relying heavily on opaque probabilistic models for speech and vision instead of doing “real” science to figure out how the brain actually works. Norvig provided a rebuttal in which he claimed that because the probabilistic models are so successful, their success “is evidence (but not proof) of a scientifically successful model.”

And if you have witnessed the huge success that Deep Learning has had in processing text and images, you might agree that Norvig is right, and that there must be something deeply right about these probabilistic models. However, most researchers in the know will tell you that Deep Learning is highly problematic because it requires a huge amount of data to train a good system. I have believed that because the way these systems are trained is so different from how the brain learns, they simply cannot be evidence of a scientifically correct model. These Deep Learning systems are also easily fooled once trained.

Then comes Geoffrey Hinton, who is famous for helping ignite the Deep Learning revolution, with a new model he calls the capsule model, which is based on how he thinks the brain processes images. His model uses unsupervised learning to extract low-level features into a linear manifold, and from there, training with labels requires very few examples. He has essentially come up with a system which he says “does inverse computer graphics”, where pixels are turned into objects and their poses. The initial results of the new system are incredibly exciting, and Wired did a nice write-up about it.
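To make the pose idea concrete, here is a minimal Julia sketch (my own illustration, not code from Hinton) of the “squash” nonlinearity from the capsules paper. A capsule outputs a vector whose direction encodes an entity’s pose and whose length encodes the probability that the entity is present; squash keeps the direction but maps the length into [0, 1):

using LinearAlgebra

# squash(s) = (|s|^2 / (1 + |s|^2)) * (s / |s|)
# Short vectors shrink toward zero; long vectors approach unit length.
squash(s) = (norm(s)^2 / (1 + norm(s)^2)) * normalize(s)

v = squash([3.0, 4.0])   # same direction as the input, length ≈ 0.96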

What is exciting to me is that Geoffrey is doing real science. He has come up with a theory about how the brain must do image processing and created a computer model to validate that theory, bringing a lot of old ideas back into the spotlight. Not only did he come up with a computer model, but one that could possibly blow away any of the existing probabilistic models by requiring orders of magnitude fewer examples for the same or better performance. This is the kind of science Chomsky says needs to happen, and I believe Mr. Hinton has just shifted the debate.