Introducing Bright Wire
Lately I've caught the machine learning bug. Symptoms may include: talking about machine learning with anybody who will listen, ignoring bored expressions on people's faces while continuing to talk about machine learning, creating weird diagrams on napkins at social events - and, also, creating open source machine learning libraries on GitHub.
The first time you see machine learning working it feels like magic. The previously thoroughly stupid computer that can't ever do anything right (and that needs to be laboriously told what to do via tedious procedural instruction) suddenly gets a voice of its own and comes up with results that are not only a little bit interesting, but genuinely exciting!
Of course, after a while the feeling of magic starts to diminish. The computer still makes stupid mistakes for seemingly no reason. But the lingering sensation that magic might still reappear at any moment keeps things interesting.
Before I started learning about machine learning I was interested in natural language processing (actually I still am). One of the many hard tasks in NLP is assigning part of speech tags to words (this word is a noun, this word is a verb, etc.). People have been working on this problem for decades, and while the problem is kind-of-sort-of solved, it's still a complicated process.
Or, at least it was before machine learning. Now it's just a matter of training a machine learning algorithm (of which there are many flavours) on what we know are the correct answers. There's no longer any need to invent fantastically complicated algorithms and data structures, or do any other time consuming computer science stuff. We know what we want and the computer figures out for itself the best way to deliver. Yay! That means we can do more important things, like go to the beach while the computer trains and trains itself smarter. Paradise is no longer lost.
Anyway, there didn't seem to be any other machine learning libraries that did what I wanted. Specifically, one that a) runs on the GPU (to train large data sets), and b) runs on .NET (why? Because C# rocks). So I built one myself. It's open source and on GitHub. I've used it to train some large models and have already seen some excellent results.
One definition is that machine learning is "the automation of automation".
At a simplistic level, traditional programming is about telling the computer what to do with inputs into a system to produce some desired output.
Machine learning turns traditional programming upside down. Instead of telling the computer exactly what it should do, we instead show it examples of what we would like done. Then it's up to the computer itself to decide the best way to do it.
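To make the contrast concrete, here is a minimal sketch in generic Python (not Bright Wire's API): instead of hand-coding a rule for the logical AND function, a single perceptron is shown four labelled examples and works out the rule for itself.

```python
# Examples of what we want done: inputs paired with the desired output (logical AND).
examples = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs plus a bias, thresholded at zero.
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

# Training: nudge the weights whenever a prediction disagrees with an example.
for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

No line of this code says "output 1 only when both inputs are 1" - that behaviour emerges from the examples. Real problems need bigger models and far more data, but the division of labour is the same: we supply examples, the computer finds the rule.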
Why is this an improvement? Because it turns out to be very difficult to tell computers how to do complicated tasks. Such tasks are so intricate that traditional software simply can't describe everything that needs to happen to produce each desired output.
But it turns out that computers can in fact manage to work out highly complex processes themselves (even if we might not really understand how they do it). So when we want computers to make diagnoses from medical scans or translate text between multiple languages, machine learning is the only good tool that we have.
It's not just about Hard Stuff
If machine learning can do the hard stuff, it can also do the easy stuff. For any non-trivial problem that you need to solve, machine learning might be able to figure out the best way to do it - without you needing to write any software yourself. Not writing software means that you can go to the beach while the computer works out the tricky parts of your process. By the time you get back, the solution might already be waiting for you.