These days AI has become a common term; almost every other tech person uses it quite often. Perhaps some have cheapened it: anybody who manages some data and manipulates it with an algorithm at multiple levels claims that it's AI. Unfortunately there are many such cases in today's startup ecosystem around the world, and I'm astonished at how some founders convince their investors that what they are doing is "AI" by throwing around high-tech jargon. AI is probably just an attention-grabbing term they use for marketing purposes. And believe me, AI/ML/DL are not that easy.

I would like to frame this article in the context of general applications, not the specifics of robotics (the real machine learning), which I/we are personally working on while creating the architecture of the next generation of robots.

Earlier views and statements - Artificial intelligence is the future. AI is science fiction. AI is already part of our everyday lives.

Current views and statements - AI-based wearables, eCommerce, ads, and home automation are all around us. For another example, when Google DeepMind’s AlphaGo program defeated South Korean master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were all used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-dol.

All those statements are true; it just depends on what "context" you are referring to. But AI, machine learning, and deep learning are not the same thing.

Simply put, deep learning is a subset of machine learning, and machine learning is a subset of artificial intelligence.

Over the past few years AI has reached new avenues, especially since 2015. The wide availability of parallel processing units (multi-core CPUs, GPUs) makes it possible to handle data processing faster, cheaper, and at greater scale (especially in the context of the so-called Big Data movement): images, text, transactions, mapping data.

Artificial Intelligence — Human-"like" Intelligence Exhibited by Machines


Since the early 1950s, researchers (the so-called AI pioneers) have been trying to build complex machines that could imitate (at least human-like) capabilities with the computers of their times. People generally picture AI-based machines (robots) from the movies: as friend, C-3PO; as foe, the Terminator.

What's happening right now is, let's say, “Contextual AI”: capabilities that can perform specific tasks as well as, or better than, we humans can. Examples: image classification on a service like Pinterest and face recognition on Facebook.

These capabilities exhibit some facets of human(-like) intelligence. To understand how the conclusive results are achieved, we must go a layer deeper, i.e. machine learning (ML).

Machine Learning —  An Approach to Achieve Artificial Intelligence




Spam-free: machine learning helps keep your inbox (relatively) free of spam

Machine learning is, simply, parsing data with algorithms in order to learn from it, draw conclusions, and make predictions about something in the world. In other words, rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.
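As a toy illustration of "training from data" rather than hand-coding rules, here is a minimal sketch of a nearest-centroid classifier. All feature names and numbers below are invented for illustration; a real system would use far richer features and models.

```python
# Minimal sketch: the machine is "trained" from labeled examples
# rather than following hand-written rules. Data is invented.

def train(examples):
    """Learn the average feature values (a centroid) for each class."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            s[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(model, features):
    """Pick the class whose learned centroid is closest to the input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical email features: (number of links, ALL-CAPS word count).
training_data = [
    ((9, 7), "spam"), ((8, 5), "spam"),
    ((1, 0), "ham"),  ((0, 1), "ham"),
]
model = train(training_data)
verdict = predict(model, (7, 6))  # a link-heavy, shouty email
```

The point is that nothing in `predict` encodes what spam looks like; the behavior comes entirely from the examples fed to `train`.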

ML was achieved only quasi-successfully in its early years: programming approaches such as decision tree learning and inductive logic programming came nowhere close to the success of today's "contextual" applications.

It worked like this: people would go in and write hand-coded classifiers, such as edge detection filters so the program could identify where an object started and stopped, shape detection to determine whether it had eight sides, and a classifier to recognize the letters “S-T-O-P.” From all those hand-coded classifiers they would develop algorithms to make sense of the image and “learn” to determine whether it was a stop sign.
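To give a taste of that hand-coded approach, here is a trivial edge detector written by a human rather than learned from data. The 5x5 "image" is invented, and a real filter (e.g. Sobel) would be more elaborate, but the principle is the same: a person, not the data, decides what counts as an edge.

```python
# Hand-coded classifier sketch: mark where brightness jumps between
# one row of pixels and the next. The tiny image below is made up.

image = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [9, 9, 9, 9, 9],  # a bright band: strong horizontal edges at its borders
    [9, 9, 9, 9, 9],
    [0, 0, 0, 0, 0],
]

def horizontal_edges(img):
    """Return the absolute brightness change between adjacent rows."""
    return [
        [abs(img[r + 1][c] - img[r][c]) for c in range(len(img[0]))]
        for r in range(len(img) - 1)
    ]

result = horizontal_edges(image)
```

The rule here is fixed forever; it cannot adapt when fog, shadows, or occlusion change what an "edge" looks like, which is exactly the brittleness discussed next.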

Good, but what happens on a foggy day when the sign isn’t perfectly visible, or when a tree obscures part of it? Likewise, a vision-based robot navigation system will struggle in dim light. There’s a reason computer vision and image detection didn’t come close to rivaling humans until very recently: they were too brittle and too prone to error. Time, and the right learning algorithms, made all the difference.

Deep Learning — A Technique for Implementing Machine Learning


Another algorithmic approach, long championed by both machine learning researchers and the early AI pioneers, is the NEURAL NETWORK: software units loosely modeled on the biological brain, where any neuron can connect to any other neuron. #Discrete Layers #Connections #Data Propagation

Example, if you want to dig a bit deeper: take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. The individual neurons in the first layer do their work and pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer produces the final output.

Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and “examined” by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network’s task is to conclude whether this is a stop sign or not. It comes up with a “probability vector,” really a highly educated guess, based on the weightings. In our example the system might be 86% confident the image is a stop sign, 7% confident it’s a speed limit sign, 5% confident it’s a kite stuck in a tree, and so on — and the network architecture then tells the neural network whether it is right or not.
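A common way to turn the network's raw per-class scores into a probability vector like the one described above is the softmax function. The scores and labels below are made up to mirror the stop sign example, not taken from a real model.

```python
# Turning raw per-class scores (accumulated weighted evidence) into a
# "probability vector": confidences that are positive and sum to 1.
import math

def softmax(scores):
    """Convert raw scores into a probability vector."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["stop sign", "speed limit sign", "kite in a tree"]
probs = softmax([4.0, 1.5, 1.2])            # invented scores
best = labels[probs.index(max(probs))]      # the network's educated guess
```

Whatever the raw scores are, the resulting vector always sums to 1, which is what lets us read the entries as confidences.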

The problem with ANNs (artificial neural networks) was that even the most basic ones were very computationally intensive; it just wasn’t a practical approach. Still, a small, heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn’t until multi-core hardware was deployed in the effort that the promise was realized.

If we go back again to our stop sign example, chances are very good that as the network is getting tuned or “trained,” it’s coming up with wrong answers, a lot of them. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time, fog or no fog, sun or rain. It’s at that point that the neural network has taught itself what a stop sign looks like; or your mother’s face, in the case of Facebook; or a cat, which is what Andrew Ng did at Google in 2012.
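That tuning loop can be shown in miniature. The toy below has a single weight and invented data (targets follow y = 2x), so it is nothing like a real image network, but the mechanism is the same: start wrong, measure the error on each example, and nudge the weight in the direction that reduces it, over many passes.

```python
# Training in miniature: repeatedly nudge a weight to reduce the error
# until the model gets the answer right practically every time.
# Single-weight toy model on invented data, not a real network.

data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # inputs x with targets y = 2x
w = 0.0                                       # start wrong, like an untrained net
lr = 0.1                                      # learning rate: size of each nudge

for _ in range(200):                          # many passes over the examples
    for x, y in data:
        error = w * x - y                     # how wrong is the current guess?
        w -= lr * error * x                   # nudge the weight to reduce it

# After training, w has settled very close to the true value 2.0.
```

Real deep networks do this with millions of weights and images instead of one weight and three numbers, which is why the compute demands discussed earlier were such a barrier.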

Ng’s breakthrough was to take these neural networks and make them huge, increasing the layers and the neurons, and then run massive amounts of data through the system to train it. In Ng’s case it was images from 10 million YouTube videos. Ng put the “deep” in deep learning, which describes all the layers in these neural networks.

Today, image recognition by machines trained via deep learning is in some scenarios better than that of humans, ranging from recognizing cats to identifying indicators of cancer in blood and tumors in MRI scans. Google’s AlphaGo learned the game and trained for its Go match — it tuned its neural network — by playing against itself over and over and over.

Deep Learning — Leading the Realization of "AI"

Deep learning has enabled many practical applications of machine learning and, by extension, the overall field of AI. Deep learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely.

Robotics, driverless cars, preventive healthcare, and movie recommendations are all here today or on the horizon. This beautiful amalgamation of deep learning into AI is building our future, and in fact we will be bringing some life-changing robots soon!