Artificial intelligence (AI) and machine learning might have felt like science-fiction terms a decade ago, but they are increasingly common in today’s language. In fact, many systems today use machine learning, and, while the terms may sound similar, machine learning is different from artificial intelligence.
When you think of artificial intelligence, you might naturally think of robots from science fiction like the Terminator, R2-D2 from Star Wars, or even Futurama’s Bender, but sentient artificial intelligence like this does not exist yet. The idea of artificial intelligence has been around since the 1950s, not just in science fiction but in the minds of real computer scientists. Until recently, though, it had been little more than an idea out of reach.
However, recent breakthroughs in computer processing power have turned artificial intelligence into something with real-world applications. Still, it is nothing like R2-D2, which is why all artificial intelligence in use today is considered weak AI, or narrow AI: artificial intelligence that is non-sentient and built for a specific task. Apple’s Siri is a good example of this. While she may be great at scheduling an appointment, don’t expect her to carry on a real conversation with you.
Machine learning is a newer idea than artificial intelligence. It took shape in the late 1950s, when computer scientists decided it would be far easier to program a machine that learns than to program a machine that already knows everything. That is what machine learning is: the ability of a machine to learn something by itself. Any real, sentient artificial intelligence can be expected to make use of machine learning. Today, most weak AIs use machine learning to conduct their specific tasks accurately. Siri uses machine learning to learn your voice, as well as other things about you, as you use her, and Pandora uses machine learning to pick songs similar to the ones you’ve liked.
When is Machine Learning Useful?
Machine learning can be useful wherever there are large sets of data from which information can be derived. Google Image search is a good example of this. Google crawls websites across the internet and indexes them so users can find them in Google’s search engine. Data pulled from these websites includes images, which are also entered into Google’s image database so they too can be searched.
If a user would like to see all the images of cats that Google’s image database has amassed, they simply search “cat” and are shown tons of cat images. However, consider this from Google’s perspective. It would take countless days, if not years, for someone to sift through all of Google’s images and identify which images are cats and need to be shown for a “cat” query.
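Conceptually, once images have been labeled, answering a “cat” query is just a lookup in a label-to-image index. Here is a minimal sketch in Python; the URLs and labels are made up for illustration and are nothing like Google’s actual system:

```python
# Build a label-to-image index from pre-labeled images (hypothetical data).
from collections import defaultdict

def build_index(labeled_images):
    """Map each label to the list of image URLs carrying that label."""
    index = defaultdict(list)
    for url, labels in labeled_images:
        for label in labels:
            index[label].append(url)
    return index

labeled = [
    ("img/1.jpg", {"cat", "animal"}),
    ("img/2.jpg", {"dog", "animal"}),
    ("img/3.jpg", {"cat"}),
]
index = build_index(labeled)
print(sorted(index["cat"]))  # every image that was labeled "cat"
```

The expensive part is not the lookup, which is instant, but producing the labels in the first place, which is exactly where machine learning comes in.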
Google solved this problem with machine learning. Its software is shown a picture of a cat and then finds similar pictures and labels them accordingly. Humans then check which pictures the software has labeled correctly and which it has labeled incorrectly. Continual iterations of this process refine the software until it is eventually accurate at recognizing pictures of cats. The same is done for other kinds of images: signs, buildings, stairs, pretty much anything that can be labeled gets categorized.
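The label-check-refine loop can be sketched with a deliberately tiny stand-in: a one-number threshold classifier over a made-up 1-D image feature. This is nothing like Google’s actual models; it only shows the shape of the iteration, where human corrections feed back into retraining:

```python
# Toy sketch of the label-and-refine loop: predict, let humans correct
# the mistakes, retrain on the growing labeled set. All data is
# hypothetical, for illustration only.

def train(examples):
    """Fit a threshold halfway between the two class means."""
    cats = [x for x, y in examples if y == "cat"]
    dogs = [x for x, y in examples if y == "dog"]
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def predict(threshold, x):
    return "cat" if x < threshold else "dog"

# Ground-truth labels that only the human checkers know.
truth = [(0.1, "cat"), (0.3, "cat"), (0.4, "cat"),
         (0.6, "dog"), (0.8, "dog"), (0.9, "dog")]

# Start from two seed examples, then iterate.
labeled = [(0.1, "cat"), (0.6, "dog")]
for _ in range(3):
    threshold = train(labeled)
    mistakes = [(x, y) for x, y in truth if predict(threshold, x) != y]
    labeled += mistakes  # humans supply the correct labels

threshold = train(labeled)
accuracy = sum(predict(threshold, x) == y for x, y in truth) / len(truth)
print(accuracy)  # the refined classifier labels every example correctly
```

The first pass misclassifies one image; once humans correct it and the model retrains, the threshold shifts and every example is labeled correctly. Real systems repeat this at vastly larger scale, but the feedback loop is the same.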
Of course, the system isn’t perfect, and its mistakes show from time to time. It may not be able to distinguish a stop sign from a yield sign, and it might return a grab bag of pictures when queried for something abstract like excitement, sadness, or love.
Today’s weak AI, while miles ahead of what it was years ago, is nothing like what we see in science fiction. But through machine learning, it is moving one step closer in that direction.