Machine Learning: A Brief High-Level Overview

Brett P Davis
7 min read · Jul 6, 2020
Image Credit: https://pixabay.com/illustrations/robotics-education-robotics-institute-4381582/

“[Machine Learning is the] field of study that gives computers the ability to learn without being explicitly programmed.” — Arthur Samuel, 1959

Machine learning is a rapidly growing field of technology, with applications ranging from the basic spam filter on your email, to personal assistants such as Siri or Alexa, all the way up to self-driving cars. It is a major subset of artificial intelligence focused on teaching machines how to learn, allowing them to gain experience and improve over time. This application of artificial intelligence allows an AI to perform actions that previously could only be performed by humans, by making sense of the messy, tangled information that is the real world.

So what is AI? Simply put, artificial intelligence is any technology or program that does something smart. An AI is a program within a system that takes information from its surrounding world and uses that information to increase the odds of success in whatever action it is designed to perform. However, the term is also commonly used to describe machines that can perform “human” actions such as learning and cognitive thinking, or even simply the study of such programs. Artificial intelligences are typically split into two categories: narrow and general. Narrow AI is the kind we already see today, with a specific function it has been taught, but not explicitly programmed, to do, such as a personal assistant on a phone. General AI is at this point only theoretical, and would have the same adaptable intelligence as a human, capable of learning and performing any task a human could. When we will be capable of developing general AI is hotly debated, with predictions ranging from as early as 2040–2050 to as late as centuries from now.

Now that we know what artificial intelligence is and how machine learning factors into it, what is machine learning? As Chris Meserole puts it, “The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic”. When we drive somewhere, we don’t memorize every route’s exact travel time, as knowing the exact travel time isn’t possible. Instead, we estimate which route is likely to get us there fastest, based on factors such as the distance, the speed we are allowed to travel, and what we think the traffic will be like. In the same way, machine learning is all about having a program create an algorithm that predicts the most likely path to success based on the information available, instead of having a human provide a large number of vague rules to follow. Whether you’re using an advanced algorithm such as Google’s deep neural networks, or a simpler one that makes basic predictions from input data, machine learning boils down to estimating probabilities by breaking the problem and its information down into small, bite-sized pieces.
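
To make this concrete, here is a minimal sketch of the idea in Python with scikit-learn, using trip data invented purely for illustration: instead of hand-coding rules about routes, the model estimates the probability that the highway route is faster from past trips.

```python
# A sketch of "probability over rules": rather than writing rules for
# which route is faster, let a model estimate it from past trips.
# All numbers here are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Each past trip: [distance_km, speed_limit_kmh, traffic_level 0-2]
past_trips = [
    [5.0, 50, 2], [5.0, 50, 0], [8.0, 80, 0],
    [8.0, 80, 2], [6.5, 60, 1], [6.5, 60, 0],
]
# Label: 1 if the highway route turned out faster on that trip, else 0
highway_was_faster = [0, 1, 1, 0, 1, 1]

model = LogisticRegression().fit(past_trips, highway_was_faster)

# The output is not a yes/no rule but an estimated probability
print(model.predict_proba([[7.0, 70, 1]])[0][1])  # P(highway is faster)
```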

When talking about machine learning, the approaches used are typically split into a few categories: supervised, unsupervised, semi-supervised, and reinforcement. As the name implies, the supervised approach involves having a person directly attend to the program as it learns. Large numbers of labeled examples are fed into the system, and it slowly begins to recognize more and more of the data until, finally, it can reliably distinguish between the categories it is given. Because of this, supervised learning sees its most effective use when the task requires sorting data into two categories, making a selection between multiple types of answers, or making predictions based on its data or the predictions of other machine learning programs.
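
As a rough illustration, the sketch below (assuming Python with scikit-learn installed) trains a simple classifier on scikit-learn's built-in, pre-labeled iris dataset, then checks how reliably it distinguishes flowers it was never shown.

```python
# Supervised learning in miniature: the model sees labeled examples
# and learns to distinguish the categories on unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # flower measurements + species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier().fit(X_train, y_train)  # learn from the labels
print(clf.score(X_test, y_test))  # accuracy on flowers it never saw
```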

Unsupervised learning, by contrast, has a somewhat different goal. Since no labels are provided for it to associate with the data, it cannot apply labels to the data it identifies. There is no correct answer here, and as such, the program has more freedom to find relevant structure in the data and show how it is connected. What ends up happening is that the program groups data by the similarities it finds, much as Airbnb groups houses available to rent by their neighborhood. Because unsupervised learning is good at spotting similarities and abnormalities in data, it excels at clustering data by those similarities, detecting anomalies, and other tasks that revolve around similarity.
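
A minimal unsupervised sketch, again in scikit-learn and with coordinates made up for the example: no labels are provided, yet the algorithm still groups the listings into two “neighborhoods” on its own.

```python
# Unsupervised learning in miniature: no labels, just grouping points
# by similarity. The (latitude, longitude) pairs are invented.
import numpy as np
from sklearn.cluster import KMeans

listings = np.array([
    [40.71, -74.00], [40.72, -74.01], [40.70, -73.99],     # one area
    [34.05, -118.24], [34.06, -118.25], [34.04, -118.23],  # another
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(listings)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1] -- two "neighborhoods" found
```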

In addition to these two approaches, there is also a compromise between them known as semi-supervised learning. Where supervised learning is provided a large amount of labeled data (such as the ~9 million images used by Google’s Open Images) and unsupervised learning is provided a large amount of unlabeled data to group by similarity, semi-supervised programs are provided a small amount of labeled data and a large amount of unlabeled data. This serves to partially train the algorithm on the labels before allowing it to label the unlabeled data itself, which gives it a starting point for its understanding while allowing it to explore the data on its own and develop that understanding further. Current limitations make this less effective than supervised learning for labeling data, but if the approach progresses to a similar level of effectiveness, it would severely cut down on the need for massive databases of labeled data. In its current state, semi-supervised learning creates algorithms that excel at labeling a large pool of data based on a small set of identified examples, in applications ranging from something as mundane as translating languages with a less-than-complete dictionary up to fraud detection.
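
One way to sketch the approach, assuming scikit-learn is available, is with its SelfTrainingClassifier: we pretend that only the first 50 digits in a dataset carry labels (the rest are marked -1), and the model labels the remainder itself during training.

```python
# Semi-supervised learning in miniature: a handful of labeled digits
# plus a large pool of unlabeled ones (marked with -1).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
y_partial = np.copy(y)
y_partial[50:] = -1  # pretend only the first 50 digits are labeled

base = SVC(probability=True, gamma="scale")  # needs probability outputs
model = SelfTrainingClassifier(base).fit(X, y_partial)

# During training the model assigned labels to the unlabeled pool itself
print(model.score(X, y))  # accuracy measured against the true labels
```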

Finally, there is reinforcement learning. It differs heavily from the other three: instead of being given a large amount of data to sift through in search of similarities, a reinforcement learning program is rooted in trial and error, repeatedly attempting actions. Throughout the process, either a supervising data scientist provides positive and/or negative cues, or the program itself assigns rewards or punishments based on the outcomes of its actions. Either way, the steps taken are largely up to the program so long as they are in line with the chosen goal. This approach sees the widest use in multi-step tasks with clear rules, with real-world examples ranging anywhere from video games to robotics. One such example is AlphaGo, the AI designed to play Go, which in 2015 became the first program to defeat a professional human Go player.
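
A toy illustration of the trial-and-error idea, in plain Python with everything invented for the example: a tabular Q-learning agent in a five-state corridor is never told the route to the goal, only rewarded for reaching it, and the correct behavior emerges from repeated attempts.

```python
# Reinforcement learning in miniature: tabular Q-learning on a tiny
# corridor of 5 states. The agent learns by reward, not by rules.
import random

N_STATES, GOAL = 5, 4            # states 0..4, goal at the right end
ACTIONS = [-1, +1]               # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly act greedily, but sometimes explore at random
        a = random.randrange(2) if random.random() < epsilon \
            else max((0, 1), key=lambda i: Q[state][i])
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0   # the only feedback given
        # Q-learning update toward reward + discounted future value
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
# -> [1, 1, 1, 1]: the learned policy is "always step right"
```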

There is one final approach that, while not a separate learning category like the four above, is often singled out as its own subset of machine learning. Deep learning relies on neural networks, a type of algorithm inspired by the way human brains work. The main difference in performance between standard machine learning models and deep learning boils down to the way data is piped into the system. With standard machine learning models, data needs to be structured in some way for them to understand and interpret it, and if they are still producing inaccurate results after they have learned, a person needs to step in and “teach” them by correcting the errors. With deep learning models, on the other hand, the system is able to use its artificial neural network to detect when its results are inaccurate and correct itself. This does not mean deep learning cannot be wrong, however: even with the model correcting its own mistakes, inaccuracies in the data can, as always, lead to inaccurate results. Deep learning has revolutionized the field of machine learning, producing the most human-like AI currently in existence, such as Google Duplex, one of the very few AI systems with a claim to having passed the Turing test.
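
As a rough sketch of a neural network (a shallow one rather than truly “deep,” and using scikit-learn instead of a dedicated deep learning library), the model below adjusts its own internal weights through backpropagation to reduce its errors, rather than waiting for a person to correct them.

```python
# A small multi-layer perceptron: layers of artificial "neurons" whose
# weights are tuned automatically via backpropagation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32),  # two hidden layers
                    max_iter=500, random_state=0).fit(X_train, y_train)
print(net.score(X_test, y_test))  # accuracy on digits it hasn't seen
```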

While many of its strengths have been outlined above, it is important to note that, like any new technology, machine learning has disadvantages. These mainly come in the form of issues arising down the line because the program had incomplete data in the early stages of its learning. Insufficient data causes errors in the algorithm’s predictions, ranging from minor inaccuracies to completely wrong answers from a system that would otherwise be reliably accurate. In certain cases, if the program is given information about people that excludes a certain population, the results it returns can even be outright discriminatory.
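
A small synthetic demonstration of the incomplete-data problem: the model below is trained on only half of the input range, is accurate there, and is confidently but badly wrong on the half it never saw.

```python
# Incomplete training data in miniature: a model fit on one slice of
# the inputs extrapolates poorly to the slice it was never shown.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200).reshape(-1, 1)
y = np.sin(3 * x).ravel()            # the true underlying relationship

seen = x.ravel() < 0.5               # training excludes half the range
model = LinearRegression().fit(x[seen], y[seen])

print(abs(model.predict(x[seen]) - y[seen]).mean())    # small error
print(abs(model.predict(x[~seen]) - y[~seen]).mean())  # much larger error
```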

Machine learning is still a new and growing field, yet it stands to completely revolutionize the way we do everything. According to a research vice president at Gartner, “Ten years ago, we struggled to find 10 machine learning based business applications. Now we struggle to find 10 that don’t use it.” And it rings true: companies like Yelp use machine learning for image curation, IBM has deployed machine-learning-based AI in hospitals and medical centers, in 2016 the NYPD deployed Patternizr, a machine learning tool used for predictive policing, and many other corporations and services have found ways to incorporate these programs into their business models. While these technologies are still very new, it’s plain to see that machine learning is already very powerful, and it is exciting to imagine what will come from it in the years ahead.

Works Cited:
[1]: Daffodil Software. “9 Applications of Machine Learning from Day-to-Day Life.” Medium, 17 June 2018, https://medium.com/app-affairs/9-applications-of-machine-learning-from-day-to-day-life-112a47a429d0.
[2]: The Royal Society. “What Is Machine Learning?” The Royal Society, 14 June 2017, https://royalsociety.org/topics-policy/projects/machine-learning/videos-and-background-information.
[3]: Meserole, Chris. “What Is Machine Learning?” Brookings, 25 Oct. 2019, https://www.brookings.edu/research/what-is-machine-learning.
[4]: Gupta, Mohit. “ML | Types of Learning — Part 2.” GeeksforGeeks, 1 May 2018, https://www.geeksforgeeks.org/ml-types-learning-part-2/?ref=lbp.
