It’s easy to assume that the field of artificial intelligence (AI), machine learning, and smart computers is a recent phenomenon born in the last decade. But you may be surprised to learn that the roots of machine learning stretch all the way back to the 1940s.

To be perfectly clear, it’s nearly impossible to pinpoint a single inventor or a definitive moment when machine learning was born. It’s more of a collective effort, with numerous individuals contributing inventions, algorithms, and frameworks. In this post, we’ll look at the history of machine learning, exploring the key milestones that shaped its development.

What is Machine Learning?

Machine learning is a branch of artificial intelligence (AI) that utilizes algorithms to analyze data, acquire knowledge from it, and subsequently employ that knowledge to make well-informed choices. A straightforward illustration of a machine learning algorithm can be found in on-demand music streaming platforms like Apple Music, Spotify, or YouTube Music. For these services, machine learning algorithms come into play when determining which songs or artists to suggest to you.

These algorithms analyze your preferences and compare them to those of other users with similar musical tastes. Virtually all services that offer automated suggestions use this framework to optimize the user experience. Although this capability is usually attributed to AI in the broad sense, in practice it is machine learning, a subdivision of AI, that underpins it.

Machine learning drives a wide range of tasks across diverse industries. It aids data security firms in detecting and eliminating malware and alerts finance professionals to lucrative trades. These algorithms can continuously learn, evolving beyond what was explicitly written into their source code.

The Early Days of Machine Learning

Machine learning came into the limelight in 1943 when Walter Pitts and Warren McCulloch introduced the first mathematical model of neural networks in their seminal scientific paper titled “A logical calculus of the ideas immanent in nervous activity.”

The brain cell interaction model

This model was Donald Hebb’s brainchild. Six years after Pitts and McCulloch’s groundbreaking paper, Hebb published his theories on neuron excitement and communication between neurons in a book titled “The Organization of Behavior.”

According to Hebb’s observation, when one cell repeatedly helps to fire another, the first cell’s axon develops synaptic knobs (or enlarges existing ones) in contact with the second cell’s soma. Applied to artificial neural networks, Hebb’s ideas describe a method of modifying the connections between nodes (also known as artificial neurons) and the changes that occur within individual neurons: when two nodes are active at the same time, the connection between them strengthens, and when they are activated separately, it weakens.

The word “weight” describes these relationships. When two nodes tend to be both positive or both negative at the same time, their weight is strongly positive; when they tend to have opposite signs, their weight is strongly negative.
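To make the idea concrete, here is a minimal sketch of a Hebbian-style weight update in Python. The three-node network, signed activations, and learning rate are made-up illustrations, not Hebb’s own formulation:

```python
import numpy as np

def hebbian_update(weights, activations, learning_rate=0.1):
    """Strengthen the connection between nodes that are active together.

    With signed activations, two nodes sharing a sign push their weight
    positive, while nodes with opposite signs push it negative.
    """
    # Outer product: w_ij changes in proportion to activation_i * activation_j
    return weights + learning_rate * np.outer(activations, activations)

weights = np.zeros((3, 3))  # three fully connected nodes, no initial structure
weights = hebbian_update(weights, np.array([1.0, 1.0, -1.0]))
print(weights)  # nodes 0 and 1 gain a positive weight; each gains a negative weight with node 2
```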

The Turing test

In 1950, Alan Turing introduced the Turing test to determine whether a computer possesses true intelligence. The test evaluates whether a computer can convince a human interrogator that it, too, is human. Turing presented the idea in his paper “Computing Machinery and Intelligence,” written while he was working at the University of Manchester.

The paper begins by posing the question, “Can machines think?”

The Birth of “Machine Learning”

In 1952, Arthur Samuel of IBM developed one of the first computer learning programs: a program that played checkers. Because computer memory was severely limited at the time, Samuel used a search technique now known as alpha-beta pruning. His design incorporated a scoring function based on the positions of the pieces on the board.

This function aimed to estimate each side’s likelihood of winning. To determine its next move, the program employed a minimax strategy, searching ahead on the assumption that the opponent would also play its best move. Samuel also devised several mechanisms to improve his program over time. One was “rote learning,” in which the program stored every position it had encountered and combined this information with the values of the reward function. Samuel went on to coin the term “machine learning” in 1959, in a paper describing this checkers work.
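To illustrate the search technique rather than Samuel’s actual code, here is a minimal minimax sketch with alpha-beta pruning. The game-state interface (children, is_terminal, score) is hypothetical:

```python
# Minimax search with alpha-beta pruning: explore the game tree, but skip
# branches that can no longer affect the final decision.
def alphabeta(state, depth, alpha, beta, maximizing):
    if depth == 0 or state.is_terminal():
        return state.score()  # heuristic estimate of which side is winning
    if maximizing:
        value = float("-inf")
        for child in state.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizer will never let play reach here
        return value
    else:
        value = float("inf")
        for child in state.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # prune: the maximizer already has a better option
        return value
```

Pruning matters precisely because of the memory and speed limits Samuel faced: the search returns the same move plain minimax would, while examining far fewer positions.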

The First Neural Network Launches

In 1957, while working at the Cornell Aeronautical Laboratory, Frank Rosenblatt combined Donald Hebb’s model of brain cell interaction with Arthur Samuel’s machine learning work to create the first computer neural network. He named it the perceptron, and the program aimed to simulate the brain’s thought process.

Although conceived as a hardware device rather than a program, the perceptron was first implemented in software on the IBM 704 computer. It was then built into a specially crafted machine, the Mark 1 perceptron, designed specifically for image recognition. By separating the software and algorithms from the hardware, however, the perceptron approach became versatile enough to run on other machines.
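Here is a minimal sketch of the perceptron learning rule on made-up two-dimensional data, rather than the Mark 1’s image inputs:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, learning_rate=0.1):
    """Learn a linear decision boundary; y holds labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if xi @ w + b > 0 else -1
            if prediction != target:          # adjust the weights only on mistakes
                w += learning_rate * target * xi
                b += learning_rate * target
    return w, b

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)  # w and b now separate the two classes
```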

Basic Pattern Recognition Surfaces

The next significant step for machine learning came in 1967, a decade after the perceptron, when the nearest neighbor algorithm was created, marking the start of basic pattern recognition. The algorithm found early application in mapping routes and was among the first methods used to tackle the traveling salesperson’s problem of determining the most efficient route.

Simply put, the algorithm lets a salesperson starting in any city find a reasonably short tour through all the cities on their itinerary by always traveling to the nearest unvisited city next. Marcello Pelillo, who has traced the history of the “nearest neighbor rule,” points to the renowned 1967 paper by Cover and Hart as its formal origin.
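Here is a minimal sketch of that greedy route-building idea, with made-up city coordinates:

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Build a tour by always hopping to the closest unvisited city."""
    unvisited = set(range(len(cities)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        last = cities[tour[-1]]
        closest = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(closest)
        unvisited.remove(closest)
    return tour

cities = [(0, 0), (5, 2), (1, 4), (6, 6), (2, 1)]  # made-up (x, y) positions
print(nearest_neighbor_tour(cities))  # e.g. [0, 4, 1, 3, 2]
```

The tour is not guaranteed to be optimal, but it is fast to compute, which is exactly the trade-off that made the heuristic attractive for early route-mapping applications.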

Multilayers Spark an Evolution

During the 1960s, a milestone occurred in neural network research: the discovery of multilayers. Researchers found that giving the perceptron two or more layers provided significantly more processing power than a single layer, opening up new possibilities for advanced neural network designs.

Following the perceptron, which introduced the concept of “layers” in networks, an ever-expanding range of neural network types emerged. In particular, the use of multiple layers gave rise to feedforward neural networks and the backpropagation technique.

Backpropagation

Developed in the 1970s, backpropagation enables a network to adjust the neurons, or nodes, in its hidden layers to adapt to new situations. The name refers to the “backward propagation of errors”: errors are measured at the output and then distributed backward through the network’s layers so that each layer can learn from them. Today, backpropagation is widely used to train deep neural networks.
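A minimal sketch of the idea on a toy one-hidden-layer network; the layer sizes, data, and learning rate are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))    # 8 samples with 3 input features
y = rng.normal(size=(8, 1))    # toy regression targets
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

for step in range(100):
    hidden = np.tanh(X @ W1)              # forward pass: hidden activations
    prediction = hidden @ W2              # forward pass: network output
    error = prediction - y                # error measured at the output layer
    grad_W2 = hidden.T @ error            # backward: gradient at the output layer
    grad_W1 = X.T @ ((error @ W2.T) * (1 - hidden ** 2))  # backward: chain rule through tanh
    W1 -= 0.01 * grad_W1                  # gradient-descent weight updates
    W2 -= 0.01 * grad_W2
```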

Feedforward neural networks

Feedforward neural networks are artificial neural networks (ANNs): advanced versions of the perceptron, equipped with hidden layers to handle more complex tasks, and a primary tool of machine learning. These networks consist of an input layer, an output layer, and one or more hidden layers that transform the input into something the output layer can use. The hidden layers excel at identifying intricate patterns that would be too subtle for a human programmer to detect and hand-code.
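The classic illustration is XOR, a pattern no single-layer perceptron can represent but one that a single hidden layer handles easily. The weights below are hand-picked for the example rather than learned:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all four input combinations
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])                     # input -> hidden weights
b1 = np.array([-0.5, -1.5])                     # hidden units act like OR and AND
W2 = np.array([1.0, -2.0])                      # output combines: OR and not-AND
b2 = -0.5

hidden = (X @ W1 + b1 > 0).astype(float)        # step activation in the hidden layer
output = (hidden @ W2 + b2 > 0).astype(int)
print(output)  # [0 1 1 0], the XOR of each input pair
```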

Machine Learning and Artificial Intelligence Split

During the late 1970s and early 1980s, artificial intelligence (AI) research shifted towards logical, knowledge-based methods and away from data-driven approaches such as neural networks.

As a result, computer science and AI researchers abandoned neural network research, creating a divide between artificial intelligence and machine learning. Before this split, machine learning had served primarily as a training ground for AI. Reorganized as a separate field, the machine learning industry struggled for almost a decade, and its main objective shifted from training artificial intelligence to solving practical problems and offering useful services.

Instead of relying on AI research approaches, the field adopted methods from probability theory and statistics, while work on neural networks quietly continued. In the 1990s, machine learning experienced significant growth and success, aided greatly by the expansion of the Internet, which provided access to vast amounts of digital data and allowed machine learning services to be shared widely online.

The Birth of Boosting and the Concept of Learnability

Robert Schapire introduced boosting in a 1990 paper called “The Strength of Weak Learnability.” Boosting reduces bias in supervised learning by transforming weak learners into strong ones: Schapire showed that a group of weak learners can be combined into a single powerful learner. Weak learners are classifiers only slightly better than random guessing, while strong learners are highly accurate classifiers.

Boosting algorithms combine multiple rounds of learning from weak classifiers to create a strong classifier. Each weak classifier is assigned a weight based on its accuracy. The data is also assigned weights, which are adjusted during the process. If a data point is classified incorrectly, its weight increases, while correctly classified data points have their weights reduced. This way, future weak classifiers focus more on the previously misclassified data points, improving their accuracy.
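A minimal sketch of one AdaBoost-style reweighting round; the weak classifier passed in as `stump` is a hypothetical stand-in for any learner slightly better than chance:

```python
import numpy as np

def adaboost_round(X, y, sample_weights, stump):
    """One boosting round: score a weak learner, then reweight the data.

    Assumes labels in {-1, +1} and a weighted error strictly between 0 and 1.
    """
    predictions = stump(X)
    missed = predictions != y
    error = np.sum(sample_weights[missed])          # weighted error rate
    alpha = 0.5 * np.log((1 - error) / error)       # this learner's say in the vote
    # Misclassified points gain weight; correctly classified points lose weight
    sample_weights = sample_weights * np.exp(alpha * np.where(missed, 1.0, -1.0))
    return alpha, sample_weights / sample_weights.sum()

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1, 1, -1, -1])
weights = np.full(len(y), 0.25)                     # start with uniform weights
stump = lambda X: np.where(X[:, 0] < 3.5, 1, -1)    # weak rule: threshold at 3.5
alpha, weights = adaboost_round(X, y, weights, stump)
print(alpha, weights)  # the point the stump missed now carries the most weight
```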

Boosting algorithms differ in how they assign weights to training data points. One popular algorithm is AdaBoost, the first boosting algorithm that adapts to its weak learners. Later algorithms include BrownBoost, LPBoost, MadaBoost, TotalBoost, XGBoost, and LogitBoost. Many boosting algorithms fit within a framework called AnyBoost.

Machine Learns Speech Recognition

The next significant machine learning development after boosting came in speech recognition.

In 1997, Sepp Hochreiter and Jürgen Schmidhuber published the long short-term memory (LSTM) deep learning technique, which later became central to training speech recognition systems. An LSTM allows a neural network to remember events that happened thousands of steps earlier in a sequence. By 2007, LSTMs had begun to outperform traditional speech recognition programs. Then, in 2015, Google’s speech recognition achieved a remarkable 49 percent improvement in performance by using an LSTM trained with connectionist temporal classification (CTC).
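As a minimal sketch of what using an LSTM looks like in practice (in PyTorch, with made-up sizes standing in for audio features):

```python
import torch
import torch.nn as nn

# One LSTM layer: the recurrent cell state carries a memory forward in time,
# so information from early frames can influence predictions much later.
lstm = nn.LSTM(input_size=13, hidden_size=64, batch_first=True)

frames = torch.randn(1, 200, 13)    # 1 utterance, 200 time steps, 13 features
outputs, (h_n, c_n) = lstm(frames)  # c_n is the long-term memory cell state
print(outputs.shape)                # torch.Size([1, 200, 64])
```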

Facial Recognition Becomes a Reality

As we stepped into the 21st century, machine learning had grown and matured into a field as strong and advanced as artificial intelligence itself. In 2006, the National Institute of Standards and Technology ran the Face Recognition Grand Challenge to test how well different face recognition algorithms could identify people.

The challenge used 3D face scans, iris images, and high-resolution face images to see which algorithms performed best. The new algorithms were ten times more accurate than those used in 2002 and a hundred times more accurate than those used in 1995. Some of the algorithms even outperformed humans at recognizing faces and could tell identical twins apart.

Six years later, in 2012, Google’s X Lab created a machine learning system that independently learned to find and identify videos containing cats. Two years after Google’s success, Facebook introduced DeepFace, an algorithm that can recognize and verify individuals in photos about as accurately as humans can.

The Current State of Machine Learning and Its Legacy Across Time

Some 80 years after its earliest roots, machine learning has become a driving force behind groundbreaking technological advancements. It plays a crucial role in the emerging field of self-driving vehicles and even assists in exploring distant worlds, helping astronomers identify exoplanets.

Stanford University defines machine learning as “the science of enabling computers to take actions without explicit programming.” This definition captures the essence of this field, which has sparked many innovative concepts and technologies. These include supervised and unsupervised learning, advanced algorithms for robots, the Internet of Things, powerful analytics tools, interactive chatbots, and much more.

Widespread applications

Here are seven common applications of machine learning:

  • Machine learning is used in analyzing sales data to streamline operations.
  • Real-time mobile personalization enhances user experiences through tailored content.
  • Fraud detection algorithms spot changes in patterns for improved security.
  • Product recommendations leverage machine learning for personalized customer experiences.
  • Learning management systems utilize machine learning to assist in decision-making.
  • Dynamic pricing adjusts prices based on demand, optimizing revenue.
  • Natural language processing enables communication between computers and humans.

Machine learning models continuously improve their accuracy as they learn from new data.

New computing technologies enhance the scalability and efficiency of machine learning algorithms. Machine learning can effectively address various organizational complexities when combined with business analytics.

The Legacy Shaping the Future

In the future, we expect machine learning to get even better at making predictions from data that has no labels. This matters because it allows algorithms to find hidden patterns or groups in the data on their own, helping businesses better understand their markets and customers.
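A minimal sketch of that idea using k-means clustering, a standard unsupervised method; the two-cluster dataset here is synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two made-up groups of points with no labels attached
X = np.vstack([np.random.randn(50, 2) + [0, 0],
               np.random.randn(50, 2) + [5, 5]])

# k-means discovers the grouping on its own
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_[:5])  # the cluster assigned to each of the first 5 points
```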

Additionally, software applications will become smarter and more interactive. They will use cognitive services powered by machine learning to recognize images, understand speech, and respond to our commands, and we can expect more intelligent applications with these features to reach the market.


Hit the share button below, and let us know your thoughts on this topic!