
Artificial intelligence (AI) may seem like a relatively new technological advancement, but its roots stretch back more than a century. Although AI has only recently gained widespread popularity, the groundwork was laid in the early 1900s, and significant advancements didn’t arrive until the 1950s. Even then, those advancements were only possible thanks to the contributions of early experts from diverse fields. Their pioneering work paved the way for the development of AI as we know it today.

Exploring the history of AI allows us to gain a comprehensive understanding of its current state and potential future developments. But before we get into the business of AI history, let’s do a quick refresher.

Related: Top 10 AI Tools and Platforms in 2023

What is Artificial Intelligence?

Artificial intelligence is a field of study within computer science that focuses on developing systems capable of emulating human intelligence and problem-solving capabilities. AI accomplishes this by leveraging vast amounts of data, which it processes and analyzes.


Through this data-driven approach, AI systems learn from their previous experiences, enabling them to optimize their performance and enhance their capabilities over time. The ultimate goal is to continuously improve and refine their functionality for future tasks. Conventional computer programs, in contrast, require human intervention to address bugs, enhance processes, and make improvements.
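To make the contrast concrete, here is a minimal Python sketch (the spam-filter scenario, function names, and numbers are all invented for illustration, not taken from any real system). The conventional program’s rule is fixed by a programmer, while the data-driven version derives its rule from labeled examples and can re-derive it as new data arrives.

```python
# Conventional program: the rule is hard-coded and only changes when a human edits it.
def is_spam_hardcoded(word_count: int) -> bool:
    return word_count > 100  # fixed threshold chosen by a programmer

# Data-driven program: the rule (a threshold) is learned from labeled examples,
# so it can be re-learned and improved as more data accumulates.
def learn_threshold(examples: list[tuple[int, bool]]) -> float:
    spam = [count for count, is_spam in examples if is_spam]
    ham = [count for count, is_spam in examples if not is_spam]
    # Place the decision boundary midway between the two class averages.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

examples = [(120, True), (150, True), (30, False), (45, False)]
threshold = learn_threshold(examples)
print(threshold)       # 86.25 with the toy data above
print(90 > threshold)  # True: a 90-word message is flagged by the learned rule
```

The learned threshold improves automatically whenever more labeled examples are added, which is the essence of the data-driven approach described above.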

The Origins of AI

Having cleared the air on what AI is and how it differs from conventional computer programs, let’s get into today’s headliner: the history of AI.

Early concepts of AI in ancient civilizations

The concept of “artificial intelligence” dates back thousands of years, originating from ancient philosophers who pondered profound questions concerning the nature of life and mortality.

Centuries ago, inventors crafted fascinating contraptions known as “automatons” that exhibited mechanical movements and operated without human intervention. These remarkable creations represented early examples of autonomous machines. Derived from ancient Greek, “automaton” roughly translates to “acting of one’s own will.”

Historical records mention an early automaton dating back to around 400 BCE: a mechanical pigeon crafted by a close associate of the renowned philosopher Plato.
Nearly two millennia later, around 1495, Leonardo da Vinci crafted another prominent automaton, his famous mechanical knight.

The emergence of the modern AI concept in the early 1900s

Although the concept of autonomous machines dates back to ancient times, we’ll be focusing our energy on developments in the 20th century. Engineers and scientists started making significant advancements toward developing modern-day AI during this era.

In the early 1900s, the fascination with artificial humans in various forms permeated the media, sparking inquiries among scientists from multiple disciplines. This raised a fundamental question: could an artificial brain be created?

In fact, certain innovators during that time managed to create primitive iterations of what we now recognize as “robots.” Primarily steam-powered, these primitive robots could display facial expressions and, in some instances, even walk.

Here are notable dates that highlight the development of AI in this period:

  • 1921: The Czech playwright Karel Čapek gained recognition for his science fiction play titled “Rossum’s Universal Robots,” which notably introduced the concept of “artificial people” referred to as robots. This work is recognized as the earliest documented usage of the term.
  • 1929: Professor Makoto Nishimura, hailing from Japan, achieved the distinction of constructing the first Japanese robot known as Gakutensoku.
  • 1949: The computer scientist Edmund Callis Berkeley authored the book “Giant Brains, or Machines That Think,” which drew comparisons between the emerging computer models and the complexities of the human brain.

The Birth of AI as a Field of Study and The Early Years of Research: 1950-1956

During this period, the fascination with AI reached its peak. Alan Turing’s publication, “Computing Machinery and Intelligence,” in 1950, played a pivotal role in shaping the field. Turing postulated that humans utilize available information and reasoning to address problems and make decisions. Hence, the question arises: why can’t machines employ a similar approach?

With this question as a compass, Turing presented a logical framework outlining the construction of intelligent machines and proposed methods for evaluating their intelligence. Unfortunately, due to technological and financial constraints, Turing couldn’t get to work immediately to prove his theory.

The constraints

Before 1949, computers faced a critical limitation regarding intelligence: they could execute commands but not store them. In other words, computers could receive instructions but not retain a memory of their actions.

In the early 1950s, computing was extremely expensive, with leasing costs for a computer running as high as $200,000 per month. As a result, only esteemed universities and prominent technology companies had the financial means to venture into this unexplored territory.

These were the significant obstacles in the development of machine intelligence. To obtain funding, it became clear that presenting a proof of concept and gaining support from influential figures were essential steps.

Overcoming the obstacles

Five years after Turing presented his logical framework on machine intelligence, a significant breakthrough occurred with the initiation of a proof of concept called Logic Theorist. Allen Newell, Cliff Shaw, and Herbert Simon carried out this pioneering work. Funded by the Research and Development (RAND) Corporation, the Logic Theorist program was developed to simulate human problem-solving abilities.

The Logic Theorist is widely regarded as the first artificial intelligence program. It debuted at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956, an event organized by John McCarthy and Marvin Minsky. The conference brought together leading researchers from various disciplines, aiming to explore the possibilities of creating “artificial intelligence.” During the conference, participants discussed problem-solving, symbolic reasoning, and machine learning, laying the foundation for AI as an academic field. The event held at Dartmouth in 1956 marked a significant milestone for AI research. Its impact cannot be overstated, as it catalyzed the progress made in the field over the following two decades.

Rapid Growth of AI and Maturation: 1957-1974

Between 1957 and 1974, there was a period of great progress in AI. During this time, computers became faster — allowing them to store and process larger amounts of information — cheaper and more accessible.


Machine learning algorithms also made significant advancements, and researchers became more skilled at selecting the right algorithm for a given problem.
Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA were notable examples of this progress: the former showed promise in solving a wide range of problems, while the latter demonstrated the ability to interpret and respond to human language.

The government comes into the picture

The accomplishments of this period, together with the support and endorsement of influential researchers (the DSRPAI attendees), played a vital role in securing funding for AI research. Government agencies like DARPA recognized the potential of AI and provided financial support to various institutions engaged in AI research. The government was particularly interested in machines that could transcribe and translate spoken language and process large volumes of data efficiently.

Optimism, expectations, and more obstacles

With growing interest from various parties, the government included, optimism soared and expectations reached new heights. In 1970, Marvin Minsky boldly told Life Magazine that within three to eight years, a machine with the general intelligence of an average human being would become a reality. He couldn’t have been more wrong. Despite a fundamental proof of principle, significant progress still lay ahead in natural language processing, abstract thinking, and self-recognition, the ultimate goals of artificial intelligence. As the field of AI emerged, it faced numerous daunting challenges.

The most prominent among them was the severe limitation in computational power, which hindered the ability of computers to handle substantial amounts of information efficiently. To effectively communicate, for instance, a person or machine must possess knowledge of numerous words and their meanings. They must also understand how these words can be combined and used in different situations.

Hans Moravec, a student of McCarthy, expressed the opinion that computers were still millions of times too weak to demonstrate intelligence. He was one of many in the field who shared this skepticism regarding the viability of AI. As patience diminished, funding for AI research also dwindled, causing progress to slow for a decade.

Reignition and AI Boom: 1980-1987

In the 1980s, AI experienced a resurgence due to the combination of new algorithms and increased funding. John Hopfield and David Rumelhart popularized “deep learning,” allowing computers to learn from experience using advanced techniques. Edward Feigenbaum developed expert systems that imitated human experts’ decision-making abilities. These systems could consult with domain experts and guide non-experts based on learned knowledge from various scenarios.
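To illustrate the idea, here is a minimal Python sketch of an expert system; the rules and component names are invented purely for this example and are not taken from Feigenbaum’s actual systems. The key point is that the “intelligence” lives in hand-encoded if-then rules elicited from a domain expert, which the program then applies on behalf of a non-expert.

```python
# A toy expert system: domain knowledge captured as explicit if-then rules.
# Rules and components are hypothetical, for illustration only.
RULES = [
    {"if": {"use": "gaming"},  "then": {"gpu": "discrete", "ram_gb": 32}},
    {"if": {"use": "office"},  "then": {"gpu": "integrated", "ram_gb": 16}},
    {"if": {"budget": "low"},  "then": {"storage": "hdd"}},
    {"if": {"budget": "high"}, "then": {"storage": "ssd"}},
]

def recommend(requirements: dict) -> dict:
    """Apply every rule whose conditions match the customer's stated requirements."""
    recommendation = {}
    for rule in RULES:
        if all(requirements.get(key) == value for key, value in rule["if"].items()):
            recommendation.update(rule["then"])
    return recommendation

print(recommend({"use": "gaming", "budget": "high"}))
# {'gpu': 'discrete', 'ram_gb': 32, 'storage': 'ssd'}
```

Commercial expert systems of the era chained thousands of such rules through an inference engine, but the principle was the same: expertise was encoded by hand rather than learned from data.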

Here are notable dates that highlight the development of AI in this period:

  • 1980 — The American Association for Artificial Intelligence (AAAI) held its inaugural conference at Stanford University.
  • 1981 — A pioneering expert system called XCON (eXpert CONfigurer) emerged in the commercial market. Its purpose was to streamline the process of ordering computer systems by automatically selecting the appropriate components based on the customer’s requirements.
  • 1981 — The Japanese government made a significant investment of $850 million (equivalent to over $2 billion in today’s currency) towards the Fifth Generation Computer project. The project’s objective was to develop advanced computers capable of translating languages, engaging in human-like conversations, and demonstrating human-level reasoning abilities.
  • 1984 — The AAAI warned about an impending “AI Winter,” marked by reduced funding and declining interest in AI research.
  • 1985 — At the AAAI conference, a program called AARON, which can draw autonomously, was showcased.
  • 1986 — Ernst Dickmanns and his team at Bundeswehr University Munich successfully developed and showcased the first driverless car. Without a human driver, the vehicle could travel at up to 55 mph on obstacle-free roads.
  • 1987 — Alactrious Inc. introduced Alacrity, the pioneering strategy managerial advisory system, to the market. Alacrity utilized a sophisticated expert system comprising over 3,000 rules, making it the first of its kind.

AI Winter and Resurgence

This period stretched from the late 1980s into the early 2010s.

Winter arrives: 1987-1993

Just as the AAAI had warned, winter came. Nearly a decade after its inception, Japan’s FGCP had failed to meet most of its ambitious goals. Funding dried up, and AI fell out of the limelight. This AI winter lasted from the late ’80s to the early ’90s.

The Resurgence: 1993-2011

Contrary to expectations, AI flourished during this period of limited government funding and decreased public enthusiasm. Many of the field’s major objectives were accomplished throughout the mid-to-late 1990s and the 2000s.

In 1997, a historic event occurred as IBM’s Deep Blue, a computer program designed to play chess, defeated the reigning world chess champion and grandmaster, Garry Kasparov. Later that same year, Dragon Systems released speech recognition software for Windows, marking a significant step toward machine interpretation of spoken language.

At the time, machines appeared capable of tackling any challenge thrown their way. Kismet, a robot created by Cynthia Breazeal, was a prime case in point: it could recognize and express emotions. Development continued from the early 2000s up to 2011, during which period NASA landed and navigated two rovers on Mars without human intervention.

Microsoft introduced the Xbox 360 Kinect, a groundbreaking gaming device that could detect and interpret body movements for immersive gameplay. That was also when the first widely used virtual assistant, Apple’s Siri, was launched.

Artificial General Intelligence: 2012-present

In recent years, AI has seen widespread use in everyday tools like virtual assistants and search engines. This period also marked the rise of Deep Learning and Big Data, driving further advancements in the field.

Here are notable dates that have highlighted the development of AI from 2012 till now:

  • 2012: Google trained a neural network to recognize cats from unlabeled images.
  • 2015: An open letter signed by influential figures called for a ban on autonomous weapons in warfare.
  • 2016: Hanson Robotics created Sophia, a humanoid robot capable of emotions and communication.
  • 2017: Facebook’s AI chatbots developed their own language during a negotiation experiment.
  • 2018: Alibaba’s AI surpassed humans on a reading comprehension test.
  • 2019: Google’s AlphaStar achieved Grandmaster level in StarCraft 2.
  • 2020: OpenAI introduced GPT-3, a language model that produces human-like content.
  • 2021: OpenAI introduced DALL-E, a model capable of generating images from text descriptions.

Related: How to use Google’s AI Chatbot

2022 and 2023 so far

AI has permeated numerous industries over the last two years and shaken up the status quo globally. Here is a three-point summary of the most significant developments in the field so far.

#1 Democratization of AI

AI technology is becoming more accessible to businesses and individuals thanks to the development of cloud-based AI platforms and the open sourcing of AI tools.

#2 Advancements in natural language processing

AI systems are getting better at understanding and processing human language, leading to new applications in areas like customer service, healthcare, and education.

#3 Development of generative AI

Generative AI systems can create new content, such as images, text, and music, which opens up new possibilities for creativity and innovation.

So, what changed?

Although our approach to coding artificial intelligence hasn’t fundamentally changed, one crucial factor did: the limitations of computer storage that hindered us three decades ago ceased to be an obstacle.

Following Moore’s Law, the observation that computer memory and speed double roughly every two years, our computational capabilities have not only caught up with our requirements but often exceeded them.

This pattern helps explain the fluctuating nature of AI research: we push AI capabilities to match our current computational power, reach a saturation point, and await the advancements of Moore’s Law to bridge the gap once again.
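As a rough back-of-the-envelope sketch (assuming, for simplicity, one doubling every two years), capacity grows as 2^(years/2), so waiting a few decades multiplies available computing power by factors in the tens of thousands:

```python
# Back-of-the-envelope growth under a "doubling every two years" assumption.
def capacity_multiplier(years: float, doubling_period_years: float = 2.0) -> float:
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 30):
    print(f"{years} years -> roughly {capacity_multiplier(years):,.0f}x the capacity")
# 10 years -> roughly 32x
# 20 years -> roughly 1,024x
# 30 years -> roughly 32,768x
```

Under that assumption, a task that is a thousand times too demanding for today’s hardware becomes feasible after roughly twenty years of waiting, which is exactly the catch-up dynamic described above.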

The Future of AI

Now that we’ve reached the present, what lies ahead for AI? While no one can predict the future with certainty, one thing seems inevitable: as AI systems become more sophisticated, they will need to work more closely with humans to achieve their goals. This will increase the demand for innovative approaches to fostering trust, transparency, and accountability in human-machine interactions.

Another trend we can safely bet on is the growing adoption of AI technologies by businesses of all sizes and sectors. It should come as no surprise that the extensive integration of AI will have a profound and transformative effect on the job market. This impact will manifest as a combination of job losses due to automation and the emergence of new opportunities.

Moreover, we can anticipate remarkable advancements in robotics, autonomous vehicles, and other areas within the AI domain.


Hit the share button below, and let us know your thoughts on this topic!