From Turing to ChatGPT: A Brief History of AI

Konstantin Pilz, 28 April 2023

 

We founded KIRA because the development of artificial intelligence (AI) has reached a critical point. We believe that humanity should proceed particularly thoughtfully and cautiously in AI development from now on. At present, this is not happening. But how did we get here?

The Beginnings

In 1950, people's daily lives were entirely analog: Shopping amounts were calculated in one's head, letters were written instead of emails, and information had to be looked up in libraries. At that time, the first computers were built, for example by mathematician Alan Turing. These computers filled entire halls, were slow, and could do little more than multiply a few numbers. Nevertheless, Turing was driven by a question: Could computers one day exhibit intelligent behavior?

Alongside Marvin Minsky and John McCarthy, Turing was one of the pioneers of AI, a sub-field of computer science. Their goal was to make computers imitate intelligent human behavior. However, after an initial phase of optimism, it quickly became clear that the computers of the time were too weak and slow to solve complex problems.

Nonetheless, progress was made. One of the first notable successes came from Arthur Samuel, who beginning in the 1950s wrote a program that could play the board game checkers. Instead of displaying the game board on a screen, the machine indicated its moves using small lights. Although professionals easily defeated the program, after some refinement it managed to beat less experienced players.

Slow Progress

In the following decades, so-called AI winters and AI springs alternated. In some phases, there was much hope and significant investment; then, expectations were disappointed, and little research took place.

As computers became more powerful year by year, they could also be used for increasingly demanding purposes. In the 1980s, the defense company Martin Marietta developed one of the first self-driving cars, which reached a proud 31 km/h (about 19 mph) on empty, straight roads.

AI made worldwide headlines for the first time in 1997. That year, the chess computer Deep Blue defeated world champion Garry Kasparov. Deep Blue relied on brute-force search: it evaluated up to 200 million chess positions per second, ranking them with hand-crafted rules and drawing on a large database of past grandmaster games.
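Deep Blue's actual engine was vastly more sophisticated, but the core idea of looking ahead through possible moves (minimax search, without the many refinements Deep Blue added) can be sketched on a toy game, not chess: players alternate taking 1 or 2 stones, and whoever takes the last stone wins. This is purely illustrative, not Deep Blue's code.

```python
def best_move(stones):
    """Exhaustively search a toy game: players alternate taking
    1 or 2 stones; whoever takes the last stone wins.
    Returns (score, move) from the current player's view:
    +1 means a forced win, -1 a forced loss."""
    if stones == 0:
        return -1, None            # no move left: the previous player just won
    best = (-2, None)
    for take in (1, 2):
        if take <= stones:
            # the opponent's best outcome is our worst, hence the minus sign
            score = -best_move(stones - take)[0]
            if score > best[0]:
                best = (score, take)
    return best

print(best_move(4))   # → (1, 1): taking 1 stone leaves the opponent a losing position
```

Chess works the same way in principle, except the tree of possible moves is astronomically larger, which is why Deep Blue needed specialized hardware and clever pruning rules.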

Machine Learning Opens New Possibilities

Systems like Deep Blue, which were largely based on fixed rules, were soon displaced by a more flexible approach: Machine Learning (ML). In this approach, a system independently recognizes patterns in large amounts of data. Mathematical structures called “neural networks” proved particularly useful for this purpose. Inspired by the workings of our brains, they consist of many simulated neurons connected to each other. Through a training process, in which the AI is fed large amounts of data – often millions of images, numbers, or texts – the network learns to recognize specific patterns. We will explain in more detail how this works in a later blog entry.
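A minimal, purely illustrative sketch of this training idea: a single simulated neuron learns the logical AND pattern from four examples, nudging its connection strengths after every mistake. Real networks work on the same principle, but with millions of neurons and far larger datasets.

```python
# Training data: pairs of inputs and the desired output (logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection strengths ("weights")
b = 0.0          # activation threshold offset ("bias")

def predict(x):
    # the neuron "fires" (outputs 1) if its weighted input exceeds 0
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                   # repeatedly show the neuron the examples
    for x, target in data:
        error = target - predict(x)   # how far off was the guess?
        w[0] += 0.1 * error * x[0]    # nudge the weights toward the right answer
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1], matching the targets
```

After training, the neuron has "learned" the pattern without anyone programming the rule explicitly; that shift from hand-written rules to learned ones is what sets ML apart from systems like Deep Blue.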

Neural networks slowly gained traction in the early 2000s, giving AI new capabilities. Programs like Siri learned to understand spoken language; other AI systems defeated professional players in complex video games like StarCraft. In 2014, Facebook developed an AI that could identify faces about as well as a human. Soon after, Google began integrating neural networks into its famous search algorithm, and Instagram, Twitter, Tinder, and many other platforms adopted the method to suggest relevant content and advertising.

A Turning Point: Deep Learning

As computers grew more powerful year by year, ever larger neural networks became feasible. This marked the beginning of the Deep Learning era: neural networks now had many more layers of neurons stacked on top of each other. This accelerated AI development and enabled several breakthroughs.

About ten years ago, it was still a real challenge for AIs to distinguish a dog from a cat. Now, AI can recognize hundreds of different dog breeds, more than most people can. Language processing has advanced significantly. Chatbots have reached a completely new dimension with ChatGPT and can dynamically respond to various situations. AI-generated images are so realistic that they are hardly distinguishable from actual photos.

Importantly, recent advances in AI technology have mainly been due to more computing power. Researcher Richard Sutton described this as a "bitter lesson": progress in AI does not come from a better understanding of intelligence. Instead, we equip increasingly general systems with more and more computing power without understanding why modern AI systems work so well. Consequently, we often cannot explain the reasons behind a particular AI decision.

Outlook: Possible Risks

As our computers continue to become exponentially more powerful year by year, and investments in AI keep increasing, AI systems are likely to become more complex and harder for humans to understand in the coming years.

Given this rapid development, it is difficult to predict what tasks AI will be able to solve next. A group of prominent researchers and business leaders recently called for a six-month pause in the development of even more powerful AI systems. They warn that the impact of AI on society is unpredictable. AI could soon replace many jobs, lead to economic inequality or destabilize democratic discourse through mass-produced fake images and fake news, they say.

The letter also warns that uncontrollable AI systems could be created in the future and that, to prevent this, more should be invested in AI safety research. One important research area is ensuring that AI understands and internalizes human values. The signatories therefore called on governments worldwide to regulate how AI should be developed.

KIRA was founded to ensure that future AI systems are safe and fair and do not have negative consequences for society. No matter which potential impacts of AI concern us the most: it is in everyone's interest to quickly agree on adequate rules for the further development of AI.
