Fundamentals of Artificial Intelligence – Part Two: Origins
Did you enjoy our first post on learning about Artificial Intelligence (AI) for Linux users? You were probably already waiting for the second part. If you haven't read the first one, it briefly and directly explained what AI is, covering the types that already exist (Narrow or Weak, Generative, and Agentic) as well as those expected in the future (General or Strong, and Superintelligence). In this second part, we focus on how Artificial Intelligence has developed over time, highlighting some of the most important milestones that have given life and relevance to this field.
Why? Because, regardless of the operating system we use, Artificial Intelligence is undoubtedly a technology that has arrived to become deeply integrated into everything. By saving us time and working hours, it will most likely, little by little, displace those who least understand its existence and use. Therefore, there is nothing better than starting to build some context about what it is, how it originated, how it has developed, and how to use it, among many other things.
Artificial Intelligence has a lot to do with the Linux universe, so today we offer you this second part to learn more about it.
But before we delve into this second part of the exciting and surprising field of Artificial Intelligence, and more specifically into its classic division and some notable historical events, we recommend exploring the previous publication in this series once you finish reading this one:
Artificial Intelligence is a field of computer science dedicated to developing systems capable of simulating human intelligence processes, such as reasoning, learning, perception, and creativity. All of this is achieved through algorithms and large volumes of data, managed by powerful machines or advanced/specialized hardware, which are capable of analyzing information, solving complex problems, making autonomous decisions, and improving their performance over time.

Part Two: Division of Artificial Intelligence in Time
While it is true that AI can be classified by its degree of computing power or intelligence (current: Narrow or Weak, Generative, and Agentic; future: General or Strong, and Artificial Superintelligence), a more classic division has been the following: Symbolic Artificial Intelligence and Connectionist Artificial Intelligence. Below, we provide some interesting details about each.
Symbolic Artificial Intelligence
Symbolic Artificial Intelligence (also known as “classic AI” or GOFAI – Good Old-Fashioned AI) is a branch of computer science based on the premise that intelligence can be represented through the manipulation of symbols and logical rules. Unlike current deep learning, which learns from patterns in large amounts of data, symbolic AI is built “from the top down,” directly encoding human knowledge into the machine.
Outstanding Features
Its most notable features include:
- Independence from data: it doesn't need millions of examples to learn; it needs a human expert to define the rules of the game.
- Applied logical reasoning: it uses rules of inference (if A is true, then B follows) and relies on formal logic to reach conclusions.
- Knowledge representation: it uses symbols (words, numbers, or concepts) to represent real-world objects and their relationships. For example: Is_A(Cat, Mammal).
- Readable processes (white box): unlike modern neural networks, which are “black boxes,” symbolic reasoning is completely transparent. You can see exactly which rule was applied to arrive at an answer.
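To make these features concrete, here is a minimal sketch of the symbolic approach: a tiny knowledge base of Is_A facts and a single hand-written transitivity rule. The facts and the `infer` helper are illustrative inventions, not taken from any real system, but they show the key point: no training data is needed, and every derived fact can be traced back to an explicit rule.

```python
# Minimal forward-chaining sketch of a symbolic knowledge base.
# The facts and the transitivity rule below are illustrative examples.

facts = {("is_a", "cat", "mammal"), ("is_a", "mammal", "animal")}

def infer(facts):
    """Apply the rule 'if X is_a Y and Y is_a Z, then X is_a Z' until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {("is_a", x, z)
               for (r1, x, y1) in derived if r1 == "is_a"
               for (r2, y2, z) in derived if r2 == "is_a" and y1 == y2}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

kb = infer(facts)
# Transparent "white box": the derived fact follows from one visible rule.
print(("is_a", "cat", "animal") in kb)  # → True
```

Note that the system "knows" cats are animals without ever seeing an example of one; the knowledge came top-down from the human-written rule.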
Connectionist Artificial Intelligence
If Symbolic AI is “logic,” Connectionist Artificial Intelligence is “biology.” This approach is inspired by how the human brain processes information. Instead of rigid, hand-programmed rules (like “if A, then B”), connectionism uses networks of interconnected simple units (called artificial neurons) that learn through experience and the adjustment of signals. It is the driving force behind what we know today as Deep Learning and neural networks.
Outstanding Features
Its most notable features include:
- Emergent behavior: intelligent behavior “emerges” from the interaction of many simple units, not from a centralized intelligence.
- Opaque processes (black box): unlike symbolic AI, it is very difficult to explain exactly why a neural network made a specific decision, because knowledge is “distributed” across millions of numerical weights.
- Noise tolerance (inaccuracy and imprecision): these networks excel at handling incomplete or imprecise data. If a connectionist network is missing a pixel from an image, it will probably still recognize the object; a symbolic system might fail for lack of an exact rule.
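The contrast with the symbolic approach can be sketched with the simplest connectionist unit, a perceptron. In this illustrative example (the training data, weights, and learning rate are all assumptions for demonstration), nobody writes a rule for logical AND; the behavior emerges from repeatedly adjusting two numerical weights based on examples:

```python
# Minimal perceptron sketch: behavior is learned from examples rather than
# hand-written rules. The training data (a logical AND) is illustrative.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights: the "distributed" knowledge lives here
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeated exposure to the examples nudges the weights toward correct behavior.
for _ in range(20):
    for x, target in samples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in samples])  # → [0, 0, 0, 1]
```

After training, the "knowledge" is just three numbers (`w[0]`, `w[1]`, `b`); inspecting them tells you far less about *why* a decision was made than reading a symbolic rule would, which is exactly the black-box trade-off described above.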

Historical milestones in Programming, Artificial Intelligence and Robotics from 1925 to 2025: 100 years of progress
Last century (1900s)
First 50 years
- 1842: The mathematician Ada Lovelace was the first to see the potential of computers beyond mathematics.
- 1921: Karel Čapek, a Czech playwright, premiered his science fiction play “Rossum's Universal Robots,” which explored the concept of artificial people he called robots, from the Czech “robota” (forced labor).
- 1939: The inventor and physicist John Vincent Atanasoff, along with Clifford Berry, built the first digital computing machine. This computer was not programmable, but it could solve up to 29 linear equations simultaneously, earning Atanasoff the title of Father of the Computer.
- 1943: Warren McCulloch and Walter Pitts present their model of artificial neurons, considered the first work in Artificial Intelligence.
- 1949: Donald Hebb publishes “The Organization of Behavior,” which served as the basis for learning algorithms in artificial neural networks (Nebrija University, n.d.).
- 1950: The English mathematician and computer science pioneer Alan Turing posed the question “Can machines think?” In his article “Computing Machinery and Intelligence,” Turing designed what is known as the Turing Test, or imitation game, to determine whether a machine is capable of thinking.
Last 50 years
- 1956: John McCarthy, a professor at Dartmouth College, organized a summer workshop to clarify and develop ideas about thinking machines, choosing the name “Artificial Intelligence” for the field. The Dartmouth conference, widely considered the founding moment of Artificial Intelligence (AI) as a field of research, aimed to find “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
- 1961: General Motors installs the first industrial robot to replace humans in assembly tasks.
- 1964: Joseph Weizenbaum develops the first natural language processing computer program, which simulates conversation with humans.
- 1965: DENDRAL, the first expert system, focused on molecular chemical analysis, is released.
- 1966: ELIZA, the first chatbot capable of simulating a human conversation, is unveiled; and Shakey, the first general-purpose mobile robot capable of reasoning about its own actions and planning tasks, begins development.
- 1967: The Mark I Perceptron machine is built, the evolution of Rosenblatt's neural network onto specialized hardware.
- 1968: Mac Hack becomes the first chess program to reach a Class C competitive level.
- 1969: Marvin Minsky and Seymour Papert publish the book “Perceptrons,” a mathematical critique that halted research on neural networks for years.
- 1972: WABOT-1, the world's first humanoid robot, is built at Waseda University.
- 1972: The MYCIN software is developed, an expert system for diagnosing blood infections and recommending treatment.
- 1974: The first AI winter begins, a period of drastic reduction in funding and interest following the Lighthill Report.
- 1976: PROLOG, a programming language based on formal logic, is developed.
- 1980: XCON / R1, the first large-scale commercial expert system, is put into use by DEC.
- 1981: Japan launches the Fifth Generation project, a massive effort to lead AI through parallel computing.
- 1982: Hopfield networks introduce neural networks with associative memory based on physics.
- 1984: The CYC Project begins, aiming to create a universal common-sense knowledge base.
- 1984: WABOT-2, a robot capable of reading sheet music and playing the electronic organ, is built.
- 1985: AARON, an autonomous artistic painting system, is demonstrated at the AAAI conference.
- 1986: Backpropagation, the fundamental algorithm for training deep networks, is popularized.
- 1986: Autonomous vehicles: Ernst Dickmanns demonstrates the first robotic car capable of traveling at 55 mph.
- 1987: The second AI winter begins, marked by a new crisis of confidence and a collapse of the market for specialized AI hardware.
- 1992: TD-Gammon, a backgammon program, reaches expert level through reinforcement learning.
- 1997: IBM's Deep Blue defeats world chess champion Garry Kasparov in a historic match.
- 1997: NaturallySpeaking, the first commercial continuous speech recognition software, is released.
- 1998: The robot Kismet becomes a pioneer in human-robot emotional interaction.
- 1999: AIBO, the first consumer pet robot with learning capabilities, is launched.

Current century (2000s)
First decade
- 2000: Kismet, a robotic head capable of recognizing and recreating human emotions and social cues, is presented. It was an experiment in social robotics and affective computing, featuring input devices that mimic human visual, auditory, and kinesthetic abilities.
- 2002: Roomba, the first mass-produced robotic vacuum cleaner, sold by iRobot and capable of navigating and cleaning the home, is launched on the market.
- 2004: The DARPA Grand Challenge, the first major competition for autonomous vehicles in desert terrain, is held.
- 2006: The Netflix Prize is announced, a competition that boosted recommendation algorithms based on massive data.
Second decade
- 2011: Apple integrates Siri, a virtual assistant with a voice interface, into its iPhone 4S. IBM's Watson system wins first prize on the popular American television quiz show Jeopardy!, and Google Brain, a massive neural network project that would soon recognize cats on YouTube without supervision, is launched.
- 2013: DeepMind's Atari system is developed, an AI capable of mastering classic games through deep reinforcement learning.
- 2014: The chatbot Eugene Goostman is claimed to have passed the Turing Test by convincing one-third of the judges in the experiment that it was a human being, although the result remains disputed.
- 2015: Open technologies such as TensorFlow (and, shortly after, PyTorch) are introduced, beginning the democratization of AI through open source libraries for massive research.
- 2016: AlphaGo, a program based on deep neural networks, defeats world Go champion Lee Sedol in a five-game match, demonstrating artificial strategic intuition.
- 2016: Sophia, a humanoid robot capable of gesturing and holding simple conversations, is unveiled.
- 2017: The “Transformer” is introduced, an attention-based architecture that enabled massive language processing.
- 2017: AlphaZero is developed, an AI that learned chess, shogi, and Go in hours, surpassing specialized programs.
- 2018: BERT, a Google language model that revolutionized context understanding, is released.
- 2019: GPT-2, the first major generative text model with fluent writing capabilities, is unveiled.
- 2019: AlphaStar is revealed, an AI that reached Grandmaster level in StarCraft II while handling imperfect information.
Last 5 years
- 2020: GPT-3, the first massive language model capable of generating text nearly indistinguishable from human writing, is announced.
- 2021: AlphaFold 2, software focused on solving protein folding, is announced, dramatically accelerating research in medicine and structural biology.
- 2021: DALL-E, an AI that pioneered the generation of artistic images from natural language, is released to the public.
- 2021: GitHub Copilot is released, an AI integrated into software development that automatically writes code.
- 2022: Stable Diffusion, an AI that began the democratization of image generation through open source, is released to the public.
- 2022: ChatGPT is launched, popularizing Generative AI globally across all sectors.
- 2023: GPT-4, the first multimodal model with advanced reasoning and extended context memory, is announced.
- 2023: The company Meta releases Llama, an AI whose launch kicks off the race for powerful open-weight language models.
- 2023: Gemini 1.0 Ultra is unveiled, the first model claimed to outperform human experts on massive multimodal benchmarks.
- 2023: Silent Eight is deployed, a massive AI project for financial compliance and crime detection at scale.
- 2024: Sora, an OpenAI model capable of generating realistic videos of up to one minute from text, is released to the public.
- 2024: The “EU AI Act” is approved, the world's first comprehensive legislation to regulate the development and use of AI.
- 2024: Stability AI unveils StableLM 2, an improved open large language model, and Google releases Gemini 1.5 in limited beta.
- 2024: The OpenAI o3 model is announced, an advanced model with deep reasoning and logical inference capabilities.
- 2025: OpenAI releases GPT-5, and Google launches Gemini 2.0 and Nano Banana.
- 2025: The DeepSeek open AI model is unveiled, paving the way for other powerful and low-cost open models.
- 2025: AI PC Gen 2 emerges, marking the consolidation of computers with NPU chips to run advanced local AI.
- 2025: The era known as “Agent Builders” begins, characterized by access to standardized platforms for creating autonomous agents capable of executing business processes.

Summary
In short, we hope that this second part of the exciting and surprising field of Artificial Intelligence, with emphasis on Generative Artificial Intelligence (GenAI), has contributed a little more to strengthening and increasing your knowledge and skills in this modern and innovative field, allowing you to gradually catch up with those who are already more advanced, both in information and training, that is, in its actual use. Soon, we will continue with more posts in this series so that many more users and passionate readers of the Linux universe can learn more about AI in general, and about Generative AI in particular.
Lastly, remember to visit our «homepage» in Spanish, or in any other language (just add 2 letters to the end of our current URL, for example: ar, de, en, fr, ja, pt, and ru, among many others) to find more current content. Additionally, we invite you to join our official Telegram channel to read and share more news, guides, and tutorials from our website.