There are many ongoing AI discoveries and developments, and most of them can be grouped
into distinct types. These classifications reveal more of a storyline than a taxonomy,
one that tells us how far AI has come, where it’s going and what the future holds.
Here are the seven types of AI to know, and what we can expect from the technology.
1. Narrow AI
Narrow AI, also known as artificial narrow intelligence (ANI) or weak AI, describes AI
tools designed to carry out very specific actions or commands. ANI technologies are
built to excel at a single cognitive capability and cannot independently learn skills
beyond their design. They often rely on machine learning and neural network algorithms to
complete these specified tasks.
For instance, natural language processing is a type of narrow AI because it can recognize
and respond to voice commands but cannot perform tasks beyond that.
Some examples of narrow AI include image recognition software, self-driving
cars and AI virtual assistants.
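To make the single-task idea concrete, here is a minimal, hypothetical Python sketch: a small scikit-learn classifier trained to do exactly one thing, label short reviews as positive or negative. The tiny dataset is invented purely for illustration and is not taken from any real system.

# Hypothetical sketch of a narrow AI system: a classifier built for exactly
# one task (labeling short reviews as positive or negative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works perfectly",
    "terrible, broke after a day",
    "love it, highly recommend",
    "complete waste of money",
    "exceeded my expectations",
    "very disappointed with this",
]
labels = ["positive", "negative", "positive", "negative", "positive", "negative"]

# The pipeline learns one narrow mapping: review text -> sentiment label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# It excels only at this task; anything outside it is beyond the model's design.
print(model.predict(["absolutely fantastic"]))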
2. Artificial General Intelligence
Artificial general intelligence (AGI), also called strong AI, describes AI that could
learn, reason and perform intellectual tasks across many domains the way a human can.
Though still a work in progress, the groundwork for artificial general intelligence could
be built from technologies such as supercomputers, quantum hardware and generative
AI models like ChatGPT.
3. Artificial Superintelligence
Artificial superintelligence (ASI), or super AI, is the stuff of science fiction. It’s theorized
that once AI reaches the general intelligence level, it will soon learn at such a fast
rate that its knowledge and capabilities will surpass even those of humankind.
ASI would act as the backbone technology of completely self-aware AI and other
individualistic robots. The concept also fuels the popular media trope of “AI
takeovers.” But at this point, it’s all speculation.
“Artificial superintelligence will become by far the most capable forms of intelligence on
earth,” said Dave Rogenmoser, CEO of AI writing company Jasper. “It will have the
intelligence of human beings and will be exceedingly better at everything that we do.”
4. Reactive Machine AI
Reactive machines are just that: reactive. They can respond to immediate requests
and tasks, but they aren’t capable of storing memory, learning from past experiences or
improving their functionality over time. Reactive machines can also only respond to a
limited combination of inputs. They are the most fundamental type of AI.
In practice, reactive machines are useful for performing basic autonomous functions,
such as filtering spam from your email inbox or recommending items based on your
shopping history. But beyond that, reactive AI can’t build upon previous knowledge or
perform more complex tasks.
• IBM Deep Blue: IBM’s reactive AI machine Deep Blue was able to read real-
time cues in order to beat Russian chess grandmaster Garry Kasparov in a 1997
chess match.
• Netflix Recommendation Engine: Media platforms like Netflix often utilize
AI-powered recommendation engines, which process data from a user’s watch
history to determine and suggest what they would be most likely to watch next.
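As a rough illustration of this memoryless behavior, the hypothetical Python sketch below judges each email purely on its current contents: nothing is stored between calls and nothing is learned from past messages. The keyword list is a made-up stand-in for whatever rules a real filter would use.

# Hypothetical sketch of a reactive system: each message is evaluated on its
# own, with no stored memory and no learning from previous inputs.
SPAM_SIGNALS = {"free money", "act now", "you are a winner", "click here"}

def is_spam(subject: str, body: str) -> bool:
    """Decide based on the current message only; nothing is remembered afterwards."""
    text = f"{subject} {body}".lower()
    return any(signal in text for signal in SPAM_SIGNALS)

print(is_spam("You are a winner", "Click here to claim your prize"))  # True
print(is_spam("Meeting notes", "Agenda attached for tomorrow"))       # False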
5. Limited Memory AI
Limited memory AI can store past data and use that data to make predictions. This
means it actively builds its own limited, short-term knowledge base and performs tasks
based on that knowledge.
The core of limited memory AI is deep learning, which imitates the function of neurons
in the human brain. This allows a machine to absorb data from experiences and “learn”
from them, helping it improve the accuracy of its actions over time.
Today, the limited memory model represents the majority of AI applications. It can be
applied in a broad range of scenarios, from smaller scale applications, such as chatbots,
to self-driving cars and other advanced use cases.
• Chatbots and Virtual Assistants: Chatbots and virtual assistants are forms of
limited memory AI that use deep learning to mimic human conversation. As
users interact with these systems, the systems learn from that data and remember
details about the user, allowing them to provide relevant, personalized
responses.
• Self-Driving Cars: Self-driving cars continually observe and process
environmental data around them as they travel on the road. This helps them
predict when they need to turn, stop or avoid an obstacle.
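To sketch the idea in code, here is a hypothetical Python example: a predictor that keeps only a short, bounded window of past sensor readings and bases its next estimate on them. The readings are invented, and a real self-driving system would use deep learning rather than a simple average; the point is only the short-term memory.

# Hypothetical sketch of limited memory AI: only the most recent observations
# are retained, and predictions come from that short-term knowledge base.
from collections import deque

class LimitedMemoryPredictor:
    def __init__(self, window: int = 3):
        # A bounded buffer: old observations fall out as new ones arrive.
        self.memory = deque(maxlen=window)

    def observe(self, value: float) -> None:
        self.memory.append(value)

    def predict_next(self) -> float:
        # Use only what is currently remembered to estimate the next value.
        return sum(self.memory) / len(self.memory)

predictor = LimitedMemoryPredictor(window=3)
for distance in [42.0, 40.5, 39.1, 37.8]:  # e.g., distance to the car ahead, in meters
    predictor.observe(distance)
print(predictor.predict_next())  # average of the last three readings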
6. Theory of Mind AI
Theory of mind refers to the concept of AI that can perceive and pick up on
the emotions of others. The term is borrowed from psychology, describing humans’
ability to read the emotions of others and predict future actions based on that
information. Theory of mind hasn’t been fully realized yet, and stands as the next
substantial milestone in AI’s development.
Theory of mind could bring plenty of positive changes to the tech world, but it also poses
its own risks. Since emotional cues are so nuanced, it would take a long time for AI
machines to perfect reading them, and they could make serious errors while still in the
learning stage. Some people also fear that once technologies can respond to
emotional signals as well as situational ones, the result could be the automation of some
jobs.
7. Self-Aware AI
Self-aware AI describes artificial intelligence that possesses self-awareness. Sometimes
referred to as the point of singularity, self-aware AI is the stage beyond theory of mind and
one of the ultimate goals of AI development. It’s thought that once self-aware AI is
reached, AI machines will be beyond our control, because they’ll not only be able to
sense the feelings of others, but will have a sense of self as well.
Self-Aware AI Example
• Perhaps the most famous example is Sophia, a robot developed by robotics
company Hanson Robotics. While not technically self-aware, Sophia’s advanced
application of current AI technologies offers a glimpse of AI’s potentially
self-aware future. It’s a future of promise as well as danger, and there’s debate
about whether it’s ethical to build sentient AI at all.