“The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.” — Stephen Hawking

Intelligent machines are arguably one of the most important inventions not only of this century but of the whole history of humankind. Why so much drama? Because AI combines several powerful modern technologies, and we still cannot be sure how they will unfold in the future.

Not to mention, we are already quite dependent on AI in our everyday life. It calculates our routes in Google Maps, creates our playlists on Spotify, and curates our news feed on Facebook.

So, what is AI? What should we know about its main concepts? What is weak AI vs strong AI? And of course, we will touch on some moral and ethical aspects as well. Ready? Let’s go!


Artificial Intelligence is the capacity of a computer-enabled system to perform cognitive tasks and react flexibly to a changing environment in order to achieve a certain goal or set of goals. Such a system can obtain data and learn from experience, sometimes mimicking human behavior.


Did you know that we normally make around 20,000 decisions a day? Fortunately, most of them are unconscious. Based on our past experience (data gathered throughout life), our mind takes milliseconds to come up with a quick solution to mundane challenges. But what comes naturally to humans is very difficult for machines.

Since the 1950s, AI researchers have made significant progress, enabling complicated computer models by borrowing a great deal of knowledge from neuroscientists who study how the human brain works.

Yes, we are talking about neural networks that gave us self-learning machines.


It’s about applications making data-driven decisions. There is an important difference between supervised and unsupervised learning. In supervised learning, the system is trained on pairs of inputs and known outputs; once it has seen enough data, it can make its own connections and generalize to new inputs. In unsupervised learning, there are no labeled outputs at all: the application creates its own classifiers and its own model of the data.
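The difference between the two styles can be sketched in a few lines of plain Python. This is a deliberately tiny illustration, not a real machine-learning system: the data, labels, and splitting rule are all invented for the example.

```python
# Supervised learning: each training point comes with a known label,
# and the program learns to predict labels for new points.
train = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.5, "dog")]

def predict(x):
    """1-nearest-neighbour: copy the label of the closest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

print(predict(1.1))  # -> cat

# Unsupervised learning: no labels at all; the program invents its own
# groups, here by splitting the data at the largest gap between values.
data = sorted([1.0, 1.2, 8.0, 8.5])
gaps = [data[i + 1] - data[i] for i in range(len(data) - 1)]
split = gaps.index(max(gaps)) + 1
clusters = [data[:split], data[split:]]
print(clusters)  # -> [[1.0, 1.2], [8.0, 8.5]]
```

In the supervised half the “right answers” (cat/dog) are supplied by a human; in the unsupervised half the two clusters emerge purely from the shape of the data.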


Deep learning is a branch of machine learning, sometimes called the supreme discipline of AI. It’s about analyzing great amounts of data for patterns and trends, and it is now actively used in speech, face, and object recognition. Artificial neural networks are able to learn on their own and to link what they have learned with new content again and again, much like the biological neural networks of the human brain. As opposed to classical machine learning with its fixed, hand-designed models, deep learning develops its models independently through its own algorithms, with far less human involvement.
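To make the idea of a self-adjusting network less abstract, here is a minimal sketch: a single artificial neuron learning the logical OR function with the classic perceptron update rule. Real deep-learning systems stack millions of such units in many layers, but the core loop — predict, measure the error, nudge the weights — is the same.

```python
# Training data: inputs and the desired OR output for each.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1  # lr = learning rate

def neuron(x):
    """Fire (return 1) when the weighted input sum crosses the threshold."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(10):  # a few passes over the data are enough here
    for x, target in samples:
        error = target - neuron(x)       # how wrong was the neuron?
        weights[0] += lr * error * x[0]  # nudge each weight toward
        weights[1] += lr * error * x[1]  # the correct answer
        bias += lr * error

print([neuron(x) for x, _ in samples])  # -> [0, 1, 1, 1]
```

Nobody tells the neuron what its weights should be; they emerge from repeated exposure to examples, which is the essence of “learning” here.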


It’s about applications that can understand images or videos. A common example is Face ID, which many iPhone users rely on dozens of times a day. Face recognition also helps to tag our friends on Facebook or even identify crime suspects. In med tech, such algorithms are doing wonders: analyzing X-rays, comparing them with huge amounts of patient data, and providing doctors with diagnostic suggestions.
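Modern face recognition relies on deep neural networks, but at the lowest level computer vision always treats an image as a grid of numbers. This toy sketch (the image and pattern are invented for illustration) finds a 2×2 pattern inside a 4×4 black-and-white “image” by brute-force sliding comparison, a simplified cousin of template matching:

```python
# 0 = black pixel, 1 = white pixel: a 4x4 "image" as a grid of numbers.
image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
pattern = [[1, 1],
           [1, 0]]

def find(image, pattern):
    """Slide the pattern over the image; return (row, col) of the first match."""
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(image) - ph + 1):
        for c in range(len(image[0]) - pw + 1):
            if all(image[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                return (r, c)
    return None

print(find(image, pattern))  # -> (1, 1)
```

Real systems replace exact pixel matching with learned features that tolerate lighting, rotation, and scale, but the “numbers in, location out” principle is the same.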


It’s about how computers process and interpret human language. Have you ever talked to Siri or Google Assistant? Congrats! You’re an NLP user.
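Assistants like Siri rest on large statistical models, but the first NLP step is always the same: turning raw text into tokens a program can compare. The toy “assistant” below matches a question against a few hand-written intents by counting shared words — a miniature bag-of-words similarity. The intent names and phrases are invented for the example.

```python
# Each intent is described by one example phrase (invented for illustration).
intents = {
    "weather": "what is the weather forecast today",
    "music":   "play some music for me",
    "timer":   "set a timer please",
}

def tokenize(text):
    """Lowercase the text, drop question marks, split into a set of words."""
    return set(text.lower().replace("?", "").split())

def classify(question):
    """Pick the intent whose example shares the most words with the question."""
    q = tokenize(question)
    return max(intents, key=lambda name: len(q & tokenize(intents[name])))

print(classify("What is the weather like?"))  # -> weather
print(classify("Please play my music"))       # -> music
```

Production systems use learned word representations rather than raw overlap, but this captures the basic move: text becomes structured data, and language understanding becomes a matching problem.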


WEAK (aka narrow) AI is focused on one specific goal or application area, within which it is able to self-optimize and learn. All currently existing AI systems are weak: Google Maps is only responsible for navigation, Siri and Alexa deal with speech recognition, and so on.

STRONG (aka general) AI is basically the superintelligence of sci-fi Hollywood dramas. It would know everything and everyone, never stop learning, and be capable of making fast, independent decisions not only reactively but of its own accord.


Artificial intelligence opens opportunities that were once unimaginable for humankind, enabling fantastic advances in almost every sphere of our life: medicine, education, logistics, and industry, to mention just a few. But it also carries risks, or even dangers. Machine learning depends heavily on the quantity and quality of its input data, which raises concerns about how we collect and process information in general. Falsification or manipulation of data is a real problem for our society.

A good illustration of that problem is Microsoft’s experimental AI chatbot, which became both racist and anti-feminist after being fed Twitter conversations for just one day.
Another example is predictive-policing AI tested by American police departments, which directed officers to disproportionately patrol African-American neighborhoods after consuming “dirty data” from decades of unjust police practices.

Another challenging aspect of AI is machine ethics. Given the unprecedented pace of new technology, machines making life-and-death decisions are only a matter of time. Just imagine a self-driving car deciding who gets to live: the kid who suddenly runs in front of it on a red light, or the two senior citizens in the vehicle who would die if the car swerved.

But probably the greatest concern is the Singularity: a hypothetical point of technological growth at which AI becomes uncontrollable and irreversible in its actions, resulting in unforeseeable changes to civilization as we know it.

At the current state of AI, we are far from real danger, but the question remains: how can we know that strong AI will be good to us?