One of the most common questions about artificial intelligence (AI) concerns the limits it might reach, or the threshold it might cross, if it were ever to awaken and act on its own. It is natural for us, the inhabitants of this planet, to ask the question: if we humans, supposedly thinking beings in control of our actions, often cannot control ourselves and sometimes act on impulse, could we really hope in the future to dominate an artificial intelligence that acted at its own discretion?
Could AI algorithms replace human cognitive capacity?
Among the capacities we humans possess are our senses, and we often trust a certain intuition that comes with them. But could that intuition let us anticipate what a machine would do if it were able to act on its own, without human direction? A recent example bearing on this question comes from an experiment carried out in the United States in mid-2016.
The experiment put on the road a car that drove itself, but unlike other self-driving vehicles, this one did not follow instructions from any programmer or engineer. Instead, it was controlled by an AI algorithm that had learned to drive by observing a human: the learning data was collected by a series of sensors and fed into a network of artificial neurons.
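The approach described above, in which a model learns to imitate recorded human behaviour from sensor data, is commonly called behavioral cloning. The following is a minimal sketch of the idea under simplifying assumptions: the data is synthetic, the "network" is reduced to a single linear neuron, and all names and numbers are illustrative rather than taken from the actual experiment.

```python
# Behavioral cloning sketch: sensor readings are the inputs, the human
# driver's recorded steering is the label, and a model learns the mapping.
# All data here is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor readings: 1000 time steps, 4 sensors each.
X = rng.normal(size=(1000, 4))

# Hypothetical human driving behaviour: steering is some fixed function
# of the sensor readings (unknown to the learner), plus a little noise.
true_w = np.array([0.5, -1.2, 0.3, 0.8])
y = X @ true_w + rng.normal(scale=0.01, size=1000)

# A single artificial "neuron": linear weights trained by gradient
# descent on mean squared error, to imitate the recorded behaviour.
w = np.zeros(4)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
    w -= 0.1 * grad

# After training, the learned weights approximate the human's policy.
print(np.allclose(w, true_w, atol=0.05))  # → True
```

The point of the sketch is that the engineers never write driving rules; the behaviour emerges from the data, which is also why explaining any individual decision afterwards is difficult.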
The one problem with this AI algorithm, however, is that it makes decisions on its own, and could, for example, crash into an obstacle or fail to stop at a traffic light. The complexity of the system leaves the engineers who designed it facing a dilemma: they cannot explain why it does what it does.
Will AI be positive or negative for humanity?
In the end, deep learning technology has so far proved very positive at solving important problems: diagnosing deadly diseases, transforming industries, and guiding multimillion-dollar stock market decisions. Even so, in some cases it would be unwise to place total confidence in AI; a better path might be to seek a solution such as incorporating social intelligence into AI systems.
If society is built on acceptable behaviour, it would be necessary to create AI systems that conform to social norms. For example, if robotic war tanks are to be created to kill, let them make decisions according to our ethical and moral judgments, but always with caution. As Daniel Dennett put it: "If it can't do better than us at explaining what it's doing, then don't trust it."