
In his book Superintelligence, Nick Bostrom gives an almost complete overview of Artificial Intelligence (AI). He starts with a concise history of the field, walks us through the different forms of AI and all the ways we could possibly achieve it, and then takes us to what is, in my opinion, the most important part of his book: the (inevitable) dangers of such superintelligent systems and some possible methods of avoiding them.

First of all, I want to start by stating that this book is closer to a textbook than to a popular science book. It is not technical by any means, but it wasn’t a light read either. Even with my background knowledge in the subject and my major in engineering, I sometimes found the book very detailed and difficult to read. This is not necessarily a bad thing, because the ideas were still worthwhile, but it is worth mentioning.

As I’m sure most readers did, I found the second part of the book, where he talks about the danger that AI will achieve total dominance over our lives and might even bring our species to extinction, the most interesting part. It makes up the bulk of the book. The growing concern that the machines’ goals won’t necessarily align with ours is, in this case, legitimate. I had always thought movie scenarios where robots turn evil and come for us just because they can were nothing more than Sci-Fi.

The author gives two theses regarding the possible evolution of motives in superintelligent systems. The first is the orthogonality thesis, which states that superintelligence could be combined with more or less any final goal. These systems don’t necessarily have to be wise or benevolent, nor even malevolent; their final goal could be something as mundane as producing a mathematical proof. The second is instrumental convergence, according to which such systems would pursue intermediate goals, like acquiring every available resource or preserving themselves, in order to achieve their final goals. He worries that a self-aware and self-improving system will go to extreme lengths to achieve its goals. The battle for resources could bring us to extinction before we even know it.

This is why the problem with AI isn’t only “their malevolent motives”. We might, for example, simply end up having conflicting interests. History has proven that we, Homo Sapiens, haven’t been very cooperative when it comes to cohabitation even with each other, let alone with another, smarter species. Evolutionary theory suggests that the stronger species survives, often at the expense of the similar but weaker one. The concern here is therefore not malevolence, but competence.

Bostrom gives the scary (and honestly quite dystopian, in a “Black Mirror” kind of way) example of a perfectionist superintelligent system tasked with producing “more paperclips”. Imagine how it could destroy all of us just to harvest resources for making more paperclips. The author calls this failure mode “Infrastructure Profusion”. He also describes other modes of failure, like Perverse Instantiation, where the system satisfies the letter of our orders while violating their intent, and Mind Crime, where it would, for example, create conscious simulations of human brains and run tests on them.

As someone who feeds off dystopian movies, series, and books, I found this part fascinating. My only problem with it was that it felt too dystopian, in my honest opinion. The examples he mentioned, even for the pure sake of explanation, were sometimes so exaggerated that they made the idea a bit unbelievable.
But then again, maybe my mind isn’t ready to apprehend such complex concepts. He assumes that superintelligent systems will be capable of thinking in so many more ways than Homo Sapiens that they will be able to surpass us, but he easily ignores the fact that, precisely because they will think differently, we cannot predict their behavior or how they will develop their motives and goals, especially if we are talking about autonomous AI systems. I could even follow the same reasoning he did and apply it to Homo Sapiens: even though a lot of people are in some way “malevolent” or even take a nihilistic approach to life, that doesn’t mean we are all in the same batch, all bad and dangerous.

The assumption he discusses in Chapter 12, that it is going to be hard to implement ethical systems in a superintelligence, also seems more or less unwarranted. I know the lack of embodiment and sensitivity to the environment would be an issue, but if we artificially achieve something as complicated as intelligence, would implementing a moral system in it really be that hard? If we get it to operate in our world, it must be able to comprehend our morality, so why would it be so difficult for it to follow it, given that it is not automatically malevolent? It’s too soon to have concrete ideas, but there have to be some. We still have time to figure out a way. In the meantime, we can’t just ditch a promising concept like AI just because we have some fears that may or may not turn out to be reasonable.

But should we actually be worried about this already? How plausible is it that we will achieve superintelligent systems in the foreseeable future, if at all? According to a survey conducted by MIT, 92.5% of AI researchers think superintelligence won’t arrive within the foreseeable horizon (more than 25 years away), and 25% of them even think it will never be feasible. In the 1960s, scientists thought that AI would overtake human intelligence by 2000. It didn’t happen. Specific machines can surpass human abilities in some fields, such as playing chess or solving complex equations, but they are still dependent on human orders and perform poorly outside their “field of expertise”. The singularity theory, the idea that a sudden explosion of AI will result in drastic changes to human civilization, was set to hit us by 2030; scientists, however, no longer believe that.

The reason I mention these examples is to show that I do not really know whether we will achieve superintelligence in the future. As an engineer, my instinctive answer would be yes, since scientific achievements have been surpassing their challenges in quite an impressive way over the past decade. Machine Learning methods, for example, are very promising nowadays. I am very enthusiastic about the potential of AI and the many ways it could help improve our quality of life, and that is why I would like to believe that AI is achievable. However, there are many challenges to achieving cognitive behavior that first need to be overcome. Even now, scientists and philosophers have yet to fully apprehend human consciousness and cognitive abilities. Maybe we will never find out what the essence of consciousness is, let alone be able to summarize it in a few lines of code.

From Nick Bostrom to Elon Musk, everyone is expressing their concern about AI. The ever-growing discussion around this technology is captivating, to say the least. Whether it is something we need to welcome or fear, we cannot know for sure.
Both scenarios are possible; in my opinion it is too early to judge. We need thorough safety research first to be able to make a rational decision. Right now, I can only say that we need to embrace the inevitable technological progress in all its forms, not only because it is helping us move forward in our cosmic conquest, but also because it is happening whether we like it or not. No matter how dangerous it might seem, it doesn’t look like it’s going to slow down anytime soon.