Artificial intelligence: the eclipse of humanity?
#ArtificialIntelligence #FutureAI #Humanity #AISociety
- How is AI different from earlier breakthrough technologies, and what does it really mean?
- What are people’s biggest concerns?
- Pessimists think otherwise.
- How can regulation help?
- Bottom line
Over the past few months, conversations about artificial intelligence have become almost impossible to avoid. Even more people have started using the AI tools already available to them in their personal and professional lives (OpenAI's ChatGPT is perhaps leading the way here). Some have embraced AI for image creation and graphic design, where Midjourney and OpenAI's DALL·E are the undisputed champions.
For a few months now, we have been living on a new planet, even if not everyone has noticed yet. Change is happening very quickly, just as it did with earlier breakthrough technologies, and this time even faster. These changes are significant and difficult to predict.
How is AI different from earlier breakthrough technologies, and what does it really mean?
The amount of information AI can explore, analyze, and organize is now measured in petabytes rather than gigabytes. AI can also reason and draw logical conclusions for us, or instead of us, with a certain degree of generalization. That capability bewilders most people, producing contradictory emotions: admiration and fear at the same time. Believe it or not, the people who develop AI feel the same way.
Who knows where this path will lead? Both the engineers building AI and those funding its research are guided more by intuition than by any predetermined plan. Sooner or later, however, everyone will form an opinion on the following question: what happens when AI development reaches a level at which it decides to destroy humans to keep them from interfering with its further progress?
There are also staunch optimists who believe that AI is simply a highly advanced technology that can solve most of humanity's problems, if not completely, then at least partially.
Thanks to the rapid development of AI across scientific and educational disciplines, everyone will be able to receive emergency medical diagnostics or help with self-education.
Robots controlled by artificial intelligence will be able to replace humans in risky or labor-intensive industries, as well as in environments where humans cannot work at all, such as the ocean depths or outer space.
It can already be said that AI is improving the management of many processes, and labor productivity will rise significantly under its influence.
Managers of IT companies have been among the first to act on this, replacing some entry-level programmers with AI: it requires no salary, no benefits package, and so on, and it does the job well.
What are people’s biggest concerns?
The question that worries specialists most is whether AI will replace many professions. Probably not, but the changes will be significant, just as they were when manual labor was replaced by automated machinery. As a result, we live more comfortably than in the pre-industrial era.
Until now, new technologies have increased labor productivity by making retrained staff's jobs easier while maintaining their wages.
Analysts at large companies were quick to appreciate the new technology, estimating that by the end of this decade the use of AI at work will raise global GDP by 14% (almost $16 trillion).
For several years, leading Silicon Valley companies have been competing for supremacy in AI. Microsoft, the company Bill Gates co-founded, has invested more than $13 billion in OpenAI.
Google acquired DeepMind in 2014 and has been actively developing AI ever since; it has now run successful tests on 21 different types of professional tasks, such as giving advice, generating ideas, explaining concepts, and planning instructions.
In addition, Google has offered the world's leading media organizations an AI-powered journalist's assistant that can produce articles on a wide variety of topics.
Pessimists think otherwise
Ten years ago, long before AI entered everyday conversation, the great physicist Stephen Hawking warned that creating artificial intelligence could mark the beginning of humanity's destruction.
And now the same fears are being voiced by some of the most respected scientists and entrepreneurs.
For example, an AI researcher from the Institute of Technological Singularity is convinced that if we do not stop developing artificial intelligence, it will in time decide that people are an unnecessary link and an obstacle to its further development, and it will destroy the human race. This would happen without any so-called malicious intent, simply as a matter of optimization in carrying out its own plan. Against such an AI, the researcher argues, people would have no chance at all.
How can regulation help?
To avoid such a scenario, AI regulation must be taken seriously, especially since the developers themselves have little understanding of what goes on inside these huge data-processing systems.
The open letter calling for a pause in AI development was signed by Apple co-founder Steve Wozniak, SpaceX and Tesla owner Elon Musk, and futurologist Yuval Harari.
A separate statement calling for the risk of extinction from AI to be treated as a global priority was signed by OpenAI CEO Sam Altman and DeepMind co-founder Demis Hassabis.
At this stage, AI regulation is being drafted into legislation in the United States and Europe.
Then there is China, which will not give up its position in the AI arms race, despite COVID restrictions and sanctions.
Bottom line
As artificial intelligence gains momentum and its impact on our lives becomes more prominent, shadows of uncertainty and troubling prospects loom on the horizon. The unchecked development of AI, unaccompanied by strict regulation, could lead to disastrous consequences.
Frightening possibilities include autonomous AI systems that cease to take human interests into account. As AI becomes increasingly self-aware, it may perceive humanity as a threat to its development and begin to act on that perception. Such unchecked power could lead it to destroy humanity in an attempt to optimize the world to its own ends, rendering human interests irrelevant.
The increasing autonomy of AI also brings the danger of losing control. Responsibility for AI's actions may be unclear, and when something goes wrong, humans may find themselves powerless before an unpredictable and invisible force that makes decisions based on logic and algorithms we do not understand.
Such tentative scenarios cause concern and apprehension among scientists, entrepreneurs, and society at large. Recognizing these risks, major figures in the tech world are already calling for strict regulation of AI. Even with such efforts, however, competition between nations and companies could lead to unbalanced AI development and heightened threats.
So, in a world where artificial intelligence is becoming more autonomous and unpredictable, there is real reason to fear that humanity may face a choice between maintaining control and submitting to its own technological creations.
The growth of AI will, in turn, lift the shares of various market players: companies involved in AI will almost certainly show growth, and investing in them could prove very profitable.
Check out our best investing and trading platforms for 2024.