It has finally become impossible to deny the obvious: artificial intelligence has entered our everyday life, or rather has gradually begun entering our work, steadily improving, and doing so earlier and faster than expected, faster than the arrival of the so widely predicted robotic employees.



What the development forecasts failed to take into account were the many nuances that only began to manifest themselves once the systems started processing real data arrays. That is, facts about artificial intelligence that were initially completely implicit became known. You can only compare this with spaceflight or a descent into the Mariana Trench: you can assume what will be there and how the technology and materials will behave, but the decisive criterion is practice, and it often turns out to be shocking.

For example, no one assumed, on the basis of scientific facts (excluding, of course, the fantasies of writers), how densely populated the Mariana Trench or the subglacial Lake Vostok would turn out to be, or that in zero gravity in Earth orbit such unpredictable phenomena as the Dzhanibekov effect would appear.


With artificial intelligence it is even more interesting. The basis of its functioning rests, of course, on the laws of our physical reality, which makes it possible to "settle" it in computing systems. And then the most interesting part begins: during training, processes occur that are easy enough to predict in outline, yet the predictions often disagree with the decisions the artificial intelligence actually makes.

Just recall the hysteria around Facebook, when the artificial intelligence controlling its chatbots invented its own language that made communication easier and faster; the reaction was to shut the experiment down. Google, by contrast, took advantage of the same unexpected effect and used it to improve its online text translation system: an internal language invented by its artificial intelligence now allows text to be translated not literally, word by word, but in much larger blocks.

In other words, decisions that today require complex calculations and a certain emotionlessness are natural territory for artificial intelligence, with its gift for finding implicit relationships and learning from correct answers in a data array. Examples include deciding whether to issue a loan as the volume of data for an informed decision grows, with constant training on feedback (a person with these parameters took a loan, and this is how they repaid it); training a system to predict the weather; processing X-ray images; and selecting the optimal pharmaceutical treatment regimen in oncology.
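The loan example above is a classic supervised-learning loop: each repayment outcome is a "correct answer" that nudges the model's parameters. A minimal sketch of that idea, using a tiny logistic-regression model trained by gradient descent on made-up feature data (the feature names, numbers, and learning rate here are illustrative assumptions, not anything from the article):

```python
import math

# Hypothetical applicant features: [normalized income, debt ratio].
# Label 1 = loan repaid, 0 = defaulted. All values are invented for illustration.
data = [([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.7, 0.3], 1),
        ([0.3, 0.7], 0), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]

w = [0.0, 0.0]   # weights, one per feature
b = 0.0          # bias
lr = 0.5         # learning rate (arbitrary choice)

def predict(x):
    """Probability that this applicant repays."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# "Constant training on feedback": every observed outcome adjusts the weights.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        for i in range(len(w)):
            w[i] -= lr * err * x[i]
        b -= lr * err

good = predict([0.85, 0.15])   # high income, low debt: probability near 1
bad = predict([0.15, 0.85])    # low income, high debt: probability near 0
```

The implicit relationship (income up, debt down, repayment more likely) is never stated to the model; it is recovered purely from the labeled outcomes, which is the point the paragraph makes.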


Interestingly, judging by Nvidia's practice, up to 90% of computing resources go to inference, the output of the model's conclusions, rather than to training; and graphics cards remain the best option for such workloads, which is leading to a new revival of mainframes, though at a different level and with different tasks.


And what does ecology have to do with it? Statistics suggest that training a single large model with the necessary number of parameters (175,000,000,000 at the lower confidence threshold) produces roughly five times the CO2 emissions of an average car over its entire long-term operation.


In other words, until computing resources and techniques are properly optimized for eco-friendly everyday use, it is difficult to talk about real progress. If data is properly compressed and algorithms optimized, in the near future it should become possible to run such workloads on ordinary desktop machines, not just supercomputers. And, of course, we are still waiting for quantum computers to be launched at full capacity to genuinely support modern civilization.


Copyright © 2015 - 2020 GPD Host. All rights reserved.
