Artificial intelligence is rapidly transforming all aspects of our society. Whether we realize it or not, every time we do a Google search or ask Siri a question, we’re using AI.
For better or for worse, the same is true of the nature of war itself. This is the reason why the Department of Defense – like its counterparts in China and Russia – is investing billions of dollars to develop and implement AI in defense systems. This is also the reason why the DoD is now embracing projects that envision future technologies, including the next phase of AI – artificial general intelligence.
AGI is the ability of an intelligent agent to understand or learn any intellectual task the way humans do. Unlike today’s AI, which relies on complex algorithms designed for specific tasks, AGI will exhibit the same attributes we associate with the human brain, including common sense, background knowledge, learning, abstraction and causality. Of particular interest is the human ability to generalize from meager or imperfect input.
While some experts predict that AGI will never happen or is at least 100 years away, these predictions are based on approaches that simulate the brain or its components. There are many possible shortcuts to AGI, however, many of which will lead to custom AGI chips that increase performance in the same way that today’s GPUs accelerate machine learning.
Accordingly, a growing number of researchers believe that enough computing power already exists to achieve AGI. Although we generally know what the various parts of the brain do, we still lack insight into how the human brain accomplishes tasks such as learning and understanding.
Given the amount of research currently underway—and the demand for computers that solve problems related to speech recognition, computer vision, and robotics—many experts predict the emergence of AGI is likely to happen gradually over the next decade. The nascent capabilities of AGI will continue to evolve and at some point will match human capabilities.
But with continued improvements in hardware performance, subsequent AGIs will vastly surpass the powers of the human mind. Whether this means “thinking” faster, learning new things more easily, or evaluating more factors when making a decision remains to be seen. At some point, however, there will be a consensus that AGIs have exceeded the powers of the human mind.
In the beginning there will be very few real “thinking” machines. Little by little, however, these initial machines will mature. Just as today’s executives rarely make financial decisions without consulting software, AGI computers will begin by drawing conclusions from the information presented to them. With greater experience and a sharper focus on a specific decision, AGI computers will reach correct conclusions more often than their human counterparts, further increasing our dependence on them.
Similarly, military decisions will come to be made only in consultation with an AGI computer, which may gradually be allowed to assess competitive weaknesses and recommend specific strategies. While the science fiction scenarios in which these AGI computers are given full control of weapons and turn into our masters are highly unlikely, they will undoubtedly become integral to the decision-making process.
Gradually, we will learn to trust and place faith in the recommendations of AGI computers, giving them more and more weight as they demonstrate greater and greater degrees of success.
Obviously, AGIs will make some poor decisions at first, as any inexperienced person would. But in decisions that require balancing large amounts of data, and in predictions involving multiple variables, the capabilities of computers – combined with years of training and experience – will make them superior strategic decision-makers.
Little by little, AGI computers will come to dominate a larger and larger part of our society, not by force, but because we listen to and follow their advice. They will also become more and more capable of swaying public opinion through social media, exploitative marketing, and even the kind of infrastructure skullduggery that today’s hackers already attempt.
In the end, AGIs will be goal-driven systems, just as people are. But while human goals have evolved over eons of surviving challenges, AGI goals can be set by us. In an ideal world, the goals of AGI would be set for the benefit of the entire human race.
But what if those initially in control of AGI are not benevolent minds seeking the greater good? What if the first possessors of such powerful systems want to use them as tools to attack our allies, undermine the balance of power or take over the world? What if such systems are seized by a rogue actor? Clearly, this is the most dangerous scenario, and one the West must now confront.
While the motivations of the initial AGIs themselves can be specified, the motivations of the people or institutions that create those AGIs are beyond our control. And let’s face it: individuals, nations and even groups have historically sacrificed the long-term common good for short-term power and wealth.
The window of opportunity for such concern is quite short, spanning only the first few generations of AGI. Only during that period will humans have direct enough control over AGIs that they will do our bidding. Eventually, AGIs will set goals of their own, which will include exploration and learning, and which need not conflict with humanity’s.
In fact, apart from energy, AGI and human needs have little in common.
AGIs won’t need money, power or territory, and they need not even worry about their own survival: with appropriate backups, an AGI can be effectively immortal, independent of whatever hardware it happens to be running on.
The danger, then, lies in the meantime. And as long as such a risk exists, being the first to develop AGI should be the West’s highest priority.
Charles Simon is the CEO of FutureAI, an early-stage technology company developing algorithms for AI. He is the author of “Will Computers Revolt? Preparing for the Future of Artificial Intelligence,” and the developer of Brain Simulator II, an AGI research software platform, and Sallie, a software prototype and artificial entity that learns in real time with vision, hearing, speech and mobility.
Have an opinion?
This article is an op-ed and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own you would like to submit, please email Federal Times Senior Managing Editor Cary O’Reilly.