The science fiction genre offers a number of disturbing scenarios that could occur if an artificial intelligence unit were to acquire ‘a consciousness’ and become a self-aware entity — especially a self-aware software entity capable of transmitting and installing itself within different pieces of hardware.
Fortunately for us, we are not likely to create a computer program capable of becoming a self-aware entity. Not in the near future.
However, I think we have been creating very dangerous systems that might turn out to be as unpredictable as any self-aware entity offered by the science fiction genre.
Graphical user interfaces are becoming more and more complicated. Consequently, the software required to manage GUIs and human interaction is becoming more and more complex. More importantly, hardware is becoming more and more powerful.
Powerful meaning: physically more powerful (larger and faster cars, planes, trains, buses, drones, satellites, bombs, missiles, microwaves, etc.); and more powerful in terms of their indirect, networked ability to affect other similar pieces of hardware and software, as well as very different pieces of hardware and software, and thus different systems. Needless to say, in today’s world no piece of hardware exists as a standalone device unless it is designed to be one. Thus their cascading network potential approaches infinity. It is all-encompassing, planet-wide.
To simplify the problem:
Despite the improved nature and quality of processors and other hardware, the input/output process is delayed, simply because of the programs’ complexity. It is not so much the actual processing time that is the issue, since we are talking about fractions of a second, as it is the computational distance and complexity that we should be worried about. That is, the mathematical distance (implied by the computer’s calculations) from the user’s original input: the length and complexity of the calculations, as well as the extent to which the user’s input is modified.
Programs, code, and the calculations performed by hardware and software are becoming longer and longer, and there are more and more opportunities for computers to restrict our access, to lock us out.
A good example is aviation and the disasters caused by autopilot override errors.
As stated earlier, a physical action that becomes a digitized input, a piece of digital data, is an indirect way to control a piece of hardware to begin with. For example, when a pilot turns the control yoke in any direction, the computer receives the order, encodes it, assesses it, examines the position of the plane, performs additional calculations, then applies the command, checks the response, and so on. There is a delay. It is a split-second delay, but it is more than enough to initiate the first step of what could become a disaster.
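The chain of mediation described above can be sketched in a few lines of code. This is a deliberately toy illustration, not any real flight-control system: the function name, the envelope-protection rule, and the 30-degree limit are all invented for the example. The point it makes is structural: the pilot's input is encoded, assessed, possibly modified, and only then applied.

```python
def process_pilot_input(yoke_angle, aircraft_state):
    """Toy, hypothetical fly-by-wire pipeline: every pilot input
    passes through software stages before reaching the hardware."""
    # 1. Encode the physical action as digital data.
    command = {"requested_angle": yoke_angle}

    # 2. Assess the command against the aircraft's current state.
    #    (An invented envelope-protection rule: tighter bank limit
    #    at low altitude.)
    max_angle = 30.0 if aircraft_state["altitude_ft"] < 1000 else 67.0
    applied = max(-max_angle, min(max_angle, yoke_angle))

    # 3. Apply the (possibly modified) command and record whether
    #    the software overrode part of the pilot's input.
    command["applied_angle"] = applied
    command["overridden"] = applied != yoke_angle
    return command

state = {"altitude_ft": 800}
result = process_pilot_input(45.0, state)
# The pilot asked for 45 degrees; the software applied 30.0 and
# quietly discarded the rest of the input.
```

Even in this trivial sketch, the software has already substituted its own decision for the user's — exactly the pattern, scaled up a millionfold in complexity, that the rest of this argument is about.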
So there are two problems. One is the software’s limited ability to deal with unpredictable events: it can predict only so many factors, whatever it is programmed to predict, and that is profoundly limiting. The other is the complexity of its calculations, which, even though they might be inadequate, might dictate that it prevent or restrict further human input.
The number of calculations that any computer must perform in order to validate the pilot’s input is extraordinary. Yet however many calculations it performs, it comes nowhere near a human being’s ability to assess a given situation, examine factors that can be incredibly random and situation-dependent (rather than reducible to a standardized formula), compare them to previous experiences, and, above all, imagine and predict new scenarios. That is a significant problem. Another problem, one which has caused a number of air disasters (and other disasters), is that the autopilot remains engaged, or partially engaged, when it should not be.
All of this is fairly obvious, if not very obvious, if not extremely obvious.
The point is:
We have created, and continue to create, software which can and does reduce, completely remove, or OVERRIDE the user’s input. It can and does replace our decisions with its own. Not because it is sufficiently intelligent to do so, but because we have been unable to design a better software-to-hardware and hardware-to-software interaction process.
That is the point.
We live in a world where we have to carry on with what we’ve got. But what we’ve got might destroy us. We are building nuclear and chemical weapons and nuclear and chemical test laboratories, as well as other extraordinarily powerful and highly mobile pieces of hardware (as mentioned above), all controlled by software that could remove our ability to control it!
How do we stop this software design error, created by a gap in our perception, cognition, and imagination?
Imagination, because we have been creating software without asking: are we creating software that might, if we continue along this path, reduce and ultimately remove our ability to control it, and thus the hardware?
Had we been really intelligent we would have realized that certain hardware requires exceptionally advanced software. Arms and hands are complex, powerful and sophisticated but the brain is infinitely more complex.
Of course, I am well aware that we are never talking about a single system of control. The more advanced the device, the more backup systems, alternative systems, fail-safe devices, and so on there are. So yes, we are talking about multiple systems with multiple fail-safe devices. Ultimately, however, all of them could fail, because while there are many different systems and backups, all of them use the same principle: the same software structure, which is becoming increasingly complex and, more importantly, increasingly self-reliant.
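The weakness described above — many backups, one shared principle — is what engineers call a common-mode failure, and it can be shown in a few lines. Everything here is invented for illustration: the "control law", the sensor value, and the three-channel vote are a toy model, not a description of any real system.

```python
def shared_control_law(sensor_reading):
    """Toy control law used by every redundant channel. A flaw here
    (dividing by a sensor value that can be zero) is inherited by
    every backup, because they all run the same software."""
    return 100.0 / sensor_reading

def redundant_vote(sensor_reading, channels=3):
    """Three nominally independent channels compute and vote.
    Identical software means identical failures: when the shared
    law breaks, every channel breaks with it."""
    results = []
    for _ in range(channels):
        try:
            results.append(shared_control_law(sensor_reading))
        except ZeroDivisionError:
            results.append(None)  # this channel has failed
    return results

print(redundant_vote(4.0))  # all three channels agree
print(redundant_vote(0.0))  # all three channels fail together
```

Redundancy protects against independent hardware faults; it does not protect against a flaw in the one software structure that all the channels share — which is precisely the point.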
How do we remove the self-reliance element? Or how do we create a system that does not restrict our access?
What could be the answer?
That’s the problem.
I suppose the answer is obvious but there is no way to get there.
We should have focused on biology and chemistry instead of electronics.
A hardware system that can be controlled with our own nervous systems.
A neural network.
So how do we solve the problem of the delayed or altered input/output response, or, worse, of being locked out?
That’s the thing. There is no way to get there overnight.
That’s the nightmare.
We have to continue to create more and more powerful hardware and software, and pray that we will survive the digital stage and use the digital realm to create more immediate — which is to say biological and chemical — neural interface devices.
It was impossible to examine our own bodies and focus on biology, chemistry, the nervous system, and so on, because we lacked the hardware with which to examine our bodies. Microscopes, cameras, and other digital imaging devices had to be invented in order for us to learn about ourselves.
Perhaps we should have started with the biological approach way back when. Needless to say this could become a speculative science fiction argument.
Had we realized the power contained within the bio chemical and neural system (our own and within other beings) we might have abandoned our engineering pursuits and focused on biology and chemistry.
We could ask why we had not done so.
I suppose the physical nature of our bodies, the fact that the chemical processes taking place within us remained concealed beneath the skin, and the fact that we are highly mobile beings (unlike plants, etc.) required us to master our muscles and our immediate environments first. Learning about ourselves, and realizing that we are driven by and dependent on what is within us, could not have occurred without a safe environment. Or could it?
I am not sure how safe it is to say that the problem of perception and cognition is the ultimate obstacle that has determined how we define ourselves.
How and why was it so difficult for us to observe ourselves and realize the nature, complexity, and potential of the biological, chemical, and organic systems contained within us and other beings?
Then again, the more important question is how to survive the digital evolution without creating software that continues to reduce and restrict our access to it and thus the hardware it controls.
How to create an organic system that won’t contain the same problem?
We know we are still incapable of creating a computer, or computer systems, that can simulate a human brain, yet we continue to delegate to them tasks that require perception, cognition, memory, and humanity.
How do we monitor the process?
How do we control the need for mindless automation and efficiency?