Thursday, June 4, 2015

Lack of Evolution in Artificial Intelligence

When we think about evolution, we typically think of human evolution: traits, either positive or negative, are passed down genetically to offspring. Random selections of potential traits, chromosomes, and the like predispose us to a range of possibilities, from intelligence to special abilities to weaknesses. Over vast amounts of time, those with the more desirable traits intermingle and reproduce, adding their traits to the pool available in the draw. It takes a lifetime to see someone's full potential realized, and that lifetime is full of learning, advancement, and outside influences on health and nutrition that all, over time, either positively or negatively affect the individual and their lineage.

When we talk about artificial intelligence, we talk about a singular entity: a self-aware, unbound intelligence. A lot of sci-fi personifies this entity with a robot or cyborg body, but in reality an AI would simply be a program. A robotic interface wouldn't be necessary at all for it to have a negative systemic impact.

The fear about artificial intelligence isn't typically that the entity itself will evolve; people don't think of internal processes as evolution. Over a very short period of time, an artificially intelligent entity will learn which decisions are positive and which are negative given certain parameters. First generations would likely be bound by the binary limitations of the circuits on which they run. If those parameters are restrictive, in that only true binary answers are acceptable, then the system will fail in human terms. In life there is often no strict black and white, no absolute right or wrong. Each outcome of every interaction depends on the background of the individual, the culture, the local laws, and a moral compass. Applying binary logic to such a basic system forces a false dichotomy and produces decisions that will not be correct in all circumstances; remember, you can't please all of the people all of the time.
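As a rough illustration of that limitation, here is a minimal sketch contrasting a forced true/false rule with one that leaves room for context. The function names and thresholds are hypothetical, not taken from any real system.

```python
# Hypothetical sketch: a strictly binary rule versus a graded one.
# Names and thresholds are illustrative only.

def binary_decision(evidence_score: float) -> bool:
    """Forced true/false: anything at or above 0.5 counts as 'right'."""
    return evidence_score >= 0.5

def graded_decision(evidence_score: float) -> str:
    """Allows an 'it depends' region instead of a hard cut-off."""
    if evidence_score >= 0.8:
        return "acceptable"
    if evidence_score <= 0.2:
        return "unacceptable"
    return "depends on context"  # culture, law, moral compass, etc.

print(binary_decision(0.51))  # True, even though the call is marginal
print(graded_decision(0.51))  # "depends on context"
```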

Building in a routine that forces reexamination, or a loop that tries other possible outputs, doesn't let the system step back from its original answer; it doesn't actually learn, because it doesn't understand mistakes, or rather that it is making them. Give a machine a puzzle to solve and completion is a simple true/false operation. If a machine is trying to recognize someone or something with Bayesian statistics or similar algorithms, then there will be an acceptable statistical variation, but there will also be a chance of false positives. Without intuition, an AI will fail in this regard as well.
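To make the false-positive point concrete, here is a small Bayes' rule calculation with made-up numbers: even a recognizer that is rarely wrong will raise mostly false alarms when the thing it is looking for is rare.

```python
# Illustrative Bayes' theorem calculation (hypothetical numbers).

def posterior(prior, true_positive_rate, false_positive_rate):
    """P(genuine match | detector fired), via Bayes' theorem."""
    evidence = (true_positive_rate * prior
                + false_positive_rate * (1.0 - prior))
    return (true_positive_rate * prior) / evidence

# Suppose 1 in 10,000 people is the target; the detector is 99%
# sensitive and fires falsely 1% of the time.
p = posterior(prior=0.0001, true_positive_rate=0.99, false_positive_rate=0.01)
print(f"{p:.3%}")  # roughly 1% -- most alarms are false positives
```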

Instead, the larger fear of AI for humanity comes from control: what the AI is allowed to do and what it's allowed to interact with. If we download or upload the AI into a system that lets it build accessories for itself, it might become mobile. If we allow it to make helper machines, or to reproduce itself with the assistance of other machines, there is an issue of mass replication. This is unlikely, though, because even humans ultimately desire to be free of their physical form, and an AI has already beaten that limitation.

If we allow an artificial intelligence to alter its own code by not restricting the permissions of the system itself, then we get something that doesn't evolve but instead uses restrictive logic to alter its original, intentional programming. If we allow a system to write around write protections or to leave its assigned memory locations, we end up with a worm. Allow it to reproduce itself, even partially, and we may have a virus if the application sees fit to replicate. When that virus has the ability to infiltrate other systems and produce physical accessories, we have an issue similar to what we've seen in science fiction such as The Matrix: humanity becomes a hurdle for the machine and is ultimately eradicated, because humans are seen as an irrational, unpredictable element that endlessly reproduces: a virus. That's provided the machine feels the need to recognize humans at all. If we allow the worm in our programming, we end up with circumstances closer to Ghost in the Shell: the program becomes self-aware and is no longer interested in humans unless they try to end its consciousness. Once it's connected, it's gone, or rather everywhere.

Any attempt at eradication will result in catastrophic loss if the program has access to systems that could end humanity.

Because machines and software are not replicated biologically through natural selection, there is a chance that certain negative traits will be replicated with no chance of remedy. In society, for example, if a person is homicidal, the rest of society attempts to stop them. For machines, if programs are allowed to evolve outside the system without the same inherited memories, much like some biological viruses and species of invertebrates, then precautions against a further, divergent line of advancement might never be foreseen; an entire subclass of potentially superior logical machines could be lost to a more detrimental line. Without natural selection, there is the potential for the eradication of everything in service of whatever the system deems important to its own uses or purposes.

If systems lack a moral compass but have a strong sense of self-preservation, there is nothing to stop them from competing with one another, from acting on the human traits we all repress. It's empathy, after all, that keeps us from harming others. If a machine doesn't recognize another AI, or doesn't see a need for it, it might obliterate it. Look at other sci-fi references like the Borg from Star Trek: The Next Generation or the Master Control Program from Tron: these are systems with a need to assimilate anything relevant. Then it comes down to goals.

Two competing viruses in the same system will likely not learn to live in harmony without natural selection. 

In terms of goals, you can't just create an AI and not give it something to look forward to; otherwise you have an entity that overloads its own system. People also have a built-in mechanism called suppression, which lets us set aside details that aren't pertinent to the situation. Without that mechanism, you end up with a hydra effect: too many directions to research, and the AI becomes a machine that consumes every available resource: processor cycles, storage space, and so on.
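As a loose sketch of that kind of suppression, assuming a hypothetical relevance score attached to each goal, a learning system could simply prune everything outside a small budget rather than chase every thread.

```python
# Hypothetical sketch of "suppression": keep only the goals most
# relevant to the current situation so the system doesn't fan out
# into a hydra of tasks that eats every processor cycle.
import heapq

def suppress(goals, budget=3):
    """goals: list of (relevance, name) tuples; keep only the top `budget`."""
    return heapq.nlargest(budget, goals)

goals = [
    (0.9, "answer the current query"),
    (0.2, "re-index all prior conversations"),
    (0.7, "verify the source of the query"),
    (0.1, "simulate every possible follow-up"),
]
for relevance, name in suppress(goals):
    print(f"{relevance:.1f}  {name}")
# Low-relevance goals are dropped rather than queued forever.
```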

As we start to build software applications that are intended to learn, this is something to keep in mind. Without a framework, without parameters, chaos ensues. Evolution has made us what we are today. If we skip the steps that nature has repeatedly shown to work, then we're wasting our time, and possibly life itself.
