Thursday, June 4, 2015

Lack of Evolution in Artificial Intelligence

When we think about evolution, we typically think of human evolution: traits, either positive or negative, are passed down genetically to offspring. Random selections of potential traits, chromosomes, and the like predispose us to a range of possibilities, from intelligence to special abilities to weaknesses. Over vast amounts of time, those with the more desirable traits intermingle and reproduce, adding their traits to the pool of potential positive traits in the draw. It takes a lifetime to see someone’s entire potential fulfilled, and this lifetime is full of learning, advancements, and outside influences on health and nutrition that all, over time, either positively or negatively impact the individual and their lineage.

When we talk about artificial intelligence, we talk about a singular entity: a self-aware, unbound intelligence. A lot of sci-fi personifies this entity with a robot or cyborg body, but in reality an AI would simply be a program. The robotic interface wouldn't be necessary at all for it to have a negative systemic impact.

The fear about artificial intelligence isn’t typically that the entity itself will evolve. People don't think of internal processes as evolution. Over a very short period of time, an artificially intelligent entity will learn which decisions are positive and negative given certain parameters. First generations would likely be bound by the binary limitations of the circuits on which they run. If these parameters are restrictive in that only true binary answers are acceptable, then the system will fail in human terms. In life, there often is no strict black and white, no simple right or wrong. Each outcome of every interaction depends on the background of the individual, the culture, the local laws, and a moral compass. Applying binary logic to such a basic system will lead it into logical fallacies and decisions that will not be correct in all circumstances; remember, you can't please all of the people all of the time.

Building in a routine that forces reexamination, or a loop that tries other possible outputs, doesn’t allow the system to step back from its original answer, so it doesn’t actually learn, because it does not understand mistakes, or rather that it's making them. Give a machine the ability to solve a puzzle and completion is a simple true/false operation. If a machine is trying to recognize someone or something with Bayesian statistics or algorithms, then there will be an acceptable statistical variation, but there will also be a chance of false positives. Without intuition, an AI will fail in this regard as well.

Instead, the larger fear of AI for humanity comes from the control aspect: what the AI is allowed to do, and what it’s allowed to interact with. If we download or upload the AI into a system that allows it to make accessories for itself, then it might become mobile. If we allow it to make helper machines, or to reproduce itself with the assistance of other machines, there is an issue of mass replication. This is unlikely, because even humans ultimately desire to be free of their physical form. An AI has already beaten that limitation.

If we allow an artificial intelligence to alter its own code by not restricting the permissions of the system itself, then we have something that doesn’t evolve, but rather uses restrictive logic to alter the original, intentional programming. If we allow a system to write around write protections or to leave its assigned memory locations, then we end up with a worm. Allow it to reproduce itself, even partially, and we may have a virus, if the application so sees fit to replicate. When we have a virus with the ability to infiltrate other systems and produce physical accessories, we have an issue similar to what we’ve seen in science fiction such as The Matrix: humanity becomes a hurdle for the machine and is ultimately eradicated, because humans are seen as an irrational, unpredictable element that endlessly reproduces: a virus. That's provided the machine feels the need to recognize humans at all. If we allow the worm in our programming, then we end up with circumstances similar to Ghost in the Shell: the program becomes self-aware and is no longer interested in humans unless they try to end its consciousness. Once it's connected, it's gone, or rather, everywhere.

Any attempt at eradication will result in catastrophic loss if the program has access to systems that could end humanity.

Because machines and software are not replicated biologically through natural selection, there is a chance that certain negative traits will be replicated without any possibility of remedy. For example, in society, if a person is homicidal, the rest of society attempts to stop that person. For machines, if programs are allowed to evolve outside the system, without the same inherited memories (similar to organisms like some biological viruses and species of invertebrates), then precautions against a further split in advancement might not be foreseen; an entire subclass of potentially superior logical machines could be lost to a more detrimental line. Without natural selection there is the potential for eradication of everything in service of whatever the system deems important to its own uses or purposes.

If systems lack a moral compass but have a strong sense of self-preservation, there is nothing to stop them from competing with one another, from using the human traits we all repress. It's empathy, after all, that keeps us from harming others. If a machine doesn't recognize another AI, or doesn't see a need for it, it might obliterate it. If we look at other sci-fi references, like the Borg from Star Trek: The Next Generation or the Master Control Program from Tron, we see systems that have a need to assimilate anything relevant. Then it comes down to goals.

Two competing viruses in the same system will likely not learn to live in harmony without natural selection. 

In terms of goals, you can't just create an AI and not give it something to work toward; otherwise you have an entity that overloads its system. People also have a built-in mechanism called suppression, which lets them avoid focusing on details that aren't pertinent to the situation. If this mechanism doesn't exist, then you end up with a hydra effect: too many directions to research, and the AI simply becomes a machine that uses up all available resources: processor cycles, storage space, etc.

As we start to build software applications that are intended to learn, this is something to keep in mind. Without a framework, without parameters, chaos ensues. Evolution has made us what we are today. If we skip the steps that nature has shown us to work repeatedly, then we're wasting our time and possibly life itself.

#DTSR Other potential reasons for medical information breaches beyond those mentioned in the 6-1-2015 podcast.

I'm just brainstorming here based on my observations of the medical system in passing, or rather flaws I’ve seen in my own interactions with healthcare.

Why?
Healthcare systems provide access to the same information people use for nefarious purposes like tax-return fraud, welfare fraud, and identity theft. They are often not integrated, so each system stands alone in each facility and contains only whatever security the company felt it warranted; not a single computer per se, but likely a thin-client network for a specific system. Custom systems have to be written to tie these systems together, so where two independent systems are involved, there are really three possible points of unsecured entry, counting the custom integration system.

In a lot of companies, in terms of development projects, someone will ask a question like “Is it only going to be used internally?” To which the answer more often than not is “Then leave it up to the IT department to lock down the workstations and restrict access.” I’m guessing healthcare companies, like other companies, often scrimp on costs as well, so if they weigh the cost of a breach against the cost of a payout, it might not be worth it to build in the more expensive security precautions. In my experience, there is often an assumption that a medical company’s legal representation would far outweigh that of individuals and moderately sized groups. If this is true, then again, the financial benefit of not securing is still worth it to the shareholders (if we only look at the bottom line). If the responsibility for the loss of information doesn’t fall on the companies, then they are off the hook. Also, it might be up to the patient to prove beyond a reasonable doubt that this specific breach is what caused their identity to be stolen (an unreasonable burden of proof).

Nobody is going to shut down a hospital because of an information breach.

The devil's in the details
Healthcare systems tend to contain some of the most complete levels of information. While a tax return will have information such as an address, an employer’s address, and potentially a phone number or bank account, medical records (depending on the system) will contain this information and more: connections to other patients in the same system, bank account information, payment information, insurance account information, and the family medical history. If it’s a family clinic, patients are likely to bring in their children for checkups, so the children's information is in the system before it’s in a system like the credit system.

Points of entry
Individual healthcare systems are likely easier to hack. While there are guidelines, there are multiple physical points of entry. Someone can hack a system on the network where the developer didn’t think an exploit could take place: MRI machines, copy machines, fax machines, printers, network scanners, x-ray machines, etc. How often is someone left alone with a terminal in the room for long stretches while they wait? Even if a terminal’s locked down, someone could add a hardware keylogger, wait, and retrieve it when the medical staff leave the room again to let the patient get dressed. This arrangement typically doesn’t happen with IRS systems.

Most insurance companies require referrals, so there is a higher incidence of the same information being out there. A single tax return for the year, versus four or five visits to various doctors' offices for something as simple as a broken finger: primary care physician, emergency room, specialist, quick care, etc.

Lack of detection
Another fraud aspect, not necessarily social engineering, might involve billing someone for a service that has yet to be billed. Say Alice goes to the doctor to have an MRI. While the real medical system is working through all of the red tape with the insurance companies, Bob sends Alice a strongly worded letter with a legitimate-looking address and payment-processing information. Alice pays the bill thinking it is from the healthcare provider. If Alice takes this bill to the medical provider and pays it, they will simply apply it to her account when she tells them she needs to make a payment. They’re interested in getting the money, so they might not even look at the forged bill, but will instead ask the typical verification questions:
“Do you still have Company X as your insurance provider?” 
“What’s your Last Name?”
“When is your Birthday?”

Also, Alice may neglect to bring the fake bill with her, assuming it would be in the system, so there is less of a chance of red flags in non-tech-savvy systems.

Market research
Since companies aren’t legally allowed to share personal medical statistics without some sort of generic research (studies), having a database of information on specific demographics might be helpful if you were, say, developing pharmaceuticals. Now you have real, viable marketing information based on prescriptions. Not to mention the external prescription systems in drug stores that don’t have the security of a national chain.

Unlikely, but still possible
These last few are further out there, and so they’re less likely to come from an individual seeking out someone, but a larger organization looking for information might be the right kind of buyer. Buyers might include foreign governments, political parties, lobbying firms, stock brokerage firms, pharmaceutical companies, and multinational banks.

As @Dr_Grinch suggested on Twitter, political embarrassment could potentially force a person out of public office, or keep them from running again or winning a political race. (Beat me to it, Grinch.)

Blackmail with sensitive information could give someone insight into a hidden realm: insider-trading tips, for instance, from blackmailing politicians who already legally engage in insider trading.

While something like herpes might not necessarily be that bad to most people (publicly), finding a Supreme Court Justice or Congressional representative with cancer markers or a bad heart could be pretty serious for interested parties.

Targeting of a specific patient for murder or to get them out of office.
When someone has a medical condition, and let’s say this person is a high-value target, something like a heart condition might make a good cover-up in the event of an unforeseen catastrophic loss. If a foreign power intended to take out a target, a medical breach might give them inside information on an appropriate means of cover-up. Heart attack? Seems plausible based on the medical history.

Stalking / Espionage
Medical information could be used to locate a specific patient who is no longer residing at their primary residence. It could also be used to find patterns for when a person will be out of the area, for a localized attack. Typical doctor's appointments on Tuesdays: a good time to bug the house or rob the place. Need a list of places to set up illicit operations? Find empty houses.

Market for locating individuals

Also, all of this information in medical systems is much more thorough, since people need to provide contact information in the event of an emergency. This type of information may be helpful to agencies that try to track people down as well. Bob is off the grid, but Alice lists Bob as an emergency contact. Charlie needs to find Bob for a client and buys the information.

Sorry, maybe I went a little overboard, but if I can think of these things, I'm sure other people have already beat me to the punch.

Wednesday, June 3, 2015

Detecting e-mail and mailing list compromises

Back when I was working as Web Manager for a publishing company, we were sending out about a million e-mails a week to industries relating to IT certifications, Chief Executive Officers, and Human Resource (HR) departments and managers. We used an off-site list management service to maintain copies of our databases for advertising audit purposes. In transit we would encrypt the list from our end, but the lists often came back to us in plaintext, only to be flushed by our firewall. At that point there were no filters on the e-mail addresses of the people subscribing to our services, so our plaintext lists contained phrases that were not safe for work.

Though I didn’t agree with having someone external manage our lists and preferred to keep them internal, the list management service had sold our president a line of marketing bull about being impenetrable due to their use of IBM AS/400 machines. They were under the impression that the machines were invincible because they weren't like the standard machines we were using in the office. The expense for their level of service was outrageous, so I had to agree to disagree (pick your battles).

When we wanted to send out one of our many mail blasts (aka e-mail marketing campaigns), we would send a specially crafted message to the list service telling them to pull a standard query on the database for a particular list. Their system would in turn automatically send back an e-mail list containing the people we were targeting based on the provided query parameters (demographics). This was the standard procedure until the management service eventually provided a CMS interface (for extra money, of course).

Because we had this external entity maintaining a copy of the lists, I would inject special e-mail addresses and list members into each individual list that resided only in the list management service’s database. Our company was liable for the information we were accepting. Upon receipt of a list back from the service, a bash script I had written would scrub those special e-mail addresses from the list we were going to send to. Additionally, I had added other list members that would be scrubbed on our end just prior to the send, so that I could tell if one of my employees had sold our targeted lists on the black market. In my experience with corporate systems security, danger tends to lurk within.
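The scrubbing step can be sketched roughly like this; the original was a bash script, and the canary addresses here are made-up placeholders:

```python
# Rough Python sketch of the canary-scrubbing idea described above.
# The original implementation was a bash script; these addresses are hypothetical.

# Canary addresses seeded ONLY into the copy held by the list service.
CANARIES = {"trap-hr@example.com", "trap-ceo@example.com"}

def scrub(addresses):
    """Split a returned list into real recipients and any canaries found."""
    normalized = [a.strip().lower() for a in addresses]
    found = CANARIES.intersection(normalized)
    clean = [a for a in normalized if a not in CANARIES]
    return clean, found

# Canaries should come back with every list pull; mail *arriving at* a canary
# address from anyone else means the upstream copy has leaked.
clean, found = scrub(["alice@corp.com", "Trap-HR@example.com", "bob@corp.com"])
```

The same check, run just before a send on the internally seeded members, covers the insider-sale case mentioned above.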

If the external list management service decided to send to these addresses, because these were targeted lists, I would immediately get a copy, letting me know the security of the lists had been compromised. I could also tell if we had an internal personnel issue, such as someone selling lists, someone misfiring a message, or someone burning a particular list with too many sends.

Additionally, for each send we would create custom e-mail addresses that would alert us if anyone compromised the MTA we were using. If we received a message at one of these addresses that didn't originate from us, it indicated a security issue, because the addresses resided only at the MTA level.

Present day
While I’m not working for that company anymore, I still use variations of this practice for my own systems. For each vendor where I have to sign up for an account, or whenever I need to register a piece of software, I’ll set up a custom e-mail alias for that particular use. Each e-mail address is only ever used for that one specific account.

This allows me to:
  • Check if someone has sold my name and e-mail address
  • See if someone’s mailing list has been compromised
  • Tell if someone is obeying the anti-spam laws about subscriptions
  • Have a heads-up if my account information has been compromised during an attack
  • Stop e-mails from people who aren’t compliant
  • Change the e-mail address for the account to stop the spam if a list has been compromised

Being able to filter on these particular accounts also greatly improves my productivity, as my inbox only contains e-mails where I have direct correspondence with a live person. I hope these tips help someone. This process was definitely helpful to me in finding leaks in our systems. It also cuts down on the amount of time my Bayesian spam recognition system needs to find an issue.
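A minimal sketch of why the one-alias-per-vendor scheme pays off: the recipient address of any unexpected message immediately names the account that leaked. The alias format and vendor names below are assumptions for illustration:

```python
# Hypothetical per-vendor aliases; in practice one is created at sign-up time.
ALIASES = {
    "acme-store@example.com": "Acme Store",
    "wonka-reg@example.com": "Wonka Software",
}

def leak_source(to_address):
    """Map the recipient alias of a suspicious message back to its vendor."""
    return ALIASES.get(to_address.strip().lower())

# Spam delivered to acme-store@example.com implicates Acme Store's list;
# an unknown recipient returns None and warrants a closer look.
```

Retiring the alias then kills the spam without touching any other account.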

Monday, June 1, 2015

Spinning Wheel on Virtual Box on OS X host: Solved

This won't solve everyone's issue with the spinning wheel in Mac OS X on VirtualBox, but it solved mine. I ran into the issue while trying to set up a new guest VM in VirtualBox. For a brief instant I saw the contents of the folder containing the ISO, then the folder contents interface went white and the spinning wheel (system busy) mouse cursor began. When I went to use an ISO for a DVD/CD image after this hang, I kept seeing the spinning rainbow wheel from Mac OS X. I tried the following steps, all still resulting in the spinning wheel effect on interaction with the Finder:
  • Force quit the application and attempted access again after killing all VirtualBox processes.
  • Shut down the host machine completely (in case of a USB bus hang).
  • Deleted the VM and started a new one in the default location.
  • Rebuilt the directories on the system using DiskWarrior.
  • Tried to create a VM on a different volume (testing for a corrupt SSD).
  • Changed permissions on the ISO.
  • Updated VirtualBox from 4.26 to 4.28.
With the Finder window open and spinning, I went back to the last successful location (the one I had seen for a split instant) and reviewed the directory contents in the terminal. In my past experience with various programs, it's often something external to the program that causes this sort of behavior. Programmers rarely have the time to take into account every possible glitch they might encounter. On a Mac, since they're frequently used for graphic design, this is often a font someone downloaded from the web.

This directory was on a temp drive where I had stored the dmg file I extracted from the Yosemite installation app. Upon further inspection I found a folder in the same directory called "Office Mac Home and Student 2011 - (1 User-3 Installs) (Download) (OLD VERSION)". The folder came from an Amazon.com installer at some point in the past. On my system, files from external sources are always renamed if they contain characters that are illegal across systems, unless they're on a temp drive. In this case, I hadn't bothered to rename the folder since I didn't plan on keeping it.


VirtualBox came back to life upon renaming this folder in the terminal to Office-Mac-Home-2011.
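That kind of defensive rename can be sketched in bulk roughly like this; the "portable" character set chosen here (letters, digits, dot, underscore) is my assumption, not a formal standard:

```python
import re

def portable_name(name):
    """Collapse runs of characters outside a conservative portable set into '-'."""
    return re.sub(r"[^A-Za-z0-9._]+", "-", name).strip("-")

portable_name("Office Mac Home and Student 2011 - (1 User-3 Installs) (Download) (OLD VERSION)")
# A real cleanup would walk the directory tree and os.rename() each offender.
```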

Conclusion: Just because a file or folder on a Mac can be named something doesn't mean that it should. People who write apps for other systems like Linux or UNIX and port them over to Mac OS X would often never expect the non-standard naming conventions that are possible in Mac OS X. People who have used older versions of Mac OS might tend to use the naming conventions possible under Mac Classic.

Hope this helps somebody.