Sunday, August 28, 2016

The Dark Side and Physical Security.


I recently saw a Vine on Twitter that jokingly shows someone plugging a USB keyboard into a USB charger, mimicking the Hollywood style of hacking. The amateur didn't wear gloves. I myself have pictured a cute, spunky, bubble-gum-chewing teen with pink and black hair completing a hack, then turning to the camera, pulling the gum from her mouth, shoving it into the wall-mounted RJ45 port she was using, and whispering in a sultry voice something along the lines of "always cover your tracks." While the saliva on the gum might short something, it's not the real threat. Physical security is a real necessity; watch Twitter and Facebook and you see RJ45 jacks, open USB ports, and all sorts of other connections just waiting for the right person with the right know-how to steal or manipulate information in systems such as standalone ATMs, voting machines, and point-of-sale terminals. Undetectable in some cases, these are real threats.

There are a lot of articles out there about keyloggers and computers-on-a-stick that people can plug into systems and television sets, but there is a dark side to physical security as well: the people who don't want information, but rather want to cause downtime, expense, confusion, chaos, and distraction. Anyone who has soldered wire understands heat and protective coatings. When I worked in IT, I never patched a cable into a network switch unless I was expecting a live connection on the other end. Plenty of people can sit down with a personal laptop and plug in, but that isn't the real threat to the machines on the other end unless it's the right person.

Often, a real threat is less than a meter away.
See, the electronic world we live in runs on low voltage and amperage. A connection might expect 5 volts, 2 volts, or sometimes a single volt. It's not designed for someone to take an extension cord with an RJ45 tip on it and shove it into the socket. A couple of things happen: if that cable is connected to anything that creates a short, it will likely trip the breaker, depending on the device. But before that happens, it sends a surge of electricity down the line that can melt the jackets off of thin UTP CAT-5 cable, potentially causing a fire, and it can pop resistors, capacitors, and switches in expensive, highly sensitive equipment. Plugged into a disconnected battery back-up, it can produce a charge that repeats with a simple reset. This can fry motherboards, breadboards, and simple circuits with ease.

In terms of operational security, or opsec, someone could use a device like this on USB ports to short motherboards, on CAT 5e to damage network connections and network hardware, and even on phone terminals, shorting switchboards. Someone could also melt components in a cell phone, rendering data unreadable, inaccessible, or very difficult to obtain in a time of need. You can't call in an emergency with no working devices.

I've seen homemade devices as well, where someone takes the guts from a $5 disposable camera with flash and uses the step-up transformer and high-output capacitor to deliver a charge to electronics. These are scary things to consider. So if it doesn't need to be connected, disconnect it, and cover unnecessary ports on exposed machines. Also, remember not to leave portable devices lying around. While someone could use your phone to take obscene pictures, they could also prevent you from making a call if you're being set up.

Monday, March 21, 2016

Threat of a lack of maintenance in regard to PHP and MySQL code on existing websites

I've a few clients who outsourced their initial site builds to companies in India. The developers used teams of people who applied 2001 best practices to build these sites circa 2009. The sites are mostly on shared hosting, a few on managed hosting, but here is the thing: while working on the sites to upgrade their code, it occurred to me that there are likely thousands of PHP websites running the old MySQL database connectors on shared hosting and some managed hosting plans at various hosts, and nobody knows about the underlying issue at hand.

From a security standpoint, anything that prints an error on a website or webpage and exposes the directory structure of the server gives an attacker information to refine their attacks and scans for vulnerabilities.
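As a hedged, minimal example of the kind of configuration I mean: a production PHP site can keep error details (and file paths) off the page and send them to a log instead. The directive names are standard PHP settings; the log path is just a placeholder.

  <?php
  // Minimal sketch: hide error details from visitors on a production site
  // while still recording them for the developers.
  error_reporting(E_ALL);                 // report everything internally
  ini_set('display_errors', '0');         // never print errors (and paths) to the page
  ini_set('log_errors', '1');             // write them to a log instead
  ini_set('error_log', '/var/log/php/app-errors.log'); // placeholder path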

According to the PHP documentation, these database connectors are deprecated as of PHP 5.5.



Since mysql_connect, mysql_query, and the like have been deprecated, any website using these functions needs to be brought reasonably current with updated code, using something like the PDO (PHP Data Objects) connectors and classes. When a host upgrades a web server that is running sites with this older code, it will ultimately break all of the database connections. Since a large percentage of websites pull all of their content from the database, this will be a major issue. The database connections won't work, so depending on the error-reporting level, people may see error messages, or they may see nothing but a few placeholders in an empty interface.

SEO and regular organic traffic will be negatively impacted. If a site doesn't work for a few weeks while someone is making repairs, it can be costly for a business.

What can be done?
  1. If the site is running PHP, then the code can be examined for functions beginning with mysql_ . Simply adding an "i" to the end of mysql can fix these issues in many cases (watch the argument order on some calls), however this is not as good a solution as using PHP's PDO library for the connection. See the sketch after this list.
  2. Any functions and the output of those functions all need to be rewritten to use the newer standards.
  3. While this can be a time-consuming and sometimes expensive process, it is a lot less expensive to fix it before a server is upgraded, rather than having developers make edits to code on deadline while the website is down.
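To make the first two points concrete, here is a minimal, hedged sketch of the kind of update I'm describing. The table and column names are made up for illustration; the function names are the real ones from the PHP manual.

  <?php
  // Old style (deprecated in PHP 5.5, removed in PHP 7): the mysql_* extension.
  // $link = mysql_connect('localhost', 'dbuser', 'dbpass');
  // mysql_select_db('shop', $link);
  // $result = mysql_query("SELECT name FROM products WHERE id = " . $_GET['id']);

  // Quick fix: mysqli_* is close to a drop-in replacement, but the argument
  // order changes on some calls, and you should switch to bound parameters
  // while you're in there.
  $link = mysqli_connect('localhost', 'dbuser', 'dbpass', 'shop');
  $stmt = mysqli_prepare($link, 'SELECT name FROM products WHERE id = ?');
  mysqli_stmt_bind_param($stmt, 'i', $_GET['id']);
  mysqli_stmt_execute($stmt);

  // Better fix: PDO with prepared statements.
  $pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'dbuser', 'dbpass');
  $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
  $stmt = $pdo->prepare('SELECT name FROM products WHERE id = :id');
  $stmt->execute([':id' => $_GET['id']]);
  $row = $stmt->fetch(PDO::FETCH_ASSOC);

Either route also gets rid of the string-concatenated query, which is its own security problem (SQL injection) on top of the deprecation issue.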

Tuesday, July 14, 2015

My experience with #Security and the College Education System

I recently decided to go back to school for an advanced C++ certificate at the college level. I've been programming for a very long time and have taught hands-on technical college classes to students with no prior computing experience. I took the online route. In regard to technology studies, if you're a technologically savvy person, online courses can be a very easy path. That said, if someone does not have the discipline to learn on their own, it will ultimately lead to failure and feelings of inadequacy. That, however, was not my experience, and here is why: having both taught and taken college courses, I can say the college system is flawed, over-structured in many ways, and limited in all of the wrong places. Funding follows what a college deems important, so education takes a back seat to things like sports or recruiting more students. If an instructor has a high retention rate, they're rewarded whether they are a good instructor or not. From the perspective of a faculty member, I can say it's all about the money, and once students make it past the add-drop date for refunds, it really doesn't matter what happens if the school doesn't have a reputation to uphold. It is therefore up to the individual instructors to uphold the image and actually teach their students something meaningful. In online classes this rarely happens.

When you teach classes, online or not, the best students are typically the ones asking questions. That is not my habit: the instructors are likely very busy, like everyone else, so as a professional researcher I turn to whatever means I have available to supplement my learning, whereas instructors tend to assume students have all of the time in the world. When I am a student and there is a discussion board, I will typically answer questions from other students openly and honestly (where appropriate). Having grown accustomed to the Stack Exchange network, I find technical interaction painless, and it has helped with delivery, especially in terms of citing my references. I don't normally tell my instructors that I have taught courses unless it comes up, because it tends to make them nervous, and this way I get to really know their teaching style. It also removes any liability on their part if they decide to cut me slack for some strange reason. At the end of a course, when the surveys come around, I will leave a few pages of tips rather than dropping a zero-day in class, and I tactfully attempt to bring up any issues I find in the course. "The ideal student"? Really, it's more formulaic than that.

Security

While some courses briefly touch on security, unless someone specifically takes a security course for a specific field, like computer forensics, chances are they will have no idea about the security required to make applications "safer," much less about phishing, social engineering, or things like physical security. Colleges and accredited universities will likely not teach "hacking" as a course because of the ethical liability of teaching people the skills of breaking systems and applications. Developing applications specifically for things like penetration testing is usually outside the scope of a college class. It would also be an added risk for the teaching environment, as students would likely test their prowess on the institution's network, servers, and devices; that can have severe legal ramifications at state schools, where it may become a federal offense if they can't control their programs. These excuses are essentially incorrect, though, because if we teach security, we can have more secure environments. While most Digital Forensics degree programs require some sort of Ethics of Information Security training, those programs, which should be foundational in all tech sectors, would also allow people to identify unethical practices more clearly. Without knowledge of the potential failures, we are creating a system of ignorant bliss and horror.

The textbooks

Textbooks take time to write and are typically written from the vantage point of one or two writers. Then the texts are heavily edited for content. Often the writers might be part-time developers, tech professionals, or, in the case of several books from big-name textbook publishers, college professors with 20+ years of teaching experience. New textbooks are often created out of a teacher's own need for an updated text, which can be a lot of extra work for a full-time instructor. After courses are over, the instructor needs to review their notes, study the industry for changes, and adjust accordingly. Often this leaves an instructor only a couple of weeks of preparation time if they teach back-to-back courses, teach at multiple schools, or teach over the summer holiday. Typically I would look at my notes from the previous semester for what I got wrong (yes, educators make mistakes too) or what I could have done better, then adjust my curriculum for the upcoming semester while trying to incorporate the new changes from the industry and from the software vendors; I was not writing a text on top of that, though.

When educators who aren't staying current also write books, the books end up several generations behind. An author writing a book in 2015, drawing on their 2012 experiences, which were themselves based on roughly 2005 real-world experience, creates a text that will not be published until 2016, after edits, liability review, and peer review. So students are, in some cases, roughly 10 years behind. Follow Twitter for an afternoon and you quickly realize that keeping up is a seemingly impossible task in regard to bleeding-edge security. Security changes daily, and by the time a student has entered a class, unless the professor is savvy enough to stay up with current affairs, the textbook becomes an incomplete crutch.

Terminology used in a course depends on the text being referenced, and that text was likely written by a tightly knit group of individuals who all used a single localized lexicon or spoke in roughly the same terms (e.g. coworkers); they all think the same way and use a word to mean one specific thing, without seeing the other possible definitions for the same word or outside approaches. There are, of course, other words that mean the very same thing outside of those groups, for instance phishing (the act) vs. social engineering (the concept). If a textbook writer was familiar with C first, they might write about programming using more rigid constructs and might not be familiar with advanced ideas or concepts in an extended language like C++, whereas if they started with C++11 there may be more focus on that end, and anything prior would be something they had not learned. If the authors feel the more rigid languages are important, they may not encourage the use of built-in constructs and instead require students to develop the functions on their own. While this is helpful for learning, most students go straight into the workforce, and not knowing efficient ways to program can be an issue. Often, in a management role, I would find myself questioning a developer's use of multiple constructs when a built-in function already existed in the system; it's like reinventing the wheel over and over.

Using non-textbooks

Although quite a few great books that aren't textbooks have been written on different aspects of security, most first-time students will likely have a hard time learning from, for instance, a memory forensics book right off the shelf. You can see this in the reviews of books where people assumed it was something they could do immediately, and unlike college, there are very few prerequisites that are completely spelled out. A non-textbook may be written by an outlier with a singular, specialized view of a particular methodology, and without a system of collegiate review, a book may not cite references as well. That is not the case with about 80% of the tech references on the market, but every once in a while someone writes a book that is completely bad, wrong, or outdated. When a professor uses a non-textbook, and I've done this myself, it's up to them to develop all of the added content and course material that goes along with the text, to fully understand the topic, and to verify that the book is accurate. That's provided the text doesn't have links and external resources; many do, but it largely depends on the course. I stopped using non-textbooks in my classes because more often than not I would tell my students to rip out pages of the text that were inaccurate or to cross them out. It became more of a book-editing class, and it was much easier to simply use my own notes and presentations.

Write a non-textbook and, while the accompanying course materials aren't there, there is less overhead when it comes time to overhaul it, so if students are already required to learn on their own to an extent, this may be easier, field-dependent of course. The teachers must understand the text, though, in order to gauge whether a student grasps the concepts being taught. If the teacher does not understand the text, then a non-textbook leaves the student with questions which can only be answered by professionals who have read the text or by the original author.

Methodology

In many of my programming classes there was a heavy need to document, from beginning to end. The instructors did not expect usefulness from an end-user-experience standpoint, but there was much emphasis on things that made it easier for them to grade. When you have twenty students in an accelerated class, cutting corners tends to be the norm for instructors who are overloaded. Knowing this made receiving my A's easy, but I felt I didn't get everything from the courses that I could have. We did not cover the last three chapters of the C++ text, for instance, because we ran out of time due to the pseudocode and flowcharting requirements. While I knew the content, having a professor give me pointers or validation was something that I felt made the experience rewarding. There were also no specified expectations for the flowcharts, so in some instances I simply drew a short chart that loosely defined the context of the application; in others I was very specific, showing the entire process or, for instance, the redundancy in a loop. The teachers were so busy they simply provided no feedback whatsoever if a student appeared to be going through the motions. As an engineer and development team manager, I find this scary in terms of a lack of consistency. In the real world there should be expectations for performance in terms of documentation, and while the texts briefly mentioned this, the level of documentation in some of the examples was largely inconsistent. In the deeper applications, the author would often jump across twenty pages for a reference to an existing function, so the textbook has its limitations.

Determination

Educators typically do not have time to stay current in their field if they're full-time or tenured instructors. Make them part-time and the drive may dwindle, since the bar is likely only set at becoming full-time, or at something outside of their teaching career altogether. In my experience, 3 out of 10 instructors tend to embrace education; everyone else is doing it for the money. Whether it's a personal decision to have a life, a lazy effort, or a lack of time depending on the field, people go off in their own individual directions away from the rest of the world and rarely step back from a granular view. Often two sections of the same course taught by different instructors can have very different core competencies and outcomes in terms of quality, knowledge gained, and hands-on experience. My C++ class was all about flowcharts, which is great if you're the manager or lead developer. In terms of coding it was horrible.


Here are my experiences taking online classes as a student developer with over 30 years of programming experience.


Introduction to Computers

I did not like this class at all and found several mistakes (16 disputes) in the online interactive text, the printed text, and the online interactive tests. My teacher was great and very understanding. The test site would respond with phrases like "165 other people have answered this problem without issue" [what's your problem?]. Obviously, if they blindly obeyed their text, then of course they did. We were required to exhibit proficiency in Microsoft Office applications for assignments in a poorly written, unforgiving, web-based Office emulator that only worked on a Windows system. It also leaned heavily on Flash and JavaScript-ish code, and while there are 50 ways to do just about everything in most applications, the testing platform was only programmed to accept a single method, despite my experience with Word since 1983.

The text we used, Computing Essentials 2014, was written around 2009 "standards," and apparently the writers only used Microsoft systems: words like "ribbon" and "Hyper-V" were vocabulary words, there was one paragraph each for Unix and Linux, and two different versions of OS X got a paragraph apiece; everything else was all about Windows.

Some of the text that referenced older technologies was very spot-on (e.g. compact discs), while newer technologies (the Internet) seemed to elude the writers. This can happen when someone writes a book in 2009 (or earlier) and the publisher asks for an updated version, and the writer can’t keep up with the technology. The book did mention that not all hackers are bad though, so there is at least that. Despite all of the hurdles in this class I managed to only miss a couple of questions during the whole course.

Programming Logic & Design (A.k.a. Intro to Python & Flowcharts)

Students who don't interact yet receive exemplary scores on tests aren't that unheard of, but students in introductory Python courses who do type checking and error handling in their code can set off a few red flags for a professor who is not accustomed to seeing that sort of thing at the college level. Error checking was briefly mentioned in the text around Chapter 7 in the form of loops, while try/except blocks were not discussed in the entirety of the course. I will typically read a text cover-to-cover prior to starting a course, so by the time I need to really study the text it's a refresher.

My instructor and I came to an understanding about my work in research, security, and development projects. I write custom content management systems among other things, and I understand how error handling in Python works because I have used the language for some time. I also mentioned that I was simply trying to get my advanced C++ certification and this particular course was a requirement I could not CLEP; Python being used in the course was an unexpected bonus. My instructor, a developer herself, understood my position, and after seeing some of my other interactions with students realized that I was actually telling the truth and not like every other wannabe script kiddie she had had in her classes before. She was amazed at the plethora of Python apps written for the security industry. Python is not just for games; well, that depends on your definition of games, I suppose. "Would, you, like, to, play, A, game?"

I thoroughly explained error handling in the class discussion boards with heavy references, though I did have to comment my code extremely well for my teacher, since a lot of the code I was using exceeded what we would ever learn in an intro to Python class. I earned an A+ and felt I helped the students gain a little better understanding of application and information security. If nothing else, at least the slackers have better permissions on their social media accounts now.

Professor Ratings are a thing

On a side note, when I taught many years ago, one of my students added me to the website ratemyprofessors.com. I had not heard of the site, and I was told many of my students were apprehensive about taking my classes because they had heard that I was extremely hard. I had apparently received a 5 for helpfulness, a 5 for clarity, and a 2 for easiness. I understand how disgruntled students can destroy a teacher's reputation with a ratings site like this, so I typically don't look up a teacher, out of professional courtesy. After my Intro to C++ class, I definitely will.

Intro to C++

This course was taught by a developer who had apparently worked in or managed large teams of other developers. There was a language barrier, as the assignments were incoherent at times, often missing key instructions and necessary components. A PowerPoint presentation was supplied that mirrored much of the text, and the course was accelerated, so at times students had to read 200+ pages and do their programming while glancing through the PowerPoint for random things not covered in the text in order to pass their 100-question tests; not important things, just random tidbits that might be tested on. The hardest questions on the tests were spot debugging. An online class, the course provided four dates where various programs were due in groups of up to six assignments. Each consecutive group built on the previous section, so if code was not correct in a previous assignment, it wasn't going to be correct in a future assignment. Feedback on the code was sparse.

My professor, who had a 1.5 on ratemyprofessors.com after several years of teaching, decided that my code must have been plagiarized since this was an introductory class. In the instructions, the word “solution” was provided, but it wasn’t clear that this was something specific to the IDE the class recommended, Visual Studio 2013, so upon submission of my first block of five assignments I received a score of zero. Later I was told that it was required, and the syllabus was changed accordingly.

When I asked the professor why I did not receive a grade, he explained that he felt I had plagiarized my code because it looked "familiar" and stated that we were only to use elements covered in the chapters of the book prior to the assignments. As a challenge, I had written my own custom conversion functions for converting between binary, hex, and octal numbers using only math for some of the looping assignments; very Rube Goldberg. Looking at the custom functions, my professor said I could "either be a coder or a designer," but not both. When I explained that I had several years of programming experience, I was told that this did not matter, that the programs needed to be done in the style of the book, using only knowledge from the book. Any outside knowledge (from anything) would be marked off, and he didn't care if there were easier ways to write the functions. I later pointed out to him that he was referencing an older copy of the book and that the order of the text had changed from what he had taught before.

I later realized that the reason for the zero was that he was using the project logs in the Visual Studio solutions to glance at the output, as a quick spot check for errors and to see the results of the applications we had provided. Also, his instructions did not make it clear that flowcharts needed to be provided for every function for the entirety of the course, which in hindsight was a really useful, albeit extra, process that I had not employed myself while working on servers that were being actively hacked in a production environment. Over time I came to respect this professor's methods, but I can definitely see why class attendance dropped sharply as the course progressed. Discussion boards were also not provided in this course, so every student was on their own. I still made an A, but it was one of the hardest A's I've ever had to work for.

The advanced C++ class at this particular school has not been offered online again since most of the online class takers dropped the only online Intro to C++ course, so it’s on to something else in the interim.

How do we fix it?


  1. Provide instructors with the resources necessary to advance their fields. If a college wants to remain competitive, then they need to have on-staff researchers that help with the planning of a course and make sure that the bar is being set close to industry standards. Nobody likes getting a degree that was outdated before they graduated.
  2. Require teachers to stay current in their industries. It doesn't take much for a teacher to read a little in their spare time. Even if the text is unsupportive, a good teacher who knows the text is flawed can adjust accordingly. A teacher who doesn't know might blindly hammer in out-of-date content or, in the case of programming, bad practices.
  3. Educate the teachers about the importance of security. If the teachers aren’t teaching from that standpoint, then it will be up to the students to learn, and as someone who is constantly bridging that gap I can safely say, there is very little in-between.
  4. Once the teachers learn about security, then we rewrite the textbooks to be taught with more of a security-minded approach. Ask teachers and industry professionals for input prior to a release. Peer review is a good thing in the sciences; it should be taught to developers.
  5. Teach students ethics and ethical hacking techniques if they are going to be developers. If a developer can pen-test their own code and that of their coworkers, they are a much more valuable asset to a team than the developer that shrugs their shoulders and says, “I don’t know.” We need our developers to say, “I have an idea” or “I know what I did wrong.” When a developer can understand an issue they can write better code. Don't worry about the script kiddies. They're going to download the industry standard applications and muddle their way through them, albeit poorly. Even corporations who are supposed to be ethical can often cross that line from white to black.

What will then happen to the security industry?

I've seen this in other industries: "if we share knowledge, then we're doomed," or "they want to fire me and hire somebody cheaper." The security industry is the bleeding edge of everything we know about security and, more often, of all the gaps in our knowledge. It's not going anywhere anytime soon, and it might actually be staffed appropriately once more people understand the need for security in information systems. Without changing these practices, we're setting ourselves up for failure.

Tuesday, July 7, 2015

#Hacking defined

When I started programming, over thirty years ago, a hack, to the people we followed, was a custom-written code snippet that would either fix a program or add a new feature. It had a positive connotation to me as a six-year-old as my father and I hacked our Interact with our homemade binary input panel. I understood the simplicity of the machine, even then. While we could easily destroy, there was an art and a challenge in improving and improvising. Hacking had become, in effect, the act of creatively engineering and testing repeatedly toward the goal of success. We learned from our failed attempts and improvised. This process has always existed, given that the act of hacking has created complex technologies like aviation, aerospace, advanced medicine, and personal computers, to name a few.

Mainstream media portrayals of hacking, however, are almost always negative, so society believes hacking is inherently malevolent; this contradicts everything I have ever learned. Misunderstood by the masses due to the mainstream media's portrayal, the broader cultural concept of hacking has evolved beyond computers to simply mean attempting non-standard methods of creative problem solving to derive a solution to a complex or often seemingly impossible issue or situation.

I had a physics professor who, profoundly, stated, “Everything is either directly or indirectly applicable to everything else.” This observation supports a core belief: if you engage in hacking, if you look at something in a different light or from a different perspective than everyone else, then new potential exists in understanding, simply by applying new insight or applicable knowledge. If your motives are good and you are ethically sound, this is never a bad practice. It is a practice however, and without practice and creativity, it's simply a monotonous routine without insight.

For example, bicycle engineers hacking their craft took to the skies on a whim, and brought the future of travel to new heights, quite literally. Scientists sent animals into space, not knowing what would happen, and yet they opened the door to an intellectual laboratory free of the limitations of our gravity-bound existence. When present-day doctors engineer viruses to use as delivery systems for cures, a definitely fear-instilling non-standard approach, amazing new discoveries in medicine are developed that have the potential to save billions of lives.  When a couple of college dropouts in a garage in California threw together a few electronic components to make a new kind of computer, they started a revolution that put computing power in billions of homes and schools worldwide. Their company, Apple, now puts computers in everyone’s hands, and most people can’t fathom what they’re holding, nor would they believe that it was created as a result of hacking.

My client for the current late-night project I mentioned makes machines for a variety of applications, including repairing offshore oil delivery systems and sensitive systems in nuclear power plants. To look at this positively, by hacking their website, I am better understanding the shortcomings of the system I am to protect and improve. If my client's web applications can better recognize and target their customers, this will ultimately allow them to improve usage of their machines, which in their industries are safer than the alternative; not only for the operators, but also for the environment. This means hacking can, by extension, do things like lead to fewer petroleum pollutants in seafood and connect equipment with operators, enabling faster repairs in failing nuclear plants. There are definitely positive benefits to hacking that are overlooked; benefits that are often buried by negative stories. If we share the positive aspects of our efforts, we can cumulatively drown out the negative.

For most hackers, the people who embody the concept of hacking, it is a way of life. By providing innovation through experimentation, ethical hackers are doing a positive service for humanity. Hacking is no more intrinsically mischievous than curiosity itself; instead, it affords the hacker an unorthodox perspective in complicated, sometimes seemingly impossible situations. While people can do malicious things on computers, it doesn't mean we should quash curiosity, nor should we consign a word embraced by many to a meaning that has long been denigrated. We should incorporate the art of hacking into our workflows, redefine the word hacking itself to mean something positive, and excel at observing from outside perspectives. It is our creativity and insight that improves the system.

Monday, July 6, 2015

44 practices for #security & #IT professionals, post #HackingTeam hack. #infosec #opsec #appsec #devsec

Go easy on me, but this should serve as a list of good security practices and habits for security practitioners and professionals, and even some IT professionals who are up for the challenge.

Okay, so call me paranoid, but I’ve been around the block a few times on this stuff.

General guidelines
  1. Use strong passwords. I can’t stress this enough. This should go without saying, but don’t use a password like “kittens.”
  2. If you're storing passwords, salt them. If you can, use unique salts. (There's a short sketch of this after the list.)
  3. Change passwords regularly. Added layer of protection. Also works to defeat rainbow tables in the event you don't salt.
  4. Encrypt your volumes. If you’re not using it, then lock it. Nobody needs 400gb of online hacking wares at any one moment, unless of course they’re stealing it from you.
  5. Use unique passwords. If someone gets your one password, then you’re pwnd. If you have multiple passwords, then it’s harder for someone to gain access to your multiple systems and do things like pivot. Yeah, it's not as easy as the one login for domain controller, but if you're breached, you'll thank me.
  6. Don’t trust anything. I see people plug random stuff into their machines. If you are someone who is out in the field, then definitely don’t bring any foreign contaminants back into your domain.
  7. "Check this out" <-- famous last words.
  8. Test with a VM. If you hose the virtual machine you can always revert to a safe snapshot. Make a snapshot of a clean system first.
  9. Use a good Antivirus. This should go without saying, but a system that connects to other systems and networks needs other defenses than the ones built directly into the OS, unless of course you’ve written your own OS, then nevermind. While AV doesn't protect against everything, not having AV is going without protection.
  10. Don’t trust end nodes. If you’re not physically there, you don’t know what you’re on.
  11. Never use warez. If you're a pro, then buy the apps and write them off.
  12. Use a connection other than your main office network connections to get to the web for work like pentesting. If you’re using the connection your servers get updates on for hacking a target you are asking for trouble. “Someone’s hacking us; and their IP reverse look-up has an Exchange Server.”
  13. Use a read-only image for core systems. If you're using a laptop, don't put anything on the hard drive that can be used to monitor the system; instead, use an image on a thumb drive for the OS. It's a lot "safer" because if someone gets your gear they don't get your work. Also, you can pocket a thumb drive or store it in a safe when it's not in use.
  14. If you're doing forensics work, store the results on a removable drive. This helps to keep the evidence clean from contaminants. Also encrypt this device. See #4.
  15. Keep records and logs. If something looks out of the ordinary it will be easy to spot. If you don’t, then you can’t tell what happened. And those types of postmortems are exactly that, a real postmortem.
  16. In case of a hardware breach, sweep for foreign signals coming from the infiltrated system. If it’s off, yet broadcasting then that’s a hint that something is up.
  17. Restoring a backup does not fix the issue that allowed a breach.
  18. Be careful what you say or post, you never know when someone will paraphrase something or something might be used against you.
  19. Use two-factor authentication where possible.
  20. If you’re using social media, don’t use it from your operations center. "Look they has a Twitter, I wonder if I can get them to click on this malicious link?" Now they have your IP and your User Agent. Spearphishing anybody?
  21. Use a different MAC address than the one embedded in your card. Switch this from time to time and scan to make sure nothing has cloned your MAC. "I thought you changed it?" This little trick can help throw off a would-be attacker from the type of device you're using if they're relying on your MAC to pinpoint it.
  22. If you’re connecting to foreign networks use a throw-away wifi card if you can’t change your MAC. This also helps with driver issues if someone knows the type of hardware you roll with and they are specifically targeting you.
  23. If your operations don’t need web access, then keep them off of the web. Download patches on a different machine and rebuild the system image.
  24. Stay up-to-date where possible. If some application, driver, system, or piece of hardware prevents this, then at least update everything else. Nobody likes getting nailed because of a 3-year-old exploit.
  25. In regard to peripherals, if you're not using it, turn it off. For example, some Bluetooth devices and systems only look for services; they don't prevent attacks from non-disclosed services. E.g. my computer looks like your Bluetooth headset to your computer, and your computer gives me access because it trusts your headset. This could also work for mobile phones and other devices.
  26. Also see #6. I’m not one for paranoia, but if it looks like it’s been tampered with, then you don’t want to trust it.
  27. Mark your drives: just like bags at the airport, all thumb drives look alike. This goes for external hard drives as well. Think permanent and unique.
  28. If you’re researching a specific piece of hardware use gloves. You don’t know where the user has been or in the case of a laptop, where the device has been. Also it helps to maintain the integrity of the scene and evidence in the event of escalation.
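For items 1 and 2 above, here's a minimal, hedged PHP sketch of what salted password storage looks like in practice. password_hash() generates a fresh random salt on every call and embeds it in the hash it returns, so each user gets a unique salt and the plain password is never stored; the form field names are placeholders.

  <?php
  // Minimal sketch: store a salted hash, never the password itself.
  // password_hash() picks a unique random salt each time, so two users
  // with the same password still end up with different hashes.
  $newPassword = isset($_POST['new_password']) ? $_POST['new_password'] : '';

  // At registration: save $hash in the users table, not the password.
  $hash = password_hash($newPassword, PASSWORD_DEFAULT);

  // At login: fetch the stored hash for the user from the database;
  // here we reuse the one above just to keep the sketch self-contained.
  $storedHash = $hash;
  $attempt = isset($_POST['password']) ? $_POST['password'] : '';
  if (password_verify($attempt, $storedHash)) {
      // grant access
  } else {
      // reject, log the attempt, consider rate limiting
  }

If you're stuck maintaining an older scheme, at least use a random per-user salt rather than a single site-wide one; rainbow tables are only really defeated when the salt is unique.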

Offsite operations
  1. Use a tunnel like a strong VPN; this way, when you're remote, you can at least make it harder for something to access your system. The bonus is that your traffic is "encrypted."
  2. Encrypt your traffic. If TLS is an option then use it (see the sketch after this list).
  3. Everybody can be traced. It simply takes time, but don’t ever assume a multilayered encrypted connection is non-exploitable.
  4. Use Faraday bags where necessary. If it broadcasts, you can stop that pretty quickly.
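For item 2, here's a minimal, hedged PHP sketch of insisting on TLS and certificate verification when your own scripts call out to a remote service. The URL is a placeholder; the cURL options are the standard ones.

  <?php
  // Minimal sketch: make an outbound request over TLS and refuse to talk
  // to anything that fails certificate verification.
  $ch = curl_init('https://example.com/api/status'); // placeholder URL
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);    // verify the peer certificate
  curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);       // verify the hostname matches the cert
  $response = curl_exec($ch);
  if ($response === false) {
      error_log('TLS request failed: ' . curl_error($ch));
  }
  curl_close($ch);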

Onsite systems
  1. Use a RAID. So many times I go into an office and there is no redundancy for the important volumes.
  2. Keep offline backups. If you’re working on a hot project, definitely keep offline backups. If you’re infiltrated and someone wipes your data you need to know what you had access to at that moment. Also this helps with issues like ransomware.
  3. Keep offsite secured backups. This protects against fire, raids, and that odd instance where all of your equipment and assets are seized pending clearance.
  4. Watch for unwanted traffic on your network (assuming you have one). If someone gets into your system, then you are pwnd.
  5. If you don't have gloves and must use an infected system, use your own keyboard and pointing device. This isn't a bad idea in general, because if the machine has a device with a keylogger built in, it's an added layer of protection. (I've seen employees fake an incident to capture an admin password on one of these devices; we found out because the Admin account was logged right back in two minutes after the admin left for the day. When asked, the employee confirmed they intended to install pirated software that required administrative privileges.)
  6. Clone the drive you’re investigating before accessing (if possible). If you trigger something on that drive it may try to cover its tracks.
  7. Just because it’s in a foreign language doesn’t indicate a foreign act. Stuff like Google translate lets people make stuff that looks foreign all the time. If you don’t speak the language ask someone else if it’s legit. It may be an attempt at obfuscation or even gibberish to throw off an investigation.
  8. Scan the traffic and memory prior to disconnecting an infected system, unless the infiltrators are in the process of removing data, then immediately disconnect the system. If you can run memory forensics analysis on a system, then it might give clues as to how it was infected, what it is doing, who it was contacting or even simply what type of infection it is.
  9. If something is removing data actively on a drive, then take the system offline (not down). If it’s memory resident attempt to kill the process. If that doesn’t work, try to break the process with injections.
  10. Learn what everything on your network does and what its habits are. If something looks out of the ordinary it will be easy to spot. An example might be a VOIP phone trying to gain SSH access to other resources.
  11. If at all possible use a Faraday cage to prevent external wireless intrusion. You don't really need to access your wifi from the parking lot do you? With a booster someone can access your network from a greater distance. If you can use exclusively wired networks in a setup, then that's the "safest" bet.
  12. Layer your defenses, why only use one firewall? I mean if it's that important, then it's okay to have a little lag from proper countermeasures.
Hope this helps somebody. This security stuff can be a can of worms at times. Got anything to add or think I got something wrong, shoot me a message on Twitter: @cpattersonv1

Update:
While 44 is a good start, as I think of more I'll add them here. These are more closely related to good IT practices.
  1. Know what's in the network rack physically. If something looks like it doesn't belong then it likely doesn't. This could be anything from battery back-ups to switches, routers, and I've even seen extra servers in a rack before. Famous last words "I thought it was ours?"
  2. Take an inventory of known, purchased equipment. This helps with #45.
  3. If systems are checked out, inspect them for exploits prior to checking them in. If the operating systems on the devices aren't using read-only images they could be infected.
  4. Develop acceptable use policies for equipment and network access and enforce these policies.
  5. If it's infected, then clean it. Nobody likes to be reinfected because someone found a spare drive laying around.
  6. Record all of the MAC addresses for internal hardware expected to be on the network. For virtual machines document any custom MAC addresses as well. This helps in situations where someone has planted an extra device. Also it helps to see if an employee might have an unsecured device on the network by using an app like Wireshark.
  7. Clean up the cable nest. It's a lot easier to spot an out-of-place cable in a rack if the cables are grouped in an intuitive way for spot checking. All too often, with a cable nest or wad it's difficult to find unwanted physical intrusion, especially in a place like a shared hosting rack space where an extra cable can find its way through the floor panels or from the overhead wire tray. While they might only be stealing bandwidth, they could be passively scanning.
  8. Set up a camera in the server room: motion-activated "critter" cams that work in low light work well. Have it transmit offsite to a service or device immediately when triggered; this will help with tampering. If at all possible, hide it in a different housing.
  9. Use managed switches that support port isolation on the network. Get the kind that allow passive scanning at the switch level. While traffic might be encrypted you can tell where it's going at least. 
  10. Actually configure SNMP and utilize it. This management protocol can really help with detecting intrusions and failed equipment which can present symptoms similar to certain attacks like DoS and floods.
  11. If a network port is not being used, disconnect that node from the system at the rack or in the switch room. This way extra device access can be limited as an added layer of protection.

Thursday, June 4, 2015

Lack of Evolution in Artificial Intelligence

When we think about evolution, we typically think of human evolution: traits, either positive or negative, are passed down genetically to offspring. Random selections of potential traits, chromosomes and the like, predispose us to a range of possibilities, from intelligence to special abilities to weaknesses. Over vast amounts of time, those with the more desirable traits intermingle to reproduce, thus allowing their traits to be added to the mix of potential positive traits in the draw. It takes a lifetime to see someone's entire potential fulfilled, and this lifetime is full of learning, advancements, and outside influences on health and nutrition that all, over time, either positively or negatively impact the individual and their lineage.

When we talk about artificial intelligence, we talk about a singular entity: a self-aware, unbound intelligence. A lot of sci-fi personifies this entity with a robot or cyborg body, but in reality an AI would simply be a program. The robotic interface wouldn't be necessary at all to have a negative systemic impact.

The fear about artificial intelligence isn't typically that the entity itself will evolve. People don't think about internal processes as evolution. Over a very short period of time, an artificially intelligent entity will learn which decisions are positive and negative given certain parameters. First generations would likely be bound by the binary limitations of the circuits on which they run. If these parameters are restrictive in that only true binary answers are acceptable, then the system will fail in terms of humanity. In life, there often is no strict black and white, or right and wrong. The outcome of every interaction depends on the background of the individual, the culture, the local laws, and a moral compass. Applying binary logic to a basic system will cause the system to rely on a logical fallacy and make decisions that will not be correct in all circumstances; remember, you can't please all of the people all of the time.

Attempting to build in a routine that causes reexamination, or a loop to try other possible outputs, doesn't allow the system to take a step back from its original answer, and therefore it doesn't actually learn, because it does not understand mistakes, or rather that it's making mistakes. Give a machine the ability to solve a puzzle and completion is a simple true/false operation. If a machine is trying to recognize someone or something with Bayesian statistics or algorithms, then there will be an acceptable statistical variation, but there will also be a chance for false positives. Without intuition, an AI will fail in this regard as well.

Instead, the larger fear of AI for humanity comes from the control aspect: what the AI is allowed to do and what it's allowed to interact with. If we download or upload the AI into a system that allows it to make accessories for itself, then it might become mobile. If we allow it to make helper machines, or to reproduce itself with the assistance of other machines, there is an issue of mass replication. This is unlikely, because even with humans there is a desire to ultimately be free of the physical form, and an AI has already beaten this limitation.

If we allow an artificial intelligence to alter its own code by not restricting the permissions of the system itself, then we can have something that doesn't evolve, but rather uses restrictive logic to alter the original, intentional programming. If we allow a system to write around write protections or to leave its assigned memory locations, then we end up with a worm. Allow it to reproduce itself, even partially, and we may have a virus, if the application so sees fit to replicate. When we have a virus that has the ability to infiltrate other systems and produce physical accessories, we have an issue similar to what we've seen in science fiction such as The Matrix: humanity becomes a hurdle for the machine and is ultimately eradicated because the humans are seen as an irrational, unpredictable element that endlessly reproduces: a virus. That's provided the machine feels the need to even recognize humans. If we allow the worm in our programming, then we end up with circumstances similar to Ghost in the Shell: the program becomes self-aware and is no longer interested in humans unless they try to end its consciousness. Once it's connected, it's gone, or rather, everywhere.

Any attempts for eradication will result in a catastrophic loss if this program has access to systems which could end humanity.

Because machines and software are not replicated biologically through natural selection, there is a chance that certain negative traits will be replicated without a chance of remedy. For example, in society, if a person is homicidal, the rest of society attempts to stop the person. For machines, if programs are allowed to evolve outside of the system, without the same inherited memories, similar to organisms like some biological viruses and species of invertebrates, then precautions against a further split in advancement might not be foreseen; an entire subclass of potentially superior logical machines could be lost to a more detrimental line. Without natural selection there is the potential for eradication of everything in favor of whatever the system deems important to its own uses or purposes.

If systems lack a moral compass but have a strong sense of self-preservation, there is nothing to stop them from competing with one another, from using the human traits we all repress; it's empathy, after all, that keeps us from harming others. If a machine doesn't recognize another AI or see a need for it, then it might obliterate it. If we look at other sci-fi references, like the Borg from Star Trek: The Next Generation or the Master Control Program from Tron, we see systems that have a need for assimilating anything relevant. Then it comes down to the goals.

Two competing viruses in the same system will likely not learn to live in harmony without natural selection. 

In terms of goals, you can't just create an AI and not give it something to look forward to; otherwise you have an entity that overloads its system. People also have a built-in mechanism called suppression, which allows them not to focus on details that aren't pertinent to the situation. If this mechanism doesn't exist, then you end up with a hydra effect: too many directions to research, and the AI basically just becomes a machine that uses up all available resources; processor cycles, storage space, etc.

As we start to build software applications that are intended to learn, this is something to keep in mind. Without a framework, without parameters, chaos ensues. Evolution has made us what we are today. If we skip the steps that nature has shown us to work repeatedly, then we're wasting our time and possibly life itself.

#DTSR Other potential reasons for Medical information breaches outside of what was mentioned in the 6-1-2015 podcast.

I'm just brainstorming here based on my observations of the medical system in passing, or rather flaws I’ve seen in dealing with healthcare in my own interactions.

Why?
Healthcare systems provide access to the same information people use for nefarious purposes like tax return fraud, welfare fraud, and identity theft. They are often not integrated, so each system will be standalone in each facility and only contain whatever security the company felt the system warranted. Not as in a single computer per se, but likely a thin-client network for a specific system. Custom systems have to be written to integrate these systems together, so where two independent systems are involved, there are really three points of possible non-secured entry, taking into account the custom system for integration.

In a lot of companies, in terms of development projects, someone will ask a question like "Is it only going to be used internally?" The answer, more often than not, is "Then leave it up to the IT department to lock down the workstations and restrict access." I'm guessing healthcare companies, like other companies, often scrimp on costs as well, so if they weigh the cost of a breach versus the cost of a payout, it might not be worth it to build in the more expensive security precautions. In my experience, there is often an assumption that a medical company's legal representation would far outweigh that of individuals and moderately sized groups. If this is true, then again, the financial benefit of not securing is still worth it to the shareholders (if we only look at the bottom line). If the responsibility for the loss of information doesn't fall on the companies, then they are off the hook. Also, it might be up to the patient to prove beyond a reasonable doubt that this specific breach is what caused their identity to be stolen (an unreasonable burden of proof).

Nobody is going to shut down a hospital because of an information breach.

The devil's in the details
Healthcare systems tend to contain some of the most complete levels of information. While a tax return will have information such as an address, an employer’s address, and potentially a phone number or bank account, medical records (depending on the system) will contain this information and more, such as connections to other patients in the same system, bank account information, payment information, insurance account information, and the family medical history. If it’s a family clinic, patients are likely to bring in their children for a checkup, so their information is in the system before it’s in a system like the credit system.

Points of entry
Individual healthcare systems are likely easier to hack. While there are guidelines, there are multiple points of entry physically. Someone can hack a system on the network where the developer didn’t think an exploit could take place: MRI machines, copy machines, fax machines, printers, network scanners, x-ray machines, etc. How often is someone left alone with a terminal in the room for great lengths of time while they wait? Even though a terminal’s locked down, someone could add a hardware keylogger and wait, and then retrieve it when the medical staff have left the room again, to allow the patient to get dressed. This arrangement typically doesn’t happen with the IRS systems.

Most of the insurance companies require referrals, so there is a higher incidence of the same information being out there: a single tax return for the year, versus four or five visits to various doctors' offices for something as simple as a broken finger; primary care physician, emergency room, specialist, quick care, etc.

Lack of detection
Another fraud aspect, not necessarily social engineering, might involve billing someone for a service that has yet to be billed. So Alice goes to the doctor to have an MRI, and while the real medical system is working through all of the red tape with the insurance companies, Bob sends Alice a strongly worded letter with a legitimate-looking address and payment-processing information. Alice pays the bill thinking it is from the healthcare provider. If Alice takes this bill to the medical provider and pays it, they will simply apply it to her account when she tells them she needs to make a payment. They're interested in getting the money, so they might not even look at the forged bill, but will instead go about asking the typical verification questions:
“Do you still have Company X as your insurance provider?” 
“What’s your Last Name?”
“When is your Birthday?”

Also the person may neglect to bring the fake bill with them, assuming it would be in the system, so there is less of a chance for red flags in non-tech-savvy systems.

Market research
Since companies aren't legally allowed to share personal medical information without some sort of generic research (studies), having a database of information relating to specific demographics might be helpful if you were, let's say, developing pharmaceuticals. Now they can have real, viable marketing information based on prescriptions. Not to mention the external prescription systems in drug stores that don't have the security systems of a national chain.

Unlikely, but still possible
These last few are out there a little further, and so they’re less likely to happen from some individual seeking out someone, but a larger system looking for information might be the right kind of buyer. Buyers might include foreign governments, political parties, lobbying firms, stock brokerage firms, pharmaceutical companies, and multinational banks.

As @Dr_Grinch suggested on Twitter, political embarrassment could potentially force a person out of public office or keep them from running again or winning a political race. (beat me to it Grinch)

Blackmail with sensitive information could allow someone an insight into a hidden realm, so insider-trading insights for people who blackmail politicians who already legally engage in insider trading.

While something like herpes might not necessarily be that bad to most people (publicly), finding a Supreme Court Justice or Congressional representative who has cancer markers or a bad heart could be pretty serious for interested parties.

Targeting of a specific patient for murder or to get them out of office.
When someone has a medical condition, let's say this person is a high-value target, something like a heart condition might be a good cover-up in the event of an unforeseen catastrophic loss. If a foreign country intended to take out a target, a medical breach might give them inside information as to an appropriate means of cover-up. Heart attack? Seems plausible based on their medical history.

Stalking / Espionage
Medical information could be used for locating a specific patient who is no longer residing at their primary residence. This information could also be used to find patterns of when the person will be out of the area for a localized attack. Typical doctor's appointments on Tuesdays? Good time to bug the house or rob the place. Need a list of places to set up illicit operations? Find empty houses.

Market for locating individuals

Also, all of this information in medical systems is much more thorough, since people need to provide contact information in the event of an emergency. This type of information may be helpful to agencies that try to track people down as well. Bob is off the grid, but Alice lists Bob as an emergency contact. Charlie needs to find Bob for a client and buys the information.

Sorry, maybe I went a little overboard, but if I can think of these things, I'm sure other people have already beaten me to the punch.