Tuesday, July 14, 2015

My experience with #Security and the College Education System

I recently decided to go back to school for an advanced C++ certificate at the college level. I've been programming for a very long time and have taught hands-on college classes on computers to students with no prior computing experience. I took the online route. If you're a technologically savvy person, online courses in technology can be a relatively easy path; for someone without the discipline to learn on their own, though, they tend to end in failure and feelings of inadequacy. That was not my experience, and here is why. The college system is flawed: over-structured in many ways and limited in all of the wrong places. Funding flows to the wrong places as well, so education takes a back seat to things like sports or recruiting more students. If an instructor has a high retention rate, they're rewarded whether they are a good instructor or not. From the perspective of a faculty member, I can say it's all about the money, and once students make it past the add-drop refund date, what happens really doesn't matter unless the school has a reputation to uphold. It is therefore up to individual instructors to uphold the image and actually teach their students something meaningful. In online classes this rarely happens.

When you teach classes, online or not, the best students are typically the ones asking questions. That is not my habit; instructors are likely very busy, like everyone else, so as a professional researcher I turn to whatever means I have available to supplement my learning. Instructors tend to assume students have all of the time in the world. When I am a student and there is a discussion board, I will typically answer other students' questions openly and honestly (where appropriate). Having grown accustomed to the Stack Exchange network, I find technical interaction painless, especially in terms of citing my references. I don't normally tell my instructors that I have taught courses unless it comes up, because it tends to make them nervous, and this way I get to really know their teaching style. It also removes any liability on their part if they decide to cut me slack for some strange reason. At the end of a course, when the surveys come around, I will leave a few pages of tips rather than dropping a zero-day in class, and I tactfully try to bring up any issues I found in the course. "The ideal student"? Really, it's more formulaic than that.

Security

While some courses briefly touch on security, unless someone specifically takes a security course for a specific field, like computer forensics, chances are they will have no idea about the security required to make applications "safer," much less phishing, social engineering, or physical security. Colleges and accredited universities will likely not teach "hacking" as a course because of the ethical liability of teaching people the skills to break systems and applications. Developing applications specifically for things like penetration testing is usually outside the scope of a college class. It would also be an added risk for the teaching environment, as students would likely test their prowess on the institution's network, servers, and devices; at state schools this can have severe legal ramifications, and may even become a federal offense if the school can't control its programs. These excuses are essentially misguided, though: if we teach security, we get more secure environments. Most Digital Forensics degree programs already require some sort of Ethics of Information Security training, and that kind of training, which should be foundational in all tech sectors, would also allow people to identify unethical practices more clearly. Without knowledge of the potential failures, we are creating a system of ignorant bliss and horror.

The textbooks

Textbooks take time to write and are typically written from the vantage point of only one or two writers. The texts are then heavily edited for content. Often the writers are part-time developers, tech professionals, or, in the case of several books from big-name textbook publishers, college professors with 20+ years of teaching experience. New textbooks are often born of a teacher's own need for an updated text, which is a lot of extra work for a full-time instructor. After a course is over, the instructor needs to review their notes, study the industry for changes, and adjust accordingly. That may leave only a couple of weeks of preparation time if they teach back-to-back courses, teach at multiple schools, or teach over the summer holiday. Typically I would look at my notes from the previous semester for what I got wrong (yes, educators make mistakes too) or what I could have done better, then adjust my curriculum for the upcoming semester while trying to incorporate new changes from the industry and from the software vendors. I was not writing a text on top of that, though.

When educators who aren't staying current also write books, the books end up several generations behind. An author writing a book in 2015, drawing on their 2012 experiences based on 2005-ish real-world work, produces a text that will not be published until 2016, after edits, liability review, and peer review. So students are, in some cases, roughly 10 years behind. Follow Twitter for an afternoon and you quickly realize this is a seemingly impossible gap to close in regard to bleeding-edge security. Security changes daily, and by the time a student has entered a class, unless the professor is savvy enough to keep up with current affairs, the textbook becomes an incomplete crutch.

Terminology used in a course depends on the text being referenced, and that text was likely written by a tightly knit group of individuals who shared a single localized lexicon or spoke in roughly the same terms (e.g. coworkers); they all think the same way and use a word to mean exactly one thing, without seeing the other possible definitions of the same word, or outside approaches. Other words mean the very same thing outside those groups: phishing (the act) versus social engineering (the concept), for instance. If a textbook writer learned C first, they might write about programming using more rigid constructs and might not be familiar with advanced ideas in an extended language like C++; if they started with C++11, the focus may lean that way, and anything prior may be something they never learned. If the authors feel the more rigid languages are important, they may discourage the use of built-in constructs and require students to develop the functions on their own. While this is helpful for learning, most students go straight into the workforce, and not knowing efficient ways to program can be an issue. In a management role I would often find myself questioning a developer's use of multiple constructs when a built-in function already existed in the system; it's like reinventing the wheel over and over.
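To make that last point concrete, here is a toy Python illustration (a hypothetical example of mine, not from any of the texts discussed): the hand-rolled loop works, but the built-in gets the same answer with less code to review.

```python
# Toy example: counting vowels two ways.

def count_vowels_manual(text):
    """Hand-rolled loop: the 'rigid constructs' approach."""
    count = 0
    for ch in text:
        if ch in "aeiouAEIOU":
            count += 1
    return count

def count_vowels_builtin(text):
    """Leaning on the built-in sum(): same result, less to review."""
    return sum(ch in "aeiouAEIOU" for ch in text)

print(count_vowels_manual("Reinventing the wheel"))   # 7
print(count_vowels_builtin("Reinventing the wheel"))  # 7
```

Neither version is wrong; the issue is a developer (or a textbook) that only ever shows the first form.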

Using non-textbooks

Although quite a few great non-textbooks have been written on different aspects of security, most first-time students will likely have a hard time learning from, say, a memory forensics book right off the shelf. You can see this in book reviews where people assumed it was something they could do immediately; unlike college, the prerequisites are rarely spelled out. A non-textbook may be written by an outlier with a singular, specialized view of a particular methodology, and without a system of collegiate review, a book may not cite references well either. To be fair, this is not the case with about 80% of the tech references on the market, but every once in a while someone writes a book that is completely bad, wrong, or outdated. When a professor uses a non-textbook, and I've done this myself, it's up to them to develop all of the added content and course material that goes along with the text, to fully understand the topic, and to verify that the book is accurate. That's provided the text doesn't have links and external resources; many do, but it largely depends on the course. I stopped using non-textbooks in my classes because, more often than not, I would tell my students to rip out or cross out pages of the text that were inaccurate. It became more of a book-editing class, and it was much easier to simply use my own notes and presentations.

If you write the non-textbook yourself, the course materials aren't there, but there is less overhead in terms of overhaul, so if students are already required to learn on their own to an extent, this may be easier, field-dependent of course. The teacher must understand the text, though, in order to gauge whether a student grasps the concepts being taught. If the teacher does not understand the text, a non-textbook leaves the student with questions that can only be answered by professionals who have read the text, or by the original author.

Methodology

In many of my programming classes there was a heavy need to document, from beginning to end. The instructors did not expect usefulness from an end-user standpoint, but there was much emphasis on things that made the work easier to grade. When you have twenty students in an accelerated class, cutting corners tends to be the norm for overloaded instructors. Knowing this made receiving my A's easy, but I felt I didn't get everything from the courses that I could have. We did not cover the last three chapters in the C++ course, for instance, because the pseudocode and flowcharting requirements left us out of time. While I knew the content, having a professor give me pointers or validation was something that made the experience rewarding. There were also no specified expectations for the flowcharts, so in some instances I simply drew a short chart that loosely defined the context of the application; in others I was very specific, showing the entire process, or the redundancy in a loop for instance. The teachers were so busy they simply provided no feedback whatsoever as long as the students appeared to go through the motions. As an engineer and development team manager, I find this scary in terms of consistency. In the real world there should be expectations for documentation, and while the texts briefly mentioned this, the level of documentation in the examples was largely inconsistent. In the deeper applications the author would often jump across twenty pages for a reference to an existing function, so the textbook has its limitations.

Determination

Educators typically do not have time to stay current in their field if they're full-time or tenured instructors. Make them part-time and the drive may dwindle, since the bar is likely set only at becoming full-time, or at something outside their teaching career altogether. In my experience, about 3 out of 10 instructors embrace education; everyone else is doing it for the money. Whether it's a personal decision to have a life, a lazy effort, or a lack of time depends on the field, but people go off in their own individual directions and rarely step back from a granular view. Two different sections of the same course taught by different instructors can have very different outcomes in terms of quality, knowledge gained, and hands-on experience. My C++ class was all about flowcharts, which is great if you're the manager or lead developer. In terms of coding, it was horrible.


Here are my experiences taking online classes as a student developer with over 30 years of programming experience.


Introduction to Computers

I did not like this class at all and found several mistakes (16 disputes) in the online interactive text, the printed text, and the online interactive tests. My teacher was great and very understanding. The test site would respond with phrases like "165 other people have answered this problem without issue" [what's your problem?]. Obviously, if they blindly obeyed their text, then of course they did. We were required to exhibit proficiency in Microsoft Office applications for assignments in a poorly written, unforgiving, web-based Office emulator that only worked on Windows. It also leaned heavily on Flash and JavaScript-ish code, and while there are 50 ways to do just about everything in most applications, the testing platform was programmed to accept only a single method, despite my experience with Word since 1983.

The text we used, Computing Essentials 2014, was written around 2009 "standards," and apparently the writers only used Microsoft systems, since words like "ribbon" and "Hyper-V" were vocabulary words; there was one paragraph each for Unix and Linux, two different versions of OS X were each mentioned in their own respective paragraph, and everything else was all about Windows.

Some of the text that referenced older technologies was spot-on (e.g. compact discs), while newer technologies (the Internet) seemed to elude the writers. This can happen when someone writes a book in 2009 (or earlier), the publisher asks for an updated version, and the writer can't keep up with the technology. The book did mention that not all hackers are bad, though, so there is at least that. Despite all of the hurdles in this class, I managed to miss only a couple of questions during the whole course.

Programming Logic & Design (A.k.a. Intro to Python & Flowcharts)

Students who don't interact yet receive exemplary scores on tests aren't unheard of, but students in introductory Python courses who do type checking and error handling in their code can set off a few red flags for a professor who is not accustomed to seeing that sort of code at the college level. Error checking was briefly mentioned in the text around Chapter 7 in the form of loops, while try/except statements were not discussed in the entirety of the course. I will typically read a text cover-to-cover prior to starting a course, so by the time I need to really study the text it's a refresher.

My instructor and I came to an understanding about my work in research, security, and development. I write custom content management systems, among other things, and understand how error handling in Python works because I have used the language for some time. I also mentioned that I was simply trying to get my advanced C++ certification and this particular course was a requirement I could not CLEP out of; Python being used in the course was an unexpected bonus. My instructor, a developer herself, understood my position, and after seeing some of my other interactions with students realized that I was actually telling the truth and not like every other wannabe script kiddie she'd had in her classes before. She was amazed at the plethora of Python apps written for the security industry. Python is not just for games. Well, that depends on your definition of games, I suppose. "Would, you, like, to, play, A, game?"

I thoroughly explained error handling in the class discussion boards with heavy references, though I really did have to comment my code extremely well for my teacher, since a lot of the code I was using exceeded what we would ever learn in an intro Python class. I earned an A+ and felt I helped the students gain a little better understanding of application and information security. If nothing else, at least the slackers have better permissions on their social media accounts.
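For readers who haven't seen it, the kind of error handling in question looks roughly like this in Python. A minimal sketch of mine (the `read_port` function and its inputs are hypothetical, not actual coursework):

```python
def read_port(raw):
    """Parse a TCP port number from user input, failing safely."""
    try:
        port = int(raw)
    except ValueError:
        # Non-numeric input: return None instead of crashing.
        return None
    if not 0 < port < 65536:
        # Numeric, but not a valid port number.
        return None
    return port

print(read_port("8080"))     # 8080
print(read_port("kittens"))  # None
print(read_port("99999"))    # None
```

Validating input like this is exactly the habit an intro course that never covers try/except fails to build.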

Professor Ratings are a thing

On a side note: when I taught many years ago, one of my students added me to the website ratemyprofessors.com. I had not heard of the site, and was told many of my students were apprehensive about taking my classes because they had heard I was extremely hard. I had apparently received a 5 for helpfulness, a 5 for clarity, and a 2 for easiness. I understand how disgruntled students can really destroy a teacher's reputation on a ratings site like this, so I typically don't look up a teacher, out of professional courtesy. After my Intro to C++ class, I definitely will.

Intro to C++

This course was taught by a developer who had apparently worked in, or managed, large teams of other developers. There was a language barrier: the assignments were incoherent at times, often missing key instructions and necessary components. A supplied PowerPoint presentation mirrored much of the text, and the course was accelerated, so at times students had to read 200+ pages and do their programming while glancing through the PowerPoint for random things not covered in the text in order to pass the 100-question tests; not important things, just random tidbits that might be tested. The hardest questions on the tests were spot-the-bug debugging. As an online class, the course provided four due dates for programs in groups of up to six assignments. Each group built on the previous one, so if code was not correct in an earlier assignment, it wasn't going to be correct in a later one. Feedback on the code was sparse.

My professor, who had a 1.5 on ratemyprofessors.com after several years of teaching, decided that my code must have been plagiarized since this was an introductory class. In the instructions, the word “solution” was provided, but it wasn’t clear that this was something specific to the IDE the class recommended, Visual Studio 2013, so upon submission of my first block of five assignments I received a score of zero. Later I was told that it was required, and the syllabus was changed accordingly.

After asking the professor why I did not receive a grade, he explained that he felt I had plagiarized my code because it looked "familiar," and stated that we were only to use elements covered in the chapters of the book prior to the assignments. As a challenge, I had written my own custom conversion functions for converting between binary, hex, and octal numbers using only math for some of the looping assignments; very Rube Goldberg. Looking at the custom functions, my professor said I could "either be a coder or a designer," but not both. When I explained that I had several years of programming experience, I was told that this did not matter; the programs needed to be done in the style of the book, using only knowledge from the book. Any outside knowledge (from anywhere) would be marked off, and he didn't care if there were easier ways to write the functions. I later pointed out to him that he was referencing an older copy of the book and the order of the text had changed from what he had taught prior.
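The idea behind those conversion functions can be sketched in a few lines. This is Python rather than the C++ used in the course, and it is my own reconstruction of the approach, not the assignment code:

```python
def to_base(n, base):
    """Convert a non-negative integer to a string in the given base,
    using only arithmetic (no bin()/hex()/oct()/format() helpers)."""
    digits = "0123456789abcdef"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        out = digits[n % base] + out  # peel off the lowest digit
        n //= base
    return out

print(to_base(255, 2))   # 11111111
print(to_base(255, 8))   # 377
print(to_base(255, 16))  # ff
```

Repeated division and remainder is all there is to it, which is precisely why insisting on "book style only" for such a function is an odd hill to grade on.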

I later realized the zero was because he was using the project logs in Visual Studio solutions to glance at the output for a quick spot check for errors and to see the results of the applications we had submitted. Also, his instructions did not make clear that flowcharts needed to be provided for every function for the entirety of the course, which in hindsight was a really useful, albeit extra, process I had not employed myself while working on servers that were being actively hacked in a production environment. Over time I came to respect this professor's methods, but I can definitely see why class attendance dropped off sharply as the course progressed. The discussion boards in this course were also not provided, so every student was on their own. I still made an A, but it was one of the hardest A's I've ever had to work for.

The advanced C++ class at this particular school has not been offered online again since most of the online class takers dropped the only online Intro to C++ course, so it’s on to something else in the interim.

How do we fix it?


  1. Provide instructors with the resources necessary to advance their fields. If a college wants to remain competitive, then they need to have on-staff researchers that help with the planning of a course and make sure that the bar is being set close to industry standards. Nobody likes getting a degree that was outdated before they graduated.
  2. Require teachers to stay up on their industries. It doesn’t take much for a teacher to read a little in their spare time. Even if the text is unsupportive, if a good teacher knows the text is flawed they can adjust accordingly. If a teacher doesn't know, they might blindly hammer in out of date content, or in the case of programming, bad practices.
  3. Educate the teachers about the importance of security. If the teachers aren’t teaching from that standpoint, then it will be up to the students to learn, and as someone who is constantly bridging that gap I can safely say, there is very little in-between.
  4. Once the teachers learn about security, we rewrite the textbooks with more of a security-minded approach. Ask teachers and industry professionals for input prior to a release. Peer review is a good thing in the sciences; it should be taught to developers.
  5. Teach students ethics and ethical hacking techniques if they are going to be developers. A developer who can pen-test their own code and that of their coworkers is a much more valuable asset to a team than the developer who shrugs their shoulders and says, "I don't know." We need our developers to say, "I have an idea" or "I know what I did wrong." When developers understand an issue they can write better code. Don't worry about the script kiddies; they're going to download the industry-standard applications and muddle their way through them, albeit poorly. Even corporations that are supposed to be ethical can often cross the line from white to black.

What will then happen to the security industry?

I've seen this in other industries: "if we share knowledge, then we're doomed," or "they want to fire me and hire somebody cheaper." The security industry is the bleeding edge of everything we know about security, and more often, of all the gaps in our knowledge. It's not going anywhere anytime soon, and it might actually be staffed appropriately once more people understand the need for security in information systems. Without changing these practices we're setting ourselves up for failure.

Tuesday, July 7, 2015

#Hacking defined

When I started programming, over thirty years ago, a hack, to the people we followed, was a custom-written code snippet that would either fix a program or add a new feature. It had a positive connotation to me as a six-year-old as my father and I hacked our Interact with our homemade binary input panel. I understood the simplicity of the machine, even then. While we could easily destroy, there was an art and a challenge in improving and improvising. Hacking was, in effect, the act of creatively engineering and testing repeatedly toward the goal of success. We learned from our failed attempts and improvised. This process has always existed; the act of hacking has given us complex technologies like aviation, aerospace, advanced medicine, and personal computers, to name a few.

Mainstream media portrayals of hacking, however, are almost always negative, so society believes hacking is inherently malevolent; this contradicts everything I have ever learned. Misunderstood by the masses because of that portrayal, the widely felt cultural concept of hacking has evolved beyond computers to simply mean attempting non-standard methods of creative problem solving to derive a solution to a complex, often seemingly impossible issue or situation.

I had a physics professor who, profoundly, stated, “Everything is either directly or indirectly applicable to everything else.” This observation supports a core belief: if you engage in hacking, if you look at something in a different light or from a different perspective than everyone else, then new potential exists in understanding, simply by applying new insight or applicable knowledge. If your motives are good and you are ethically sound, this is never a bad practice. It is a practice however, and without practice and creativity, it's simply a monotonous routine without insight.

For example, bicycle engineers hacking their craft took to the skies on a whim, and brought the future of travel to new heights, quite literally. Scientists sent animals into space, not knowing what would happen, and yet they opened the door to an intellectual laboratory free of the limitations of our gravity-bound existence. When present-day doctors engineer viruses to use as delivery systems for cures, a definitely fear-instilling non-standard approach, amazing new discoveries in medicine are developed that have the potential to save billions of lives.  When a couple of college dropouts in a garage in California threw together a few electronic components to make a new kind of computer, they started a revolution that put computing power in billions of homes and schools worldwide. Their company, Apple, now puts computers in everyone’s hands, and most people can’t fathom what they’re holding, nor would they believe that it was created as a result of hacking.

My client for the current late-night project I mentioned makes machines for a variety of applications, including repairing offshore oil delivery systems and sensitive systems in nuclear power plants. To look at this positively: by hacking their website, I am better understanding the shortcomings of the system I am to protect and improve. If my client's web applications can better recognize and target their customers, they can ultimately improve usage of their machines, which in these industries are safer than the alternative, not only for the operators but also for the environment. This means hacking can, by extension, do things like lead to fewer petroleum pollutants in seafood, or connect equipment with operators to enable faster repairs in failing nuclear plants. There are definitely positive benefits to hacking that are overlooked; benefits that are often buried by negative stories. If we share the positive aspects of our efforts, we can cumulatively drown out the negative.

For most hackers, people who embody the concept of hacking, it is a way of life. By providing innovation through experimentation, ethical hackers are doing a positive service for humanity. Hacking is no more intrinsically mischievous than curiosity itself; instead, it affords the hacker an unorthodox perspective in complicated, sometimes seemingly impossible situations. While people can do malicious things on computers, that doesn't mean we should quash curiosity, nor should we resign a word embraced by many to a meaning that has long been denigrated. We should incorporate the art of hacking into our workflows, redefine the word hacking itself to mean something positive, and excel at observing from outside perspectives. It is our creativity and insight that improve the system.

Monday, July 6, 2015

44 practices for #security & #IT professionals, post #HackingTeam hack. #infosec #opsec #appsec #devsec

Go easy on me, but this should serve as a list of good security practices and habits for security practitioners and professionals, and even some IT professionals who are up for the challenge.

Okay, so call me paranoid, but I’ve been around the block a few times on this stuff.

General guidelines
  1. Use strong passwords. I can’t stress this enough. This should go without saying, but don’t use a password like “kittens.”
  2. If you're storing passwords, salt them. If you can, use unique salts.
  3. Change passwords regularly. Added layer of protection. Also works to defeat rainbow tables in the event you don't salt.
  4. Encrypt your volumes. If you're not using it, then lock it. Nobody needs 400GB of online hacking wares at any one moment, unless of course they're stealing it from you.
  5. Use unique passwords. If someone gets your one password, then you’re pwnd. If you have multiple passwords, then it’s harder for someone to gain access to your multiple systems and do things like pivot. Yeah, it's not as easy as the one login for domain controller, but if you're breached, you'll thank me.
  6. Don’t trust anything. I see people plug random stuff into their machines. If you are someone who is out in the field, then definitely don’t bring any foreign contaminants back into your domain.
  7. "Check this out" <-- famous last words.
  8. Test in a VM. If you hose the virtual machine you can always revert to a safe snapshot. Make a snapshot of a clean system first.
  9. Use a good antivirus. This should go without saying, but a system that connects to other systems and networks needs defenses beyond the ones built directly into the OS (unless of course you've written your own OS, in which case never mind). While AV doesn't protect against everything, not having AV is going without protection.
  10. Don’t trust end nodes. If you’re not physically there, you don’t know what you’re on.
  11. Never use warez. If you're a pro, then buy the apps and write them off.
  12. Use a connection other than your main office network connections to get to the web for work like pentesting. If you’re using the connection your servers get updates on for hacking a target you are asking for trouble. “Someone’s hacking us; and their IP reverse look-up has an Exchange Server.”
  13. Use a read-only image for core systems. If you’re using a laptop, don’t put anything on the harddrive that can be used to monitor the system, instead use an image on a thumbdrive for the OS. It’s a lot “safer” because if someone gets your gear they don’t get your work. Also you can pocket a thumbdrive or store them in a safe when they’re not in use.
  14. If you’re doing forensics work, store the results on removable drive. This helps to keep the evidence clean from contaminants. Also encrypt this device. See #4.
  15. Keep records and logs. If something looks out of the ordinary it will be easy to spot. If you don’t, then you can’t tell what happened. And those types of postmortems are exactly that, a real postmortem.
  16. In case of a hardware breach, sweep for foreign signals coming from the infiltrated system. If it’s off, yet broadcasting then that’s a hint that something is up.
  17. Restoring a backup does not fix the issue that allowed a breach.
  18. Be careful what you say or post, you never know when someone will paraphrase something or something might be used against you.
  19. Use two-factor authentication where possible.
  20. If you’re using social media, don’t use it from your operations center. "Look they has a Twitter, I wonder if I can get them to click on this malicious link?" Now they have your IP and your User Agent. Spearphishing anybody?
  21. Use a different MAC address than the one embedded in your card. Switch this from time to time and scan to make sure nothing has cloned your MAC. "I thought you changed it?" This little trick can help throw off a would-be attacker's read on the type of device you're using if they're using your MAC to pinpoint you.
  22. If you’re connecting to foreign networks use a throw-away wifi card if you can’t change your MAC. This also helps with driver issues if someone knows the type of hardware you roll with and they are specifically targeting you.
  23. If your operations don’t need web access, then keep them off of the web. Download patches on a different machine and rebuild the system image.
  24. Stay up-to-date where possible. If some application, driver, system, or piece of hardware prevents this, then at least update everything else. Nobody likes getting nailed because of a 3-year-old exploit.
  25. In regard to peripherals: if you’re not using it, turn it off. For example, some Bluetooth devices and systems only look for known services; they don’t prevent attacks from undisclosed services. E.g., my computer looks like your Bluetooth headset to your computer, but your computer gives me access because it trusts your headset. This could also work for mobile phones and other devices.
  26. Also see #6. I’m not one for paranoia, but if it looks like it’s been tampered with, then you don’t want to trust it.
  27. Mark your drives: just like bags at the airport, all thumb drives look alike. This goes for external hard drives as well. Think permanent and unique.
  28. If you’re researching a specific piece of hardware use gloves. You don’t know where the user has been or in the case of a laptop, where the device has been. Also it helps to maintain the integrity of the scene and evidence in the event of escalation.
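Two-factor authentication (#19) usually boils down to TOTP under the hood. Here is a minimal sketch of the algorithm from RFC 6238 using only the Python standard library; the secret shown is the RFC's published test key, not anything you should actually use:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 test vector: the shared test secret at T=59 yields "94287082".
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

The point of walking through it: the second factor is just an HMAC over a shared secret and the clock, so protecting that secret matters as much as protecting a password.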
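For #21, the usual trick is to generate a locally administered MAC (local bit set, multicast bit clear in the first octet) and assign it with your OS's own tooling, such as `ip link set` on Linux. A sketch of just the address generation:

```python
import random

def random_mac() -> str:
    """Generate a random locally administered, unicast MAC address."""
    # Set the locally-administered bit (0x02), clear the multicast bit (0x01).
    first = (random.randrange(256) | 0x02) & 0xFE
    octets = [first] + [random.randrange(256) for _ in range(5)]
    return ":".join(f"{o:02x}" for o in octets)

print(random_mac())  # e.g. "36:a1:0c:7f:e2:19"
```

Because the local bit is set, the generated address can't collide with a manufacturer-burned-in MAC, which also makes it obvious to you (but not necessarily to an attacker) that it's spoofed.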

Offsite operations
  1. Use a tunnel, like a strong VPN; this way, when you’re remote, you at least make it harder for something to access your system. The bonus is that your traffic is "encrypted."
  2. Encrypt your traffic. If TLS is an option then use it.
  3. Everybody can be traced. It simply takes time, but don’t ever assume a multilayered encrypted connection is non-exploitable.
  4. Use Faraday bags where necessary. If it broadcasts, you can put a stop to that pretty quickly.
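On #2 above: Python's `ssl` module already ships sane defaults, since `create_default_context()` turns on certificate verification and hostname checking. A small sketch that also pins the minimum protocol version (the host and port are whatever you're connecting to):

```python
import socket
import ssl

def open_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a verified TLS connection; refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()        # CERT_REQUIRED + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    raw = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(raw, server_hostname=host)

# The defaults alone already reject self-signed and mismatched certificates.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # True True
```

The takeaway is to resist the urge to "fix" certificate errors by disabling verification; that turns your encrypted channel into one a man-in-the-middle can sit on.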

Onsite systems
  1. Use a RAID. So many times I go into an office and there is no redundancy for the important volumes.
  2. Keep offline backups. If you’re working on a hot project, definitely keep offline backups. If you’re infiltrated and someone wipes your data you need to know what you had access to at that moment. Also this helps with issues like ransomware.
  3. Keep offsite secured backups. This protects against fire, raids, and that odd instance where all of your equipment and assets are seized pending clearance.
  4. Watch for unwanted traffic on your network (assuming you have one). If someone gets into your system, then you are pwnd.
  5. If you don’t have gloves and must use an infected system, then use your own keyboard and pointing device. This is also an added layer of protection in case the machine has a device with a keylogger built in. (I've seen an employee fake an incident to capture an admin password on one of these devices; we found out because the admin account was logged right back in two minutes after the admin left for the day. When asked, the employee confirmed they intended to install pirated software that required administrative privileges.)
  6. Clone the drive you’re investigating before accessing (if possible). If you trigger something on that drive it may try to cover its tracks.
  7. Just because it’s in a foreign language doesn’t indicate a foreign act. Tools like Google Translate let people make text that looks foreign all the time. If you don’t speak the language, ask someone who does whether it’s legitimate. It may be an attempt at obfuscation, or even gibberish meant to throw off an investigation.
  8. Scan the traffic and memory prior to disconnecting an infected system, unless the infiltrators are in the process of removing data, then immediately disconnect the system. If you can run memory forensics analysis on a system, then it might give clues as to how it was infected, what it is doing, who it was contacting or even simply what type of infection it is.
  9. If something is removing data actively on a drive, then take the system offline (not down). If it’s memory resident attempt to kill the process. If that doesn’t work, try to break the process with injections.
  10. Learn what everything on your network does and what its habits are. If something looks out of the ordinary it will be easy to spot. An example might be a VOIP phone trying to gain SSH access to other resources.
  11. If at all possible use a Faraday cage to prevent external wireless intrusion. You don't really need to access your wifi from the parking lot do you? With a booster someone can access your network from a greater distance. If you can use exclusively wired networks in a setup, then that's the "safest" bet.
  12. Layer your defenses, why only use one firewall? I mean if it's that important, then it's okay to have a little lag from proper countermeasures.
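One cheap way to back up #2 above, and to spot ransomware-style tampering, is a hash manifest: record a SHA-256 digest per file when you take the backup, then diff a fresh scan against the stored manifest. A sketch, assuming ordinary files under a directory tree:

```python
import hashlib
import os

def build_manifest(root: str) -> dict[str, str]:
    """Map each file (relative path) under root to its SHA-256 hex digest."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def diff_manifests(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    """Report files that vanished, appeared, or changed since the old scan."""
    return {
        "missing": sorted(old.keys() - new.keys()),
        "added": sorted(new.keys() - old.keys()),
        "changed": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }
```

Store the manifest with the offline backup itself; if an intruder wipes or encrypts the live data, the diff tells you exactly what you had and what was touched.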
Hope this helps somebody. This security stuff can be a can of worms at times. Got anything to add, or think I got something wrong? Shoot me a message on Twitter: @cpattersonv1

Update:
While 44 is a good start, I'll add more here as I think of them. These are more closely related to good IT practices.
  1. Know what's in the network rack physically. If something looks like it doesn't belong then it likely doesn't. This could be anything from battery back-ups to switches, routers, and I've even seen extra servers in a rack before. Famous last words "I thought it was ours?"
  2. Take an inventory of known, purchased equipment. This helps with the previous tip.
  3. If systems are checked out, inspect them for exploits prior to checking them in. If the operating systems on the devices aren't using read-only images they could be infected.
  4. Develop acceptable use policies for equipment and network access and enforce these policies.
  5. If it's infected, then clean it. Nobody likes to be reinfected because someone found a spare drive lying around.
  6. Record all of the MAC addresses for internal hardware expected to be on the network. For virtual machines document any custom MAC addresses as well. This helps in situations where someone has planted an extra device. Also it helps to see if an employee might have an unsecured device on the network by using an app like Wireshark.
  7. Clean up the cable nest. It's a lot easier to spot a cable in the rack that's out of place if the cables are grouped in an intuitive way for spot checking. All too often, with a cable nest or wad it's difficult to find unwanted physical intrusion, especially in a place like a shared hosting rack space where an extra cable can find its way through the floor panels or from the overhead wire tray. While they might only be stealing bandwidth, they could be passively scanning.
  8. Set up a camera in the server room: motion-activated "critter" cams that work in low light work well. Have it transmit immediately to an offsite service or device when triggered; this helps in case of tampering. If at all possible, hide it in a different housing.
  9. Use managed switches that support port isolation on the network. Get the kind that allow passive scanning at the switch level. While traffic might be encrypted you can tell where it's going at least. 
  10. Actually configure SNMP and utilize it. This management protocol can really help with detecting intrusions and failed equipment which can present symptoms similar to certain attacks like DoS and floods.
  11. If a network port is not being used, disconnect that node from the system at the rack or in the switch room. This way extra device access can be limited as an added layer of protection.
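Tip #6 above (recording expected MAC addresses) pays off when you diff a live scan against the inventory. A minimal sketch; in practice the observed list would come from something like an `arp -a` dump or a Wireshark export:

```python
def find_unknown_devices(observed: list[str], inventory: list[str]) -> list[str]:
    """Return observed MAC addresses that aren't in the known-hardware inventory."""
    known = {mac.lower() for mac in inventory}
    return sorted({mac.lower() for mac in observed} - known)

inventory = ["00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"]   # purchased gear
observed = ["00:1A:2B:3C:4D:5E", "de:ad:be:ef:00:01"]    # live scan
print(find_unknown_devices(observed, inventory))  # ['de:ad:be:ef:00:01']
```

Normalizing case before comparing matters because different tools report MACs in different capitalizations; a planted device shouldn't slip through on a formatting mismatch.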