This posting was sparked by a few new exploits on the rise: a Java exploit and a couple of Ruby-on-Rails vulnerabilities. I found out about all of them from Dark Reading.
I've been using computers for a very long time (31 years) by technology standards. One thing I've become accustomed to is regular updates and patches to systems, programs, and apps. Sometimes the systems that need to be patched aren't systems people themselves have access to; they may be a web server, a mail server, a programming interface, or even a server-side plugin.
The good
The reason these things need to be patched and fixed isn't because the companies making the patches are making money off of them. It's actually kind of counter to that. It's a huge issue for a company's brand (yes, PR and Marketing) when their software is the main reason most of the Internet or Corporate America goes down. Think of the damage control a company like Microsoft has to do when there is a massive worm spreading around the Internet like CodeRed or the Melissa virus. It's huge. People change platforms; they decide they can no longer trust a company with such glaring vulnerabilities. They "switch." I myself started using Macs simply because I trust Unix far more than I trust Microsoft's ability to protect my system, e-mails, and webpages.
Here's the problem, though: those vulnerabilities usually aren't there because some crazy hacker on a mission has decided to ruin one of these worldwide brands. It's usually the company itself that has someone, either on their payroll or under contract, who provided the initial notification of the exploit (internally or through a provider channel), either because they were working on the code, they crashed their own systems, or they had a hunch and tested their theory. They notify the company, which in turn rolls out a patch... these people are paid to provide this service.
People read everything with their own filter on the world. If they are a good person, when they see a patch they probably think to themselves, "I need to apply this because I don't want any downtime"... but what if the people are bad? Okay, let's not say "good" and "bad," because that's not necessarily the case at all (and part of a larger discussion). Let's say there are users, and then there are those other people who have "too much time on their hands" at the moment. I say this because at one point or another in a white hat hacker's life they have more than likely infected something or spread something by accident. They're not bad people, but if it's uncontrolled it can do just as much damage. Always test on an offline machine if you're going to open Pandora's Box.
The bad
So back to my point about people filtering what they see. Someone who 1.) wants to experiment, 2.) has downtime, and 3.) needs an idea for something to hack has this great expanse of information (the Internet) at their disposal... I know, it's pretty obvious, right? Although [most] people think that hackers all go to secret websites and have a secret handshake, that's really just the people who go to Defcon, or who have friends who are hackers because they do it for a living, or who want to pretend they're hackers. Most of the other hackers I've met, I met by accident, because someone else mentioned that I hacked and then we talked about the level of what we were into.
In my experience, self-proclaimed "hackers" are usually script kiddies (people who use a program or a tool in a way they've read about to purposely cause chaos), so when I'm confronted with questions about what I do, I tend to go the other way and not share what I'm into unless they let me know that they're "cool," a.k.a. not a script kiddie. Just like the branding issues companies have with being exploited, "hackers"... white, black, and gray, all hats... have a branding issue too, because somewhere some [insert expletive here] is writing a virus that will cause harm, and his nametag says the same thing to society that mine does: I'm a creative professional with the means and ability. Society doesn't care whether I would actually do it, or about my moral compass. You have to think like a "bad guy" to outsmart a bad guy... it doesn't make me "bad." But it makes the unknowing populace marvel and wonder (in a bad way).
On with the Internet reference... when I say they have the Internet at their fingertips: they don't need to go to one of the heavily monitored websites for script kiddies or the IRC channels; all they have to do is browse through a company's patches. In the patches that most people install there is usually some bit of information that says what exploit or vulnerability is being patched. Apple doesn't share a lot of detail about this, but Microsoft usually tells you what they're patching if you follow enough links from Windows Update. Java, Ruby, PHP, and most other open-source languages will release it in a bugfix that you can read about. When this happened to Microsoft's brand before, Microsoft had already provided patches for the CodeRed and Melissa exploits long before they were in the wild and running rampant. Most people, however, do not like applying patches because, just like going to the doctor, "If it ain't broke, don't fix it."
I've heard all sorts of reasons why someone shouldn't patch something: "because if they don't know I'm running an older version, I'm safe," or "it might bring down my machine, so I wait a couple of months to test it." Zero-day patches, just like zero-day exploits, can also bring your machine to its knees. I wait about a week to make sure that a patch has been thoroughly tested by the masses. It takes most companies a couple of days to clean up after a failed patch, so that should be enough time to cover myself. (I can't afford to have downtime.)
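That wait-a-week habit is easy to automate. Here's a minimal sketch of gating a patch queue on a seven-day soak period; the patch names and dates are made up, and real tooling would pull release dates from a vendor feed rather than a hand-maintained dict:

```python
from datetime import date, timedelta

# How long to let the masses test a patch before we apply it
# (the rule of thumb from the paragraph above).
SOAK_PERIOD = timedelta(days=7)

def ready_to_apply(released_on: date, today: date) -> bool:
    """Return True once a patch has soaked for at least SOAK_PERIOD."""
    return today - released_on >= SOAK_PERIOD

# Hypothetical patch queue: patch name -> vendor release date.
pending = {
    "webserver-2.4.1": date(2013, 1, 2),
    "rails-hotfix":    date(2013, 1, 9),
}

today = date(2013, 1, 10)
to_apply = [name for name, released in pending.items()
            if ready_to_apply(released, today)]
```

Anything still inside the soak window simply stays in the queue until the next run, which is the whole point: the masses find the failed patches before you do.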
The incurable
When an idle mind sees a patch and decides to figure out how to exploit it, that's where the problems arise. Vulnerabilities that a company doesn't know about on the day they're unleashed are called zero-day exploits or holes, and they're usually compromised in a zero-day attack... because the company has had zero days to prepare for the aftermath from a technical and marketing standpoint. These can be people purposely writing a virus or altering code and spreading it. Because the company has little or no warning, it can be catastrophic for the brand.
How is information sharing bad?
The problem is with the channels where information is shared. Most of the highly technical details about a vulnerability do not need to be out in the wild where a passing bot or web-crawling search engine can find them. They need to be behind at least one level of authentication. This acts more like a deterrent, because only the people who really need to know about something would take the time and effort to go in and look at all of the specifics. Potentially harmful individuals might still go in and compromise a machine or series of systems, but a casual passer-by wouldn't see the info and get any ideas.
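As a sketch of what "one level of authentication" could look like, here's a hypothetical gate for an advisory archive that checks HTTP Basic credentials before showing vulnerability details. The user list is invented for illustration, and a real system would store hashed passwords, not plaintext:

```python
import base64

# Hypothetical credential store for the advisory archive.
# Plaintext only to keep the sketch short; hash these in practice.
AUTHORIZED = {"researcher": "s3cret"}

def can_view_advisory(authorization_header):
    """One level of authentication: HTTP Basic credentials must check out.

    A crawler or casual passer-by sends no Authorization header at all,
    so it never gets past this gate.
    """
    if not authorization_header or not authorization_header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(authorization_header[len("Basic "):]).decode()
        user, _, password = decoded.partition(":")
    except Exception:
        return False  # malformed header: treat as unauthenticated
    return AUTHORIZED.get(user) == password
```

Even this thin a gate keeps the specifics out of search-engine indexes, which is exactly the deterrent described above.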
Really observant individuals might actually take the time to find a pattern in [poor] programming. For instance, Microsoft has been pretty bad about securing Internet Explorer and the way it is interconnected with their operating systems. In the past, someone logged in with the default Administrator account could open an e-mail or a webpage and take down their machine with full privileges. Luckily it's a little more difficult for most users now.
On another note, whitepapers can be something of a major problem as well. I downloaded yet another whitepaper on SQL injection attacks today. Nothing new or earth-shattering, but it always pays to look at what I might be up against. I'm always interested in new perspectives.
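For anyone who hasn't read one of those whitepapers, the core of SQL injection fits in a few lines. This sketch uses an in-memory SQLite table (the table and payload are illustrative, not from any real system) to put the injectable query next to its parameterized fix:

```python
import sqlite3

# In-memory stand-in for a real user table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name):
    # Injectable: user input is pasted straight into the SQL string.
    return db.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats input as data, never as SQL.
    return db.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# The classic payload rewrites the WHERE clause into a tautology,
# so the unsafe query leaks every row; the safe one matches nothing.
payload = "' OR '1'='1"
leaked = lookup_unsafe(payload)   # every secret in the table
nothing = lookup_safe(payload)    # empty result
```

The fix has been the same for years, which is the author's point: the information in those whitepapers is only news to the people you'd rather it not reach.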
A thief who can see a whole building and examine it in full detail might realize it's much easier to drive through the wall and bypass the door and window sensors on the alarm system altogether. The same thing applies to whitepapers and patch descriptions on the web. Although much of the media clamors for the technical specifics of what happened, it's probably safer if all of that detail isn't completely on the record and out in the open. PR and Marketing departments should be the main filter in brand protection. After all, too much self-provided information might actually help destroy your brand. (The same goes for real hackers.)
Soap Box
If you have a person (or group of people) in your organization or company who really wants to support open-source platforms like Java, PHP, Ruby-on-Rails, and so forth, they also need to understand the responsibilities that come with maintaining an effectively secure system. Everything needs to be patched, and it needs to stay somewhat up-to-date. When companies invest in new ideas and those ideas fail, the people working on the front lines and in the trenches are the ones who get hit. Most companies can reboot from a failed experiment, but most people can't.
That's all for now.