Monday, December 23, 2013

Why time travel into the past is not possible

There is a simple reason why time travel into the past isn't possible. Everything revolves in predictable motion, from subatomic particles all the way up to planets around their suns and galaxies. Through these rotations charges are exchanged or conserved at the atomic and subatomic levels, and everything works in reaction, causing what we see as life or existence. There is no historical record of spacetime (spacetime is simply man's observation), and changes and fluctuations based on subatomic inconsistencies and interactions exist: random chaos at the atomic level.

Without a record of spacetime, going backward in time would mean that some reactions might reverse while others, which are unpredictable, would not (e.g. nuclear chain reactions). If you could move at the speed of light around the sun (matter can't travel that fast), you would simply be traveling around the sun at the speed of light in whichever direction you desired... sorry, Star Trek fans... no whales today.

The closest thing involves stasis… a physical recording.

If you record all of the particles in motion as an event happens in some system, you might be able to recreate a representation of the scene using the involved structures, but you cannot recreate the event itself; you can only alter the replayed recreation, the arrangement of molecules and atoms. To accurately recreate an event and capture all involved components at a subatomic level, all matter would have to be contained in a confined space and all molecular motion stopped within it; thermodynamic equilibrium would need to be achieved to ensure a proper record. Recording all of the particles without interfering with their structure would require disassembly for a proper mapping: a deconstruction.

Ethically, this process of structural recording should never be performed on a living, sentient thing, which would also rule out things such as teleportation or exact atomic cloning of living organisms. The reason is that stopping all atoms in their present state would cause their natural interactions to fail, and their atomic structures would be disrupted. If the matter were not stopped prior to recording, the speed of the atomic breakdown could prove extremely painful, and the instantaneous reactions between the organism's cells would leave the recorded cells captured mid-reaction. Imagine burning every cell off of your body one cell at a time: the reassembled image of the original (yourself) would contain evidence of the trauma, because you couldn't record all of the cells in their paused molecular, subatomic state at the same time. This all assumes, of course, that the organism survives the pause or deceleration to maximum entropy in the first place.


Furthermore, upon reassembly, if the recorded cells were rearranged, there would be a chance of accidentally creating an excited nucleus and causing a fission reaction.

I would like to state that I've not studied these things formally at any level; this is simply my uneducated hypothesis based on observation.

Sunday, December 22, 2013

The universe isn't a hologram, but it looks that way... here's why.

I read a headline on the Nature site stating “Simulations back up theory that Universe is a hologram.” I was surprised this was news, or that they had taken the time to create a computer model to discern this bit of information. A hologram, according to Wikipedia, is a representation of an image in space (not outer space) made from an apparently random structure of varying intensity, density, or profile. According to the article:
“A team of physicists has provided some of the clearest evidence yet that our Universe could be just one big projection.”
Maybe I’m the only one who sees the universe this way, but I was thinking, in a more modern parlance, “duh.” Then they go on to talk about quantum physics and a 10-dimensional theory of gravity, and how the universe will hopefully be more easily explained in terms of quantum theory in the future. Okay, so there we have a problem.

Everything we see from Earth and near space is indeed a projection onto whatever surface we’re using to view it (technically)… either the lenses in our eyes, a camera lens, or the output of a computer model based on data we’ve gleaned from observation. No two eyes are alike, no two people are alike, and while we may see things similarly, we do not see the exact same things.

Heavenly bodies beyond our solar system, as we see them in the sky, are but historical representations of something that once was. The stars, each multiple light-years away, emit light that varies in age by the time we see it (it takes a really long time to get here); light from a star much further away takes longer to reach us. Any calculation of the movement of these stars has to be based on fallible things such as time and the amount of light and waves being measured, because there is not enough historical data for us to accurately determine how far away an object outside our solar system really is. We, with our present intellects, have not existed long enough to gather enough information about the movement of all of the stars using the latest technology. We still get excited about successfully landing remote-controlled vehicles within our own solar system... billions of dollars have been spent on that very act.

Additionally, because objects vary in size and because we have no way of accurately discerning size in three dimensions from our vantage point on Earth, parallax is a major issue and prevents us from properly gauging distance. We would have to map every star and object in the sky, at all times, from more than one vantage point. Add in assumptions for constants such as the speed of light in a vacuum (unbent by gravity), and because we cannot measure all of the factors acting on the minute amounts of light that reach our instruments, we can make no solid theories about anything remotely substantial, only calculations on the subset of data required to model our perceptions, which is very little data in the grand scheme of things. We can theorize about what atoms exist on other planets in our solar system, but we still don't know.

Furthermore, this is all unprovable (in terms of their scientific research) because we will not exist long enough to determine whether the experiments are true. Therefore we should stop wasting effort on any sort of scientific rationale relating to quantum physics, quantum mechanics, and quantum theory, and focus on making life today and tomorrow better for the people who exist now. There are so many more things that matter in life. What's next, interstellar space travel? Leave sci-fi as a hobby. Don't make the rest of society pay for actual, real science fiction through failed experimentation. We are not in The Matrix, we are not in a simulation; don't get your hopes up. Life will be just as cruel tomorrow.

Tuesday, December 17, 2013

Allstate Drivewise. A huge failure in potential.

I've been meaning to get this one up for some time. For a short time I had signed up for Allstate's Drivewise program. Driving very few miles compared to most other drivers, and not driving like an idiot, I figured it was safe (nothing to lose). The problem is that the data set they've built their perfect-driver rating system around is likely based on their payouts for accidents, by type of braking, time of day, mileage, and excessive speed (80+ mph). Okay, sounds good so far. I'd kind of figured this going in.

Overwhelmingly, statistics apparently aren't on my side when Allstate is the one applying them.

When you dig into the Drivewise data after receiving your first set of "grades," you'll see four nice-looking graphics in the interface: one for mileage, one for braking, one for time of day, and one for speed in excess of 80 mph.

Mileage

They go on to tell you that the mileage figure is a calculated projection of how many miles they think you'll drive, based on your daily driving. If you stick to what they expect, this shouldn't be a problem. I don't have an issue with the mileage from the device, because it matches the mileage from my odometer (which they already had on file). Spoiler: if you tell them you only drive 7,000 miles a year and you really drive 50,000, the program will not give you a discount, and your agent will have access to your actual mileage and will likely raise your premiums accordingly.

Braking

The braking graphics show two categories: "Hard Braking" and "Extreme Braking." According to Drivewise, hard braking is when you decelerate by 8 mph in less than one second. If you're following a bus that makes frequent stops and you can't change lanes, then depending on the bus driver's performance and lack of signaling you will have a hard braking event (or two, or four). Someone walks out in front of you, a dog runs into the road... you get the idea. Extreme braking is when you decelerate by 10 mph in less than one second. So if you approach a short traffic light with a 3-second yellow (these do exist) at 50 mph and begin decelerating, you will likely have an extreme braking event. When you have 4 hard braking events and 1 extreme braking event over the course of 3 weeks, that erodes any discount you would expect to receive from the program. I do mean ANY and ALL discounts.
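Those thresholds are easy to make concrete. As a sketch (my own illustration of the stated rules, not Allstate's actual algorithm), a pass over one-second speed samples would count events like this:

```python
def classify_braking(speeds_mph):
    """Count braking events from speed samples taken one second apart.

    Thresholds as Drivewise describes them: a drop of 8+ mph within a
    second is "hard braking"; a drop of 10+ mph is "extreme braking".
    """
    events = {"hard": 0, "extreme": 0}
    for prev, curr in zip(speeds_mph, speeds_mph[1:]):
        drop = prev - curr
        if drop >= 10:
            events["extreme"] += 1
        elif drop >= 8:
            events["hard"] += 1
    return events

# A bus stopping short (50 -> 41 mph: hard), then a short
# yellow light (40 -> 30 mph: extreme):
print(classify_braking([50, 41, 40, 30]))  # {'hard': 1, 'extreme': 1}
```

Five such events in three weeks, per my experience above, is enough to wipe out the entire discount.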

Time of day

The time-of-day expectations for the program are really optimistic on Allstate's part. There are four categories. They've decided that the "Lowest Risk" time for accidents is on weekends between 5am and 11pm, the same time most teens are out driving to work or shopping. The "Low Risk" time is from 4am to 12pm on weekdays (when teens are driving to school). "Moderate Risk" is from 12pm to 11pm on weekdays (when teens are on their way home from school or on their way to work at night). If you're out past 11pm, you're driving in the "High Risk" window, which runs from 11pm to 4am on weekdays and from 11pm to 5am on weekends (drunk-dodging hours, though luckily most teens are at home, curiously enough).

Speed >=80

This one is pretty straightforward, though they present a whole range of grades when in practice it's binary: you either stay below 80 mph or you don't. I don't understand the grading at all, because if you go above 80 you should be in the very high-risk category for drivers. Go get a racing license and take it out on the track. Now, if you're in a state like Florida or Montana where you may encounter a 75 mph speed limit, it's understandable that this threshold might need adjusting, but at least they're not trying to hide anything from you here.

My Suggestions

If Allstate really wanted the Drivewise program to be highly successful for them, and to actually reward the people who are definitely driving safely, they would look at a different set of parameters.

Speed

Since the device already knows how fast the driver is going, it should be able to tell whether they are one of those people who can't maintain a constant speed. If the driver accelerates extremely rapidly (0-60 in 10 seconds), there should be a record of drag racing on file. This could be road rage (extremely risky) or someone not paying attention to their lane ending... it might also, obviously, be actual drag racing, but the risks are the same. If the driver is running 65, catches someone doing 45, and does not overtake or switch lanes, then they are not paying attention. If they decelerate by that much and maintain the lower speed, it means they either entered a construction zone or slowed to the flow of traffic. If the device sees the driver accelerating and decelerating regularly, it should recognize a stop-and-start traffic jam. It already knows the time of day, so if the person is in rush-hour stop-and-start traffic it should know, and place them in a higher risk category (for a low-impact collision).
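To sketch the kind of pattern detection I'm suggesting (purely hypothetical; nothing the current device does, as far as I know), counting low-to-high-to-low speed cycles in a window would flag stop-and-start traffic:

```python
def looks_like_stop_and_go(speeds_mph, low=5, high=25, min_cycles=3):
    """Count cycles where speed climbs from near-stopped (<= low) past
    `high` and falls back to near-stopped again. Several such cycles in
    one window suggest stop-and-start traffic, not open-road driving.

    The thresholds here are illustrative guesses, not calibrated values.
    """
    cycles = 0
    below = speeds_mph[0] <= low
    for s in speeds_mph[1:]:
        if below and s >= high:
            below = False            # pulled away from a stop
        elif not below and s <= low:
            below = True             # came back to a stop
            cycles += 1
    return cycles >= min_cycles

print(looks_like_stop_and_go([0, 30, 2, 28, 0, 26, 3]))  # True  (traffic jam)
print(looks_like_stop_and_go([55, 60, 58, 62]))          # False (open road)
```

Combined with the clock the device already has, flagged rush-hour cycles could map straight to the low-impact-collision risk category described above.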

Location vs Speed

The device already has the ability to track vehicle location, because it transmits over a cellular signal. If Google and most GPS systems can tell how long it will take to get to a destination, the device should be able to do this as well. That means if the driver is speeding and the device knows it, they are risky and should not receive the discount. Something like 65 mph in town through a school zone should put the driver into the high-risk category as well (for a high-impact collision).

Crazy Driving

Add a couple of accelerometers to the device and now you can actually find the people who are weaving at risky times (drunks) and the people who are weaving on their daily commute (eating, texting, doing their make-up... you know, people exhibiting risky behavior). You can also find the people who are driving insanely, weaving in and out of traffic with fast bursts of acceleration. Like the yellow semi truck that didn't like my Chicago Blackhawks tag.

Time of day

The Allstate Drivewise program needs a realistic idea of when people do and don't drive in order to be successful. If I'm driving safely at a time of day when there is nobody on the road but me, I shouldn't be in the high-risk category. If I'm driving when there are fewer people on the road, like an afternoon after rush hour, I shouldn't be in a moderate-risk category. If I'm driving when EVERYONE is off work at the same time, as they are on a weekend, then I should be in a very high-risk category (more people on the road = a greater chance of an accident). If I'm driving when people are trying to get to work on time, or rushing home after a bad day at work, those are risky times as well.

I think if Allstate had actually taken the time to utilize the system instead of cutting corners, they could actually reward the people who are indeed safe drivers and profit from the people who aren't.

Suggested Upgrades

Add a couple of wireless cameras to the device: one in the front and one in the back. Let's actually get some documentation on why someone is stopping abruptly. Don't outsource the research overseas; have people in the US work from home analyzing the footage. It would keep those workers off of the streets and roads, and it would also help with the fender benders that never get reported. Not to mention auto theft, erratic driving, and whether someone's towing a trailer at high speed. Yeah, I'm talking about the people in the fast lane running 80 mph while towing a trailer that's rated for 45 mph max.

Make the device aware of its surroundings. Add a hygrometer: let's see whether people are driving in the rain or in the dry. Let's take some barometric pressure readings on the Drivewise device. Zero-visibility thunderstorm; do you slow down? They should know. Add a thermometer. Driving on ice? The device should know. If you drive excellently on ice, you should be rewarded. If you're more like a skating star doing twirls, whirls, and 720-degree spins, you should be penalized.

Make it driver-aware: add something to the keychain so that when a certain driver is in close proximity, the device knows who is driving the car. Sure, you could swap keys, but this would definitely help if you had teens driving the car; it could tell who was a safe driver and who wasn't. Want the discount back? Don't let junior drive your car.

All in all, I'm 100% for making the roads a safer place.

What is the Allstate Drivewise really about?

The Allstate Drivewise device is not out to make the roads safer. In actuality, looked at from a completely different angle, it's now a gimmick that invades the driver's privacy. Allstate isn't interested in whether someone is a safe driver; they're interested in finding ways to make you pay higher premiums. The higher risk they can make you out to be in their book, the better off they are (monetarily). My agent seemed disappointed that I was healthy when I applied for life insurance... gee, I wonder why that is? The same logic applies to car insurance. If you're a truly bad driver, the system will punish you, but if you're a good driver, it's up to the insurance company to make up for the loss.

Realistically, if we look at the stats from the US National Transportation Safety Board, most people aren't at risk of getting into a major accident on the road; only a select few are. If we can keep those people off the street, sign me up. Until then, I'm keeping the Allstate Drivewise out of my vehicle, because it makes me think about something behind the wheel that isn't related to my driving performance at all: whether or not I'm going to be financially penalized for something out of my control. And when they do penalize me for something that isn't a risk at all, I appear to them to be an "unsafe" driver, which helps them justify charging more.

Get Wise Allstate.


A note on the edits
Originally I had mentioned that it might have been up to the device's programmers, but that's not really fair. Once a product like this passes enough scrutiny panels in the production phase, the designers' good intentions are left in a pile for the sake of a little bit of savings. As long as the device gives a plausible illusion of savings, the company will proceed.

Displaying measurements and SEO

Quote marks and other symbols vs. Abbreviations

In short, use an abbreviation that is regularly used and makes sense (e.g. 1in or 2mm).

How I've come to this conclusion
I had an epiphany today while working on SEO (search engine optimization) for one of my clients' sites. We deal with measurements in American Standard, Imperial, US, NPS (nominal pipe size), and DN (diamètre nominal) all the time. The measurements are all over the place depending on the age of the documents, and the sizes and classifications run everywhere from a really tiny .125in to a massive 120in because of the range of products. It's not easy to please all of the people all of the time when they're trying to find the right information, because they all use the language they were taught. Some people look for one measurement, while others (possibly in a metric-only shop) might look for something altogether different. The field, and communicating in the field, is challenging to say the least.

The main problem is that when we're displaying the content we use [inherited] copy that works for our American locations but doesn't translate well overseas. I mention this because some countries use an entirely different set of marks for quotation. An example might be a range such as 1"-3". My word processor (as I'm writing this) changed the typewriter double quote marks (double ditto marks) used for inches to "smart" [curly] quotes (right[-side] double quotes). Quote marks and apostrophes are used in context in body copy and in the HTML code as well. So when you place a quote mark on the page, the search engine has to determine whether it is a quote, a prime, a ditto mark, or part of a coded statement; then, if it is in body context, it has to determine whether you're using it as a quote mark or as an indicator for a measurement, in this case inches.

The search engine has to decipher the meaning of every apostrophe, double quote, prime, and ditto mark on the coded page. When someone writes 1' 2.75" they're really saying one foot, two and three quarter inches. Since it's not easy to spell that out everywhere and still let people glance at a table in context, we need to write 1ft 2.75in (for consistency) so the search engines can tell what we're talking about. Likewise, the range I mentioned before should be written as 1in-3in.
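This normalization can be automated before the copy ever hits the page. Here's a small sketch (my own regexes, not a production-tested normalizer) that rewrites prime and quote measurement marks as abbreviations:

```python
import re

def normalize_measurements(text):
    """Rewrite quote/prime measurement marks as abbreviations:
    1' 2.75" -> 1ft 2.75in, and 1"-3" -> 1in-3in.
    Handles straight quotes, typographic primes, and curly quotes."""
    # feet: a number followed by an apostrophe or prime (U+2032)
    text = re.sub(r'(\d+(?:\.\d+)?)\s*[\'\u2032]', r'\1ft', text)
    # inches: a number followed by a straight double quote,
    # double prime (U+2033), or curly double quote
    text = re.sub(r'(\d+(?:\.\d+)?)\s*["\u2033\u201c\u201d]', r'\1in', text)
    return text

print(normalize_measurements('1\' 2.75"'))  # 1ft 2.75in
print(normalize_measurements('1"-3"'))      # 1in-3in
```

Run over body copy only; HTML attributes and code samples use the same characters and should be left alone.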

To gain a different perspective, this document contains a variety of abbreviations and symbols used in the US, including measurements (and it doesn't show quote marks and primes). You can see how using abbreviations inconsistently can lead to chaos (or, in my case, a minute loss of page rank).


Now if we could only teach the search engines how to calculate ranges, relate content to said ranges, and do calculations on the fly in context, that would be awesome.

Friday, December 13, 2013

Why is Apple showing Nelson Mandela on their home page?

I was thinking about looking at Apple products again. Money in hand, I went to their website to look for ideas for things to get, because I'm a sucker like the rest of society... then I saw this on their homepage. Yes, that's Nelson Mandela.

Screenshot of the Apple homepage 12/13/2013

Now, to say Nelson Mandela was a great leader of men is an understatement. He was a superbly strong figure in the world who stood up for people other than himself. He effected great change in millions while he was alive. He changed the way people think about other people, and he brought into the light a huge issue in his country... an issue he helped to fix. You can read more about his awesomeness in the link.

Apple, on the other hand, has shown its shortcomings time and time again in terms of how it, quite oppositely, creates great divides between people. Apple perpetuates classes, not only in the markets it sells to, but in the workforces that make Apple products. This coming year Apple will have to prove that it isn't using slave labor.

So back to my original question, why is Apple showing Nelson Mandela on their home page?

Well, for that, the answer is simple: he's an excellent graphic hook. That's pretty much it. Oh, and showing him on their homepage makes them appear not as bad. No links to anything about how great a man he was that might distract from the almighty marketing machine that IS APPLE, however. Additionally, research shows that a smiling, handsome man who reminds everyone of their elders and all that is good in the world makes people happy. It makes us warm inside. It makes people a little more likely to let loose their tight grip on the money needed to buy a phone or some other device.

Well, this holiday season, while the people at the Apple Store work for their low wages, and the people in China and the other countries where Apple's components are made go without their loved ones, remember that marketing is everywhere. Marketing changes who we are. It changes what we do. It changes our perceptions of everything. Marketing is the art of fooling the masses. And just because a company like Apple shows Nelson Mandela on its homepage doesn't mean it has changed its ways. It just means it has found another way to fool people into not seeing it for what it is... a company focused on its bottom line.

I work in marketing and I think what they've done here is wrong. Apple has gone too far. Share this if you agree.

Thursday, November 14, 2013

How to Protect your Google Accounts

I went into Google a few months back and viewed the plethora of information they have collected about me over time, using information from domain names, social media accounts, browsers, email clients, and public records. It was alarming (but understandable); use the internet for 20 years and you'll have a backlog of info you've provided, too. Two days ago I had to add yet another Google Analytics profile to my account. When I logged in, I saw the scrolling list of domains, and it occurred to me that while I take precautions, many of my clients (to whom I also provide access to their sites via Google) might not take the same safety measures when it comes to protecting their Google accounts.


The method I use is called two-step verification. In short, to add another device (computer, phone, etc.) I have an app [Google Authenticator] on a device where I’ve already been authenticated. That app gives me a code that changes every 30 seconds. When I’m adding another system, I simply open my phone, enter the code, and I’m verified. If I don’t have my phone on me, or someone else is trying to gain access to my accounts, then Google can prevent the access from that machine (if I’ve not yet used it).


Food for thought
One of my clients had to send out an email today along the lines of “Please ignore the last email to you from my email account; it was someone else.” That’s scary. While most people might use Google Apps or Gmail, Google is striving more and more every day to make all of its accounts feel fluid. If you’re writing on Blogger, you’re using the same account you check your e-mail with. If you’re shopping with Google, it’s the same account. If you’re posting messages on Google+, it’s the same account… domain contacts, corporate email management, Google Analytics, and Webmaster access… not to mention anywhere you’ve logged in with your Google account as an OpenID. It’s sort of like making your Google account the holy grail of all things to hack. Google has the infrastructure to block a lot of the attempts, but if you don’t take the time to enable the provided features (like two-step verification), you may find yourself the victim of more than identity theft.

Tuesday, November 5, 2013

What if a black hole isn't a hole?

So I had this idea earlier today, after watching a couple of videos on quantum physics and quantum mechanics yesterday. The idea that there could be something with as much gravity as is hypothesized in a black hole seems a bit confusing to me, since nothing would be able to escape its gravity. What if, however, a black hole isn’t really a hole at all? What else could it be?


Then I started thinking about liquid hydrogen. We know from videos taken inside orbiting spacecraft that liquid matter returns to a spherical state in space; water in a vacuum, for instance, forms little spheres. When light hits a sphere of water or any other clear liquid, the light is refracted differently than by a solid. When we look at a black hole, couldn’t we really be looking at the refracted dark space on the other side of a sphere of liquid hydrogen or helium, or even water?

Image from NASA



This would explain the gas-based stars that appear to be emanating from black holes. It would also explain why there appears to be a lot of gravity at the center of a black hole: an object large enough to be visible from this far away in space would certainly generate a lot of gravity. If the sphere isn’t actually so far away, though, it wouldn’t have stars revolving around it at all.

It seems a bit odd to me, though, that with all of the spheres in the vacuum of the universe, from tiny subatomic particles all the way up to gas giant stars, we would naturally have an object that isn’t a sphere at all. Just a thought.

Image from Black Holes on Wikipedia.org





Monday, September 30, 2013

Hughes Satellite Internet

And why it’s not as modern as its marketing suggests.


I recently found myself in a situation where I needed a faster internet connection in a remote location (beyond an ADSL line) and had to look at options. The DSL line is 1.5 Mbps and is used here as the comparison against the satellite. I weighed the various satellite internet subscription options and, after reading hundreds if not thousands of reviews, decided on HughesNet’s Gen4 PowerMax plan.

Speed

First off, the speed, when it’s available, is quite fast (1 GB in 16 minutes) compared to 3 Mbps ADSL lines. During daily summer thunderstorms it’s not completely out, but the signal is quite degraded. Satellite internet providers such as Hughes limit the speed of the connection. These limits come in a couple of forms, one of which is the physical limitation of the unit itself: with a 10/100 Ethernet port, that would be the expected ceiling. The marketing suggests the limit is set to 15 Mbps; however, in our tests we’ve noticed that around 10 Mbps the unit shuts itself off and then resynchronizes after about 30 seconds. If the connection doesn’t approach those limits, it appears stable for a time.
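As a sanity check on those numbers, "1 GB in 16 minutes" averages out to roughly 8 Mbps, comfortably under the ~10 Mbps point where the unit cut out:

```python
def throughput_mbps(gigabytes, minutes):
    """Average throughput in megabits per second for a transfer,
    using decimal units (1 GB = 8000 megabits)."""
    return gigabytes * 8000 / (minutes * 60)

print(round(throughput_mbps(1, 16), 1))  # 8.3
```

So the advertised 15 Mbps ceiling was never the binding limit in practice; the shutoff behavior was.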

Allowances

I’m not entirely sure of their reasoning, but Hughes caps bandwidth with what they call “allowances.” These allowances are really the downfall of the system. For businesses, the bonus window runs from 2am to 10am, with 20 GB available during normal business hours and 25 GB after hours. No real business is going to wait for the late-night download period once it has exhausted its bandwidth for the day. For personal accounts, 2am to 8am are the bonus hours and 8am to 2am the regular hours, with limits of 20 GB each. The units do not provide consistent connections during “bonus hours,” which suggests the system infrastructure needs to be upgraded to handle all of the overnight download requests… that, or Hughes knows this and imposes the limit on purpose. Either way it’s bad. In contrast, most high-speed connections such as DSL, cable, and fiber have much higher bandwidth allowances.

A limit of 40 GB per month sounds like room to download quite a bit of data. The internet has changed, however, so modern users who download multi-gigabyte software updates and service packs, and who use something like IMAP to access their e-mail on multiple devices (iPads, iPhones, desktop e-mail clients, etc.), will find themselves running out of bandwidth rather quickly. Most electronics do not allow downloads to be scheduled overnight. While some plugins exist to schedule overnight downloads on PCs, in our tests the system became unstable overnight and never completed a download (or upload) over 500 MB after 2am. If IMAP is downloading the same e-mail on multiple devices at once, there is a significant hit. Sending an e-mail also counts multiple times, because the bandwidth limits are charged for the SMTP transmission of the message to the mail server, the copying of the sent message to the IMAP Sent folder, and the downloading of the sent message onto the other devices on the connection. Web mail seems to be a good workaround, provided a strong web mail client is installed on the mail server.
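To make the multiplying effect concrete, here's some back-of-the-envelope arithmetic (the traffic model is my own simplification; real protocol overheads vary):

```python
def email_bandwidth_mb(attachment_mb, devices):
    """Rough metered cost of sending one email with an attachment when
    IMAP is in play: one SMTP upload, one copy saved to the Sent
    folder, and one download per device syncing that folder."""
    smtp_send = attachment_mb
    imap_sent_copy = attachment_mb
    device_syncs = attachment_mb * devices
    return smtp_send + imap_sent_copy + device_syncs

# A 5 MB attachment with 3 devices syncing costs ~25 MB of allowance,
# five times the size of the attachment itself.
print(email_bandwidth_mb(5, 3))  # 25
```

On an unmetered cable line this multiplication is invisible; against a 40 GB monthly allowance it adds up fast.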

Additionally, during our first month we realized that poorly programmed web pages that constantly refresh data with meta refreshes and AJAX consume the bandwidth at an astronomical rate. Internet video such as YouTube and Lynda.com seems to play reasonably well by downloading to a cache and then playing. Streaming connections such as VoIP and video conferencing do not work well at all, and they additionally obliterate the monthly allowance. VoIP calls are clear inbound but sound like "someone talking through a fan" on the other end.

DNS, caching, and other smoke and mirrors tricks

In the remote location I’ve resorted to running a DNS caching server over the ADSL line, because the initial 30-second delay between typing in a domain name and waiting for the HughesNet Gen4 system to return an answer was painfully slow. This caching server has to run over the ADSL in my situation because any requests to the root servers are blocked by the satellite’s firewall unless you use the service’s default DNS servers. The SSH and PPTP protocols also appear to be filtered or blocked. Any attempt at SMTP over port 25 is blocked as well, but SMTP over SSH or TLS works. The Gen4 uses its own caching system to deliver pages seemingly quickly; however, these pages aren’t always up to date, so they may need to be reloaded (requiring twice the bandwidth, because you are still charged for the serving of the cached page).

IP address changing

Connecting the unit to any website or service that requires a whitelisted IP address is excruciating. Over the course of one attempted session with a remote server I received no fewer than 4 different IP addresses in different ranges. These 4 ranges appear to be the usual addresses for the connection and have not changed over the course of a month. By whitelisting the reverse-lookup domain name I was able to save myself some trouble, though this left the servers holding the whitelists with weaker security for the duration of the connection.

Tech Support

HughesNet’s extremely polite telephone tech support staff are not familiar with the units at all. The person we spoke with on the phone was not familiar with what a satellite dish was, much less the setup; they were only reading from a script. In our situation someone accidentally moved the dish, which prompted a Saturday afternoon call to their support staff (in India). At the start of the call it was made clear to the support staff that the dish had been moved and that we simply needed the angles and settings for realigning it. After 45 minutes of trying to determine why they couldn’t access the device remotely, the support staff suggested that we enter a hidden interface on the unit ourselves (192.168.0.1, then click the little gray “i” in the header) to retrieve the information. With this information I was able to use the tools on the transmitter itself to realign the dish (with physical socket wrenches)… which took another 45 minutes. Connecting to the transmitter from a cell phone over WiFi made the process of dish alignment much simpler.

Final Notes


Overall, my experiences with the satellite have been somewhat different from those of most users who have the satellite alone, since I can fall back on the DSL when the satellite is down. I quickly exhausted my monthly allowance; had I not had a DSL line at my disposal, I would have been dead in the water. I would suggest HughesNet Gen4 only as a last resort when all other options (cellular included) have been exhausted. For the brave, the speeds are immense, and there is some level of anonymity since the connection isn't directly tied to a physical location.

Tuesday, September 24, 2013

Apple's new iOS7 - Location Services Required

Apple’s new iOS 7 is bundled with so many features it’s not even funny. One of those features (and I’m not laughing about this one) is that rather than using NTP servers to get network time based on a preferred time zone, the geniuses at Apple decided to tie the clock to Location Services. If you don’t enable Location Services, because you’re using multiple networks at once (like satellite and DSL) or because you’re using a proxy server across the world, Apple is kind enough to set your location to the Pacific time zone. So if you’re in Eastern time and use your new iOS 7 alarm clock without noticing the change in time zone, you can get three more hours of much-needed (but unexpected) sleep.

To fix this feature, you can either enable Location Services (bad idea), or you can go into:
Settings > General > Date & Time
and set the time zone manually.

I’m a little bit of a conspiracy theorist because of the things I notice in my job. One of the things I’m not in favor of is when a company, in this case Apple, tries to force me to use their data-collection engine to get what were once standard features. In their new “operating system,” iOS 7, when I enable Location Services I’m allowing Apple to track my every move, and I’m sharing that information back with them. They can tell what my habits are, where I’m going, and where I’ve been, and at some point this will all be tied to advertising and marketing (if it’s not already being used to pick locations for their new Apple stores). Additionally, I don’t want Apple knowing that we’re running 3 WiFi networks, or how many machines (and what types) are on them.

Location-Based Services

To provide location-based services on Apple products, Apple and our partners and licensees may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device. This location data is collected anonymously in a form that does not personally identify you and is used by Apple and our partners and licensees to provide and improve location-based products and services. For example, we may share geographic location with application providers when you opt in to their location services.
Some location-based services offered by Apple, such as the “Find My iPhone” feature, require your personal information for the feature to work.
I see a problem with this legal distinction, because the data is not collected anonymously. They know who I am and what I do. I have a single device that contains everything from email accounts to passwords for services to every location where I’ve been near either a wireless network or a cellular signal. It's only shared with their servers anonymously (or so we're told)... unless you opt into cloud services.

We may also disclose information about you if we determine that disclosure is reasonably necessary to enforce our terms and conditions or protect our operations or users. Additionally, in the event of a reorganization, merger, or sale we may transfer any and all personal information we collect to the relevant third party.
It's all getting foggy
Another trend with all of the newer services is integration with the “Cloud.” Cloud is another way of saying “we’re storing your information on a server on the internet… somewhere.” If you want all of your contacts, photos, text messages, and e-mails stored on the one server everyone wants to break into to collect all of the information in the world, then store everything on the cloud. After all, there is safety in numbers, right?


I myself am not in favor of cloud services, because I like to control my own data and I like to control who has access to it. If I enable cloud services and rely on them, not only can I be charged to get to my own data, but if at some point I need that data and am not willing to pay, I can lose all of it. Also, in the fine print, when I share data with companies they can sometimes share my data with their partners or use it to offer me better services. I know this because my clients frequently ask, “How can we get more information or feedback from our customers?” What happens when someone unexpected gets this data? #ios7

Tuesday, August 6, 2013

DSL not for VOIP

I recently (temporarily) switched from high-speed cable to DSL, because the local cable companies do not service my present address. I've been using VoIP (Voice over IP) phones for the longest time without issue, but after I recently upgraded my phone system, the voice quality on ADSL is horrible.

I suspect the reason for this is that the local phone company doesn't want you to use VoIP, because if you do, you're not paying for their local phone service. They're also likely trying to throttle anyone streaming content from their local networks. By default, most of the phone companies' DSL modems have their network connections configured for UBR (Unspecified Bit Rate), which means the line bursts variably with a *potential* top speed of your maximum internet speed.

For Voice-over-IP communications the setting would need to be CBR (Constant Bit Rate), since you're streaming audio over the internet. So for anyone considering using Vonage or Magic Jack on a DSL system, you're probably better off paying your local monopoly for POTS (Plain Old Telephone Service)... until, of course, they upgrade your circuit to fiber.
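To see why a constant rate matters, here's back-of-the-envelope math for a single call using G.711 (the common uncompressed codec), sketched in Python. The header sizes are standard RTP/UDP/IPv4 values, not measurements from my line:

```python
# Bandwidth needed for one G.711 VoIP stream, one direction.
CODEC_BPS = 64_000                  # G.711 payload rate: 64 kbps
PACKET_MS = 20                      # typical packetization interval

packets_per_sec = 1000 // PACKET_MS                   # 50 packets per second
overhead_bytes = 12 + 8 + 20                          # RTP + UDP + IPv4 headers
overhead_bps = overhead_bytes * 8 * packets_per_sec   # header cost in bits/sec
total_bps = CODEC_BPS + overhead_bps

print(total_bps)  # 80000 -- the line must sustain ~80 kbps *constantly*
```

A bursty UBR line that averages well above 80 kbps but dips below it for even a few packetization intervals will still produce the garbled "talking through a fan" effect, because the stream has no tolerance for gaps.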

Sunday, June 16, 2013

Massive fail in terms of User Experience and User Interface on Tivo Premiere 4.

As subscribers to Tivo's service, we recently "upgraded" our four-year-old Tivo HD DVR to a Series 4 Premiere. "Upgraded" is their phrase, not mine. Having done computer upgrades for years, even cross-platform upgrades between Windows and Macs, "upgrade" is not really the word you're looking for. More accurately, buying a new Tivo and transferring existing service to it is a great way to waste a couple of days of your weekend.

A rough transfer to a Tivo Premiere:

  1. If you have shows recorded on your old DVR and "upgrade," you have to maintain service on the old DVR in order to access its shows over your local network.
  2. If you attempt to maintain service for the month so you can transfer recordings to your new DVR, expect to sign up for a contract. Tivo will not extend service as a gesture of goodwill. In fact, Tivo's sales and support staff laugh when you mention that you would like to keep both DVRs on the account for six hours.
  3. My new Blu-ray player has a USB port, and I can watch movies from it. My Tivo Series 4 has a USB port, but they've disabled any functionality beyond extending user-input interfaces... meaning that if you copy all of your old shows (in Tivo's proprietary format) to an external hard drive via Tivo Desktop on a computer, you cannot access those shows when you plug the drive into another unit.
  4. Their new user interface is abysmal... if they didn't provide the option to use the old interface, you would likely throw your Tivo into an abyss.

When the new Tivo arrived I had already researched the massive failure on Tivo's part (I've been a long time user of Tivo so I've come to expect failure). I had a few major hurdles to overcome.

  1. I had to transfer my cable card from my existing Tivo to the new Tivo. Since it's locked by the cable company to prevent theft or copying I had to contact them and walk through the painful transfer while I tried to explain what we were doing.
  2. I unplugged my old Tivo from the ethernet connection so it would not call the mothership and find out it was no longer loved. I thought I had it fooled. When I removed the cable card, the machine went into a panicked state and did everything short of mandating that I contact the Tivo service to reset the unit. Luckily, restarting the Tivo by unplugging it made the behavior go away. It was of course unable to show live TV without a cable card, but I could at least get to the main navigation again and watch the already-recorded shows.
  3. Since I had unplugged the old Tivo, I was able to plug it into a local network (not connected to the internet) and access it with Tivo Desktop. After several hours of experimenting, I was finally able to get the Tivo Desktop service to successfully copy all of my recordings from the old Tivo.
  4. After copying my old recordings from the old Tivo I figured I would show them on the New Tivo by streaming them over the local network. For about 20 seconds my workstation showed in the "Now Playing" list on the new Tivo. I could browse the list of videos from the old Tivo. When I selected a video the Tivo said "This video is no longer available." Then the server disappeared from the New Tivo's interface. I have not successfully connected the Tivo Series 4 Premiere to the Tivo Desktop Plus application since.
  5. Online during my research I noticed on the Tivo website there was a feature to transfer the Wishlists and the Season Passes. This data is not “saved” on their website, it’s simply read from an activated unit. At the point a unit becomes deactivated, all data on the Tivo website is removed.

Although I’m a member of the Tivo Advisors committee, they never really give us a chance to indicate what we want from their service; they’re only interested in what type of car I want to buy, or what type of movie I might be going out to see. So here are my recommendations to make the user experience for Tivo customers much better.


  1. When you offer users an “account” that they can use online to schedule their programs, keep a copy of this data. Since it pertains to your users, this account info should NOT disappear or be tied directly to a unit.
  2. Users should be able to set up profiles. Most users expect that they can record things, and a family of three likely has three different sets of interests. It’s in Tivo’s best interest to maintain information on those interests so they can use the demographics in their marketing.
  3. Stop locking your machines. They run a crippled version of Linux; everyone knows this. Let people use the devices. You’re much more likely to have happy customers if they can actually use the machines they’re purchasing, rather than have them sit like bricks when a new Tivo comes out. No, I will not punish someone else I know by giving them my old Tivo.
  4. Learn who your users are. If most of your users are using Apple products, make your units work like Apple products do. Make it easy for them to download updates when they wish. Make it easy for them to connect to the machines on their local network and stream their local content.
  5. Stop putting ads in the main list. The whole reason I have a Tivo in the first place is so I can filter out ads. The last thing I want is MORE ADS.
  6. The Tivo recommendations are bad. Take a hint from Netflix: let people tell you what they like, then suggest shows based on what similar people liked. Don’t recommend something to someone because a company pays you to. That comes across as less helpful and more like advertising.

Friday, March 15, 2013

Web Form Security: Reasons behind online attacks

Why am I being hacked?

To really know what you're dealing with, you have to get inside the head of a script kiddie or a hacker if you want to actually "secure" your systems. Since there are so many factors, many of which are usually out of the control of most individuals, I'm using the word "secure" loosely. From a web and online security standpoint, I've worked with several companies over the years, usually in post-attack analysis, trying to determine what happened, how to recover (if possible), and how to harden against future attacks. Companies often do not spend money on security before an attack, and say things like "It's never happened before," or "Why would they target us?" or "No, we haven't been hacked," when in actuality they have.

There are several reasons why someone or a group might want to take over a webpage, a blog, a webserver, or a MySQL database server. Here are a few of the reasons I've experienced myself for why someone would exploit a site or page.

Web Real Estate

Mission-critical systems that rely on a database need to be secured. Not only is there the risk of someone data-mining a database of personal information, but there are also risks to the database server itself and/or the web servers that host the site receiving or displaying the data. One of the ways people can cause havoc on a server is an SQL injection attack. In November of last year I wrote a post about SQL Injection Attack Precautions; it talks about who's ultimately responsible for securing a system, since in most cases the blame for an attack is spread across several people.

How could web real estate be at risk? If someone looks at a search form, they can assume that it is connected to some sort of database. Blindly hacking at the form, they will not be able to tell whether the backing store is a PHP array, an SQL database, or an XML file until they receive an [un]intended response. By passing unexpected characters into the form they can potentially break the form, cause a stack overflow on the server (effectively crashing it), or break the application that handles the form. Something like putting a server into an endless loop can bring it to its digital knees. This usually involves passing escape characters to add extra slashes, closing quotes (single and double), programming-language terminators, or HTML code into the form. Passing empty form fields can break some forms, while others can be broken by simply disabling Javascript.
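As a concrete illustration of the "closing quotes" trick, here is a sketch using Python and sqlite3 (the table and data are made up for the example). A single stray quote breaks a query that splices user input straight into the SQL string, while a parameterized query handles it without incident:

```python
import sqlite3

# A throwaway in-memory database standing in for the real backing store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def naive_lookup(name):
    # DON'T do this: user input is spliced straight into the SQL string.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def safe_lookup(name):
    # Parameterized query: the driver handles quoting for you.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

tricky = "O'Brien"  # one stray quote is all it takes
try:
    naive_lookup(tricky)
    naive_broke = False
except sqlite3.OperationalError:
    naive_broke = True  # the quote terminated the string and broke the query

print(naive_broke)          # True
print(safe_lookup(tricky))  # [] -- no error, just no matching row
```

The error message the naive version produces is exactly the kind of "valuable information" an attacker is fishing for; the parameterized version gives them nothing.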

When a web form breaks, it returns valuable information to an attacker about the structure of the system, the services running on the server, and the quality of the code. In my experience, websites with easily hackable code are frequented more heavily by would-be attackers and script kiddies than sites that return no errors or information. Since most modern web servers are hosted in server farms with high-bandwidth connections, the payoff for hacking a sophisticated site is more than likely the same as for a simple one: both offer similar bandwidth and server resources, and both are usually designed to be managed remotely, so there is little chance an administrator will spot the attack. If attackers see an increased level of security, they're less likely to attack a server, simply because their efforts will be undone much more quickly, or because they'll have to try harder and risk being caught.

Web "Street Cred"

Just like in the real world, online hackers need notoriety, and there are individuals in the hacking community who love a challenge. Websites such as tech blogs, newspapers, social media accounts, video streaming sites, and social networks are more at risk of someone replacing content or services simply to make a name for themselves. There are far more people looking to become famous from a hacking attempt than there are people looking to steal information and sell it on some black market. The skill sets required for guessing a password to take over a page versus actually deriving unencrypted, usable, sellable data are night-and-day different. There are quite a few apps in the open that will crack or guess a password; there aren't very many individuals who can successfully write a root kit. Sometimes an attacker can simply guess the password to get in and look at the code. The guys who do this for a living will not brag about it unless they're making a sales pitch for paying work behind closed doors. Script kiddies will do it so they can make a name for themselves (think Anonymous).

Political Reasons

Some "groups" like Anonymous take pride in bringing down sites and exploiting pages and accounts with opposing views, or in showing companies and corporate conglomerates that they have glaringly open holes in their security. Search for "Anonymous Hacks Burger King Twitter" on Google. While there likely are real hackers operating under the "Anonymous" moniker, most of the exploits I've seen are pretty amateurish. If Anonymous were really a serious group, there would more than likely be no more online trading (or stock market, for that matter).

Bad SEO

Some people just want more links to their own sites. These people can be spammers, and sometimes they're legitimate businesses that paid for a service they weren't quite sure about. In the past there was a practice called spamdexing, where a website listed in major directories, under topics pertaining to its contents, would be picked up and rewarded by the search engines. Fake sites and phishing sites soon caught on. The search engines changed their policies, but word doesn't always travel quickly through translation to other countries. Many "SEO specialists" claim they can get a site listed through link sharing; if they're overseas, this is more than likely how.

An example of spamdexing from the Search Engine Journal (3/12/2013):
"There are many sites with spam on their sites that can’t see the links that they are showing where you couldn’t see unless you went into the code. Google bot shows that a Top 50 University has “cheap viagra pills” on their main page."
To find out which one, you can search for "University Viagra" on Google.

Data Capturing including Credit Cards and Social Security Numbers

Some people are more secretive about their exploits: they hide code on a system to take advantage of web visitors and traffic. This may take the form of database copying or replication (if the site stores e-mail addresses, credit card numbers, or other sensitive data). The attackers may send copies of real form submissions to their own server, or monitor statistics from the site (for a competitor). Some attackers inject malware into the code so they can infect visitors' computers. In a previous post I talk about the hacking of clothing manufacturer Calvin Klein, and how I started receiving spam at the newly created e-mail address I used for them the day I signed up. Calvin Klein, of course, denied any knowledge of this or interest in rectifying the issue.

Additionally, when someone is capturing all information submitted to a system on the system itself, any information passed is vulnerable. This includes Social Security numbers, credit card numbers, and anything else that may be submitted (student ID numbers, for example). Depending on the type of site, this is a huge risk to clients, customers, and, worse, the brand in terms of PR backlash.

Bot Net  

Web servers can be powerful, plentiful machines, just ripe for harvesting. Located on massive connections, there is very little that can be done to track multiple machines requesting orders from a controlling system (the requests can look like normal web traffic in a packet filter), and in numbers, compromised machines become a powerful collective. Why not run an application in the background on someone else's web server and make it control countless drones while it goes on serving a webpage? This actually happens. Usually the attacker installs a "root kit," an app or framework that runs undetectably in the background, letting them control the server and exploit its bandwidth and resources. The web page may be up, running, and unchanged, so the owner usually won't find out until there is a knock at the door: the machine was used to exploit someone else's, or it was controlling countless other machines, or, worse, the website goes down because the ISP pulled the plug at the request of a government or after their own inspection turned up suspiciously high traffic. Once a root kit is installed, it is easier to move to a new machine than to clean the root kit off; without a thorough examination, the hole the attacker used may still be in place, and it would only be a matter of time before the machine was exploited again.

So what are the real risks?

Most of the time, attacks come down to bad password-management policies, or to someone logging into a website control panel or administration panel from an unsafe network (think Starbucks). Every once in a while someone is hit with an XSS attack or a [My]SQL injection attack, but that requires someone actually trying to hack the server. Passwords can be captured with free applications in open places like airports, coffee shops, hotels, vacation resorts, cruise ships, and on any other unsecured WiFi network. Be smart: use strong passwords longer than 10 characters, enter them only in safe, secure locations, and more than likely there will be no issues.
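To put the "longer than 10 characters" advice in perspective, here's a quick sketch of the brute-force search space, assuming random passwords over a 62-symbol alphabet (upper- and lowercase letters plus digits) and a hypothetical attacker guessing one billion passwords per second:

```python
# Brute-force search space for random passwords over a 62-symbol alphabet.
ALPHABET = 62                      # a-z, A-Z, 0-9
GUESSES_PER_SEC = 1_000_000_000    # assumed attacker speed (hypothetical)

for length in (6, 8, 10, 12):
    combos = ALPHABET ** length
    years = combos / GUESSES_PER_SEC / (60 * 60 * 24 * 365)
    print(f"{length} chars: {combos:.2e} combinations, ~{years:,.2f} years to exhaust")
```

A 6-character password falls in under a minute at that rate, while each two extra characters multiplies the work by nearly four thousand; that's the whole argument for length.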

Web Form Security: Stopping Spam

Sites are often hacked through poor web form implementation.
This is the first in a multipart series on securing web forms. One of the best ways to approach web form security is to think about the form through the eyes (or the mind) of the attacker: what do they think they can gain from it? In this first post I'll talk about spam, why it happens, and, potentially, how to stop it.

Spam (unsolicited e-mail) has been a problem almost since the beginning of the Internet. It costs companies billions of dollars in wasted bandwidth and resources such as anti-spam firewalls, hardware spam filters, anti-spam e-mail filtering services, and lost employee time. Since 1995, [the] HTML [language] has allowed for an input tag and web forms for uploading images and files and supplying different types of input. As web bots and crawlers became more prolific around the beginning of the century, companies and private individuals began turning to HTML [web] forms to reduce or cut off the spam they were receiving through web-posted e-mail addresses (addresses actually visible to a browsing visitor).

An e-mail address in plain text on a website is sure to be added to hundreds, if not thousands, of spam e-mail queues.

Initially, having a form that posted to your e-mail account was enough to stop a lot of the spam, but as data mining became much more invasive, form elements containing e-mail addresses were mined specifically for those addresses, and the spam continued (e-mail addresses in web forms are available as text to anything that can parse HTML). Then there were exploits where people injected information into web forms and relayed messages through server-side Perl CGI scripts. Once exploited, someone could easily send e-mail from a webserver and have it look like it actually came from the company hosting the form. Most modern web forms are processed by a server-side component [language] such as PHP, ASP, ASP .Net, Ruby, Python, and so forth. Even though server-side processing makes it much easier to filter the information coming in through a form, many times a "spammer" can beat a form by simply completing it the way a person would.
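The relay exploits mentioned above usually worked by smuggling extra mail headers (a `Bcc:` line, say) into a form field via newline characters. Here's a sketch in Python of the kind of server-side check that stops this; the field values are made up, and the e-mail pattern is deliberately simple rather than a full RFC-compliant validator:

```python
import re

# Deliberately simple pattern: something@something.something, no spaces.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def safe_form_value(value: str) -> bool:
    """Reject values that could inject extra mail headers."""
    return "\r" not in value and "\n" not in value

def valid_email(value: str) -> bool:
    return safe_form_value(value) and EMAIL_RE.match(value) is not None

# A classic header-injection attempt: the attacker appends a Bcc header.
attack = "victim@example.com\nBcc: everyone@spamlist.example"
print(valid_email("user@example.com"))  # True
print(valid_email(attack))              # False -- newline rejected
```

The same check belongs on every field that ends up in a mail header (subject, name, reply-to), not just the address itself.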

E-mail Addresses
Some web forms still use outdated, non-standard code to submit a message from a website to an e-mail address using the "mailto" option. These forms typically open the default e-mail client (such as Outlook or Mac Mail) when the user selects the submit button. The reasons this needs to be avoided:

  1. The e-mail address is visible to anything on the internet.
  2. Some people use webmail and this opens a program that may confuse or irritate them.
  3. This e-mail address can be added to a SPAM list or be set as the recipient of e-mail bounce backs from SPAM or spoofed e-mails.
  4. E-mail addresses gathered from websites are sometimes sold in online mailing lists to people who believe they are receiving lists of targeted e-mails when, in actuality, they are mined lists.

Steps to reduce spam from the web


  1. Remove all text-readable e-mail addresses from your website. To check, open a webpage in a browser and "view source." If you search the code for the @ symbol and find one, make sure it is not part of an e-mail address. If you're responsible for programming the site, replace the address with something else. If you're not responsible for maintaining the code, contact the programmer and ask them to create a form for e-mails from the website. If someone needs to contact you, they will find a way. If you are a business, do not rely solely on your [web form] e-mail as your main point of contact.
  2. Provide a working form that can filter messages from the site. Most [web] hosting plans come with a server-side language component that can be used to filter the messages; this language may be PHP, ASP, ASP .Net, or Perl. Servers installed internally by companies, where the website is hosted locally, also have these options available by default.
  3. Search the internet for company listings that contain your e-mail address. Sometimes this may include corporate directories, trade publications, web domain registries, and message boards. You can ask them to remove the e-mail address and replace it with a link to the web form.

Reducing other spam

Many of my clients publish their e-mail addresses in print on a variety of materials. More often than not, these addresses go to some mass-distribution group on their e-mail server. When a spammer sends a message to such an account, it is routed to more than one person; for every recipient there is a copy in their inbox (or spam folder) on the mail server, not to mention potentially in a sent box for the distribution group, or in the group's own inbox if it is set up as an individual account that forwards rather than a forward-only box. If a person forwards to their phone and doesn't use a connector like IMAP, there may be multiple copies of the e-mail per person as well. Each of these messages [usually] takes a tiny amount of room, but in large numbers they can consume a lot of space on a server or a local workstation (IT real estate).
  1. Try to limit the e-mail recipients for addresses in print to only the people who maintain the list. Obviously business cards will need to have an e-mail address, so I'm talking about brochures, flyers, forms, letterheads, envelopes, and advertisements.
  2. Make a group-accessible mailbox for any inbound e-mails rather than distributing them through the mail system. This way any person in the group can delete the e-mails from the single location. Back this up in the event of accidental deletion.
  3. When printed documents are available online in PDF form they can be mined for e-mail addresses just as easily as a web page.
  4. Follow-up e-mails to submissions should come from a no-reply box or something that can be checked for mail submission, but not a distribution group.

Stopping in-bound spam with a web form

When securing a web form, there are a few things to consider.
  1. The person completing the form may not be a person at all. It is possible to submit information to a webserver via the POST and GET methods without using a web browser.
  2. Where is the submission of the form ultimately going? CRM, e-mail only, a database?
  3. If a [real] person can't complete your form because it is too complicated, there is no point in having the form. They won't use it, or worse, they'll go somewhere else.
  4. If your form relies heavily on client-side filtering (e.g. Javascript), that scripting language can [more than likely] be disabled. If it is disabled, the filtering may no longer happen. Can they still complete the web form?
  5. Non-filtered web forms are [some of] the biggest risks to databases and corporate infrastructures.
There are a variety of things that can be done with PHP, JSP, ASP, and ASP .Net (commonly found on Microsoft web servers) that can dramatically reduce the amount of spam you receive from a form.
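One server-side trick that works in any of those languages is a "honeypot" field: an extra input hidden from humans with CSS, which bots scraping the raw HTML dutifully fill in. A sketch of the idea in Python (the field name and form contents are hypothetical):

```python
# Server-side honeypot check: real users never see (or fill) the trap field.
HONEYPOT_FIELD = "website"  # hypothetical name; hidden via CSS in the HTML

def looks_like_spam(form: dict) -> bool:
    """Flag submissions where the hidden trap field was filled in."""
    return bool(form.get(HONEYPOT_FIELD, "").strip())

human = {"name": "Pat", "email": "pat@example.com", "website": ""}
bot = {"name": "x", "email": "x@x.x", "website": "http://spam.example"}
print(looks_like_spam(human))  # False -- accept the submission
print(looks_like_spam(bot))    # True  -- silently discard it
```

Discarding silently, rather than returning an error, is the point: the bot gets no feedback it could use to adapt, and no [real] person is ever inconvenienced.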

K.I.S.S.

I was unfamiliar with this phrase until one of my clients said "KISS... Keep It Simple, Stupid" in response to a bad web application I've been repairing (built by a previous vendor). I always try to set up a form that is simple for both the end user and the recipient of the form details. Bad web forms [overall] cost companies millions [maybe billions] every year: if people can't complete a form, sales can be lost, searches can't be made, and potential customers feel the bad service is already starting before they are even customers.

Hack it.

I test forms heavily to make sure they work. I type in incorrect information: I misspell things, omit fields, forget to put the @ symbol in e-mail addresses, and put data in the wrong fields. Then I weigh the feedback from the form: can it tell when someone is deliberately entering misinformation, or entering input no human plausibly would? Finally, I check whether the form was actually submitted, and if it wasn't, why not.

Autofill

I use my browser's auto-fill features to complete the form and submit it, and I check whether auto-fill actually completes the form. Most people do not have a lot of time to fill out forms, so if you present them with the standard fields they're accustomed to, they can complete the process more quickly. Auto-fill works by recognizing common field names; when those field names are used again, the browser's auto-fill component (if enabled) presents the user with their past responses.

Don't reinvent the wheel

It's a web form. People are used to completing things in a certain way, so present the information in a way that is intuitive for your audience. If you have clients from all over the world, don't mandate a state name or a county; if they're only supposed to be from one country, you can omit the country field. Use words that translate into other languages easily. To see the field names used by common web forms, check out sites people use on a regular basis: sign-up forms on sites like UPS, postal services, Facebook, Twitter, and LinkedIn. Use your browser to "View Source" on those pages and see what the fields are named. If you name your field "client-email" it will probably not auto-fill, but if you name it "email," or use the HTML5 field type "email," it should work without issue for auto-complete and auto-fill.
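For illustration, a small fragment using conventional names, HTML5 input types, and the standard autocomplete attribute values (a sketch, not copied from any of the sites above; the action URL is made up):

```
<!-- Conventional names and HTML5 types help the browser auto-fill. -->
<form method="post" action="/contact">
  <input type="text"  name="name"  autocomplete="name">
  <input type="email" name="email" autocomplete="email">
  <input type="tel"   name="phone" autocomplete="tel">
  <textarea name="message"></textarea>
  <button type="submit">Send</button>
</form>
```

The type="email" field also gets you free client-side format checking in modern browsers, without any Javascript to disable.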

Avoid client-side language filtering

Because it can be easily disabled, I recommend avoiding client-side languages like Javascript for filtering the web form. I've seen forms with interactive elements that tell you whether each component of the form passes a test before it can be submitted. The drawbacks: not everyone has Javascript enabled, these things can become annoying by removing elements from my message or telling me I've completed something inaccurately before I'm done with my submission, and browsers implement Javascript components in different ways. For instance, a company with a policy of using older versions of Internet Explorer on its workstations relies on ActiveX controls for AJAX (the scripting technique for dynamically checking a field with Javascript without submitting the form). Those unsigned ActiveX scripting components are disabled by default for security reasons, so the people who would use your form to submit their message may not be able to complete it if it relies on Javascript or AJAX.

If you do decide to use AJAX, remember that the handler for the AJAX is available in the code. Anyone who wants to take over your server may use this as their point of attack. By submitting to the handler directly and bypassing the form altogether they can potentially find weaknesses in your application, server, or code rather quickly if they're using a bot net (group of compromised machines).

With that being said, avoid Flash as a web form. It uses a client-side scripting language based on ECMAScript (the basis for Javascript), not everyone has it installed, and it doesn't work on all mobile devices. Flash is buggy at best and can easily crash a browser on its own. There is no reason to use Flash as a web form. There are also ways to beat whatever filtering the Flash application does before sending to the server, meaning Flash might actually be the Achilles' heel of your server's safety. Just as someone can inspect an AJAX web component without using their browser, someone can download a Flash decompiler, use a header checker in a browser like Firefox on a Flash web form, use a web debugger like Firebug to watch the information being transferred, or open a packet sniffer like Wireshark to see what is being sent to the server (even if you're using a stand-alone Flash application on a DVD).

Server-side language filtering

When you're filtering server side, be careful what information you give back to a potential attacker. Do not allow them to enter code into the form and then echo it back to them in an attempt to have them correct it. It's also good practice not to force someone to review the contents of their message.

PHP (one of my favorites) comes with various functions that take advantage of regular expressions (RegEx, a mini-language for searching and filtering). New programmers given the task of hardening a web form may see RegEx as a viable option for aggressively screening every field. That isn't a bad mentality, but it depends on what you do with a failed response: bounce someone for an incorrect field entry and you may lose a client or potential customer. If, for instance, the user enters information in their own language (eg. Chinese) and the programmer assumes their own language (eg. English) by requiring English characters, the potential user may become a false positive for an exploit attempt.
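That false-positive risk is easy to demonstrate. The post's examples are PHP, but here is the same idea as a minimal Python sketch (the pattern and function names are mine, for illustration only): an ASCII-only name filter rejects a perfectly legitimate Chinese name, while a Unicode-aware check accepts it.

```python
import re

# Naive filter that assumes English: ASCII letters plus a few separators.
ascii_name = re.compile(r"^[A-Za-z' -]+$")

def looks_like_name(value):
    """More forgiving check: any Unicode letter, plus space, hyphen, apostrophe."""
    value = value.strip()
    return bool(value) and all(ch.isalpha() or ch in " '-" for ch in value)

# A Chinese name fails the ASCII filter (a false positive for "exploit")
# but passes the Unicode-aware check.
```
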

When I check fields I try to make it something a little more obvious. Here are a few things I check for:
  1. Do "name" field submissions contain numbers? (Not typically; even Edward II uses Roman numerals.)
  2. Do "e-mail" fields have an "@" symbol and at least one period after it? (A necessity.)
  3. Do phone numbers only contain numbers? (What about +, -, Ext, Extension, x, or '.'? They might contain any of those.)
  4. Does the address contain a space character? (A necessity.)
  5. Is an address really required? (If it's not, don't mark it as such, and don't force someone back to complete it.)
  6. If something is not required but entered, does it still conform? (eg. Address isn't required, but they filled it with random garbage... they're probably a spammer.)
  7. Do the comments contain URLs? (Maybe a spammer? Or they might be telling you about a problem on your website.)
  8. Am I expecting BBCode (something usually found in web forums) in the comments? (Probably not; this is more than likely a spammer.)
  9. Did they put a space in their name? If they didn't, is it okay to accept information on a first-name-only basis?
  10. Did they include anything that is obviously an SQL injection attempt? (eg. "; delete from users where 1=1")
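A few of the checks above, sketched in Python (the post's actual code is PHP; the function names are mine, and these are illustrative heuristics, not production validators):

```python
import re

def email_looks_valid(email):
    # Check 2: an "@" symbol with at least one period somewhere after it.
    at = email.find("@")
    return at > 0 and "." in email[at + 1:]

def phone_chars_ok(phone):
    # Check 3: digits plus the separators people actually type,
    # with an optional "x"/"ext"/"extension" suffix.
    return bool(re.fullmatch(
        r"[0-9+\-. ()]+(?:\s*(?:x|ext\.?|extension)\s*[0-9]+)?",
        phone, re.IGNORECASE))

def looks_like_sql_injection(text):
    # Check 10: crude screen for the most obvious injection fragments.
    patterns = [r";\s*delete\s+from", r";\s*drop\s+table", r"'\s*or\s+1\s*=\s*1"]
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)
```
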

Are they real?

This is a huge question in terms of securing a web form. If the attacker is using a program to auto-complete the form to submit Spam, or if they're using a group of computers (bot net) to attack the form and bring down the server, then how can you stop it? Simple. See if they are real.

Typical Captcha

CAPTCHA is a bad thing in many ways: it makes people angry, it's hard to complete, it's not always readable, and it's a complete waste of time. CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart." It's basically a quick fix for guessing whether someone is human. International users? Avoid CAPTCHA.

Hashing and tests for human skills

Just as CAPTCHA makes an attempt to make people prove they're human, there are a couple of things you can do behind the scenes to see if someone is a real person.
  1. Did they complete the form in a humanly possible time?
    Depending on the required information from the form, and the type of information expected, run a few timed tests. Use things like auto-fill and auto-complete to try and beat your human times. In my experience, most bots will on first attempt complete the form in less than 2 seconds.
  2. Did they complete the form using two different IP addresses?
    Simple... check their IP address and pass it to the handler page, then check it on the next page for a change. Oh, but what if they modify the IP address you're passing along? That's where hashing comes in.
  3. Did they even use the form? Check this with simple hashing.
Most programming languages include hashing functions. I use a hash as a test to see whether someone is altering my expected information or modifying anything that I'm setting myself. If they are, chances are they're not using my web form, unless they're using a browser plugin that lets them rewrite the HTML components.

Some of the tricks I do on the form:
  1. Pull their IP address. Include this in the info headed to the form. (If their location changes they're using a proxy or they're not using the form.)
  2. Pull the timestamp for the viewing of the page. Include this in the info sent to the handler. (If it's too short, then they're not real. If it's too long, then they're not real.)
  3. Pull the browser's User Agent. Include this in the info sent to the handler. (Does it look like a real browser? Does it say cURL?)
  4. Pull a random number and hash it also. (This will not be used at all.)
  5. Don't label these things in an intuitive way, but rather place comments in the serverside code that indicate what you're expecting.
Hash the first three things together with something only known to you (a special word or phrase), and submit the result via a different method than the rest of the form: if you're using POST variables to submit the contents of the form, then submit the hash with a GET variable.

Some tricks on the handler:
  1. Pull their IP again. Does it match what was submitted?
  2. Pull the new timestamp. Subtract the old timestamp and see how long it took.
  3. Check the user agent. Is it a real browser? Does it have a keyword in it like Bot or cURL? If so, then it's more than likely not a real person.
  4. Do the new hash with the info passed from the original form. If it matches, then you know the form information wasn't altered. If it doesn't match, stop processing the form.
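Putting the form-side hash and the handler-side checks together, here is a minimal Python sketch (the sites described used PHP; the function names, the secret phrase, and the 2-second/1-hour thresholds are placeholder assumptions of mine):

```python
import hashlib
import time

SECRET = "replace-with-your-own-phrase"  # assumption: your private word or phrase

def form_token(ip, user_agent, timestamp):
    """Built when the form page is rendered; sent alongside the form fields."""
    raw = f"{ip}|{user_agent}|{timestamp}|{SECRET}"
    # The post describes MD5; sha256 would be the stronger choice today.
    return hashlib.md5(raw.encode()).hexdigest()

def handler_accepts(ip_now, ua_now, submitted_ip, submitted_ua,
                    submitted_ts, submitted_token,
                    now=None, min_seconds=2, max_seconds=3600):
    now = time.time() if now is None else now
    if submitted_ip != ip_now or submitted_ua != ua_now:
        return False                      # checks 1 and 3: IP or user agent changed
    elapsed = now - submitted_ts
    if not (min_seconds <= elapsed <= max_seconds):
        return False                      # check 2: not a humanly possible time
    # Check 4: recompute the hash; any edited hidden field breaks the match.
    return form_token(submitted_ip, submitted_ua, submitted_ts) == submitted_token
```

Because the token mixes in a secret the visitor never sees, editing the hidden timestamp or IP field (the view-source trick described below) produces a token mismatch and the submission is dropped.
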

What happens if I don't hash my submissions?

I did this myself when I was first trying to beat the spammers, and I started getting spam emails stamped 3 years prior, 100 years prior, or 2 years into the future. Investigating, I noticed (in my custom statistics app) that spammers were reloading the form over and over again (likely in view-source mode), altering the values of the timestamp, and resubmitting the form repeatedly, filling my inbox.

When I viewed the source and tried this myself, I saw what they were seeing: the hidden values in the forms. They were watching whether those values changed between loads. If something didn't change, it was a straight hash of something provided before the form was submitted (IP address or user agent). If it did change, it was a timestamp or a random number. I tested by hashing timestamps initially, which led to the same results: the spammers were guessing my hashing method, hashing edited timestamps themselves, and presenting them to me en masse. When I started hashing random numbers as a salt (in the cryptography sense) with MD5, the spammers stopped filling out the form for a while. They couldn't figure it out, so rather than trying they would go elsewhere. Some still tried. Ultimately the only way to beat it was to complete the form as a human being. Some still do.

About 98% of my web form spam stopped when I started hashing my results and testing thoroughly. I capture the failed attempts in a text file (for logging, false positives, and form-hardening stats) and pull the country for each submission based on IP address. Most are from India, China, and Russia, with a few from the Middle East.

The whole picture

The best method I've found for beating spam with some of my corporate clients is the hashing method I've described, combined with a scoring system (through filtering) to gauge how likely a submission is to be spam. If someone doesn't complete 2 or 3 of the form fields correctly, they get a likely-spam score. Certain things are a dead giveaway: no @ symbol and they're a likely spammer, or "Viagra" in all its incarnations (\/iagra, viaGra, v!agra, etc). You have to be careful if you're blocking words, though: "Cialis" is in the word "speCialist." I include the results of my spam scoring in my text-only files and in the copies sent to my clients. We occasionally see a false positive in a foreign language, but for the most part the spam scoring is dead on.
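As a rough illustration of the scoring idea (the weights, field names, and patterns here are all invented for the sketch, and it's Python rather than the PHP used on the actual sites), note how word boundaries keep "Cialis" from firing on "speCialist":

```python
import re

def spam_score(fields):
    """Toy spam score: each failed check adds points. Thresholds are made up."""
    score = 0
    if "@" not in fields.get("email", ""):
        score += 1                      # no @ symbol: likely spammer
    comments = fields.get("comments", "")
    # Obfuscated brand names: viaGra, v!agra, \/iagra, etc.
    if re.search(r"(?:v|\\/)[i!1]agra", comments, re.IGNORECASE):
        score += 2
    # \b word boundaries: matches "Cialis" but not "speCialist".
    if re.search(r"\bcialis\b", comments, re.IGNORECASE):
        score += 2
    if re.search(r"https?://", comments, re.IGNORECASE):
        score += 1                      # URL in comments: maybe spam, maybe a bug report
    if re.search(r"\[/?(?:url|b|i)\]", comments, re.IGNORECASE):
        score += 2                      # BBCode where none is expected
    return score
```
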

In the next segment I talk about why people attack sites online.

Check back for more posts. I'll update this entry when I add more to the series.

Until later,
-Chris

Tuesday, January 29, 2013

Technology's advancement requires competition.

I'm 100% in favor of a company starting out small with an innovative idea and then expanding. When this happens with little competition against the innovative company, it's something magical to behold. However, everyone benefits from competition.

Without competition a company can set prices to whatever it wants. If an innovative company has a new product and no other companies step up to compete, it could be because the other companies lack the funding or the knowledge, or because they see no benefit in competing at all (for a product they don't believe will be successful).

More and more in this day and age, companies aren't getting ahead because they have superior products or because buyers have a heightened sense of brand awareness, but because a company benefits from information or services obtained illegally, or pays other companies not to use the competition.

When a person knowingly supports a company with ill-gotten gains, it encourages the company to keep doing what it was doing. With an unfair advantage, a company can put all competition out of business and then set prices to control a market. If the item in question is technology, it can control prices globally. Without competition, technological advancement is also in the hands of the only company left standing. If that company decides not to advance because it's not in the best financial interest of shareholders, the results could be detrimental to a product line, a piece of technology, even society itself (just think if one company controlled the water supply... see Wikipedia on Water Privatization).

AMD Advanced Micro Devices and why you should not buy Nvidia or Intel (at the moment)
Many companies benefit when they hire a disgruntled employee from a competitor. They receive inside knowledge of the inner workings of the competition, and they benefit from any project the employee might have had knowledge about: not to a degree that lets them copy the technology entirely or beat the original company to a patent (unethical), but enough to prepare for the competing technology, software, or product to reach the market and to find ways to innovate and compete ethically and legally. This is the reason companies have employees sign non-compete agreements, along with clauses stating that anything you work on during employment is the property of the company as "work for hire." I feel non-compete clauses should be illegal, though in most cases a company will be hard-pressed to keep an ex-employee from obtaining gainful employment in their field of expertise. Work for hire should be allowed if the company is funding the research, but if the company can show no receipts for the time when the employee came up with the idea, then it should belong to the employee.

Sometimes, however, companies don't receive information legally; instead they pay recruiters to tempt employees of the competition into selling inside information before those employees have left the company. Insider theft and espionage not only cost companies billions, they can put a company out of business and hurt everyone involved.

AMD Advanced Micro Devices stock values 1/28/2013.

Two cases have come to light in recent years involving AMD and unfair practices against its business. In a lawsuit filed 1/14/2013 - AMD vs. Feldstein, Desai, Kociuk, and Hagen - AMD is seeking damages and injunctions against four people who allegedly collected data from the AMD database and sold inside information to AMD's main competitor in the graphics card market, NVidia. If only one person had sold the information to NVidia, or attempted to sell it, one could argue that NVidia had nothing to do with the case and that the seller was simply opportunistic. Since four people sold information, it looks more like NVidia might be recruiting these people and paying for ill-gotten information.

The second case that comes to mind is an antitrust issue between Intel and AMD. A "complaint" was filed in NY vs. Intel that goes into detail about Intel suggesting to its clients that they stop using AMD chips. In the EU, an antitrust case was filed against Intel in 2009, and the courts ruled in favor of payment to AMD. Intel's counter: "Intel takes strong exception to this decision. We believe the decision is wrong and ignores the reality of a highly competitive microprocessor marketplace..."

In short: no, it is not innovative to pay off the market and keep companies from purchasing your competitors' products.

Is it okay to buy anything Apple branded?
While I definitely like the road Apple has taken with their machines recently in terms of speed, I give second, third, and even tenth thoughts to buying Apple products. Apple has become a company that ignores human rights when it comes to building their portable devices. Another reason is that Apple exclusively uses Intel chips in their machines and does not allow installation of their operating system on any other platform (including AMD). From Apple's EULA for Snow Leopard:

"You agree not to install, use or run the Apple Software on any non-Apple-branded computer, or to enable others to do so."

When companies (Psystar and PearC) selling hardware under their own branding with the Mac OS operating system installed were sued by Apple, the courts found that the use of the Apple operating system on non-Apple hardware was a violation of the DMCA. Meaning it's illegal. This makes me wonder whether the Librarian of Congress has received any compensation for helping Apple become a monopoly in this regard, since the Library of Congress oversees exemptions under the DMCA (Digital Millennium Copyright Act). Because of this I have only purchased low-end Macs for checking email, and I maintain an AMD 12-core server as my primary workstation.

Wednesday, January 23, 2013

Hybrid Postal Delivery Services: How they destroy brands

I recently ordered an upgrade to one of my workstations from a "local" vendor about 60 miles from my present location, just outside of Chicago. In my experience, most packages in the greater Chicago area sent through the United States Postal Service take a maximum of 3 days from the time they're sent. This usually involves going from a local post office, to a main sorting facility, back to the destination post office, and into the hands of the postal carrier. Three days is on the high end; it usually takes only two. It all depends on whether the address is handwritten or the sender printed a barcode with all of the CASS-certified presort detail on the label. (Hand reading and sorting adds time to delivery.)

When I was making my purchase from the website (I'm giving them a second chance, hence the failure to mention them directly), I was presented with a couple of options: FedEx 2-day, which would cost me an additional $15; FedEx Overnight Air at $30 (no air involved for a local delivery); and several other highly expensive services. I trust the Postal Service very little, but rather than paying for extra non-essential services when my package could be delivered in two days using the normal postal system, I elected to use the "free" service, which guaranteed 2-3 days.

My receipt from the vendor indicated the 2-3 day delivery, and now three days have come and gone. The dilemma: rather than using the standard United States Postal Service in a local, traceable way, the vendor used one of the new hybrid services, in this case UPS SurePost 'Saver.' I HATE seeing this as the free option for local shipping because it almost guarantees the package will take an extra few days, if it isn't lost outright. FedEx has a similar service called FedEx SmartPost... equally bad (if not worse). When either of these services is used, I end up watching my package sit within 2-3 miles of the office for 2-3 days before it is finally delivered. Something about the process makes the postal service or the shipping service delay the final delivery.

After looking at the tracking detail last night and expecting my package to arrive today, I went down and met my postal carrier at the box and, surprise, surprise... no package. He looked at me rather puzzled; I looked at him rather puzzled and bid him a good day. He's a nice guy, and so are my local UPS drivers... it's not their fault... it's the logistics.

Upon returning to the office I checked the tracking detail. Apparently my package was "ROUTED TO WRONG LOCAL POST OFFICE. PACKAGE WILL BE TRANSFERRED TO CORRECT POST OFFICE FOR DELIVERY," according to the UPS website. When I called UPS, rather eager to pick up my package in person (because I'm tired of waiting), the person on the phone told me that my package would be delivered either today or tomorrow and that they were on top of it. When I asked if I could pick up the package, they said they weren't sure exactly where it was... a breakdown in the tracking detail between the two services involved, in this case UPS and USPS.

This brings to light several reasons why these services DO NOT NEED TO EXIST AT ALL. There are no savings in this model for anyone: shipper, receiver, or the shipping service(s). When a company loses or misdelivers a package because of the complexity of the shipping logistics, it has the potential to smear all of the brands involved, and that costs companies money (think billions). In fact, here I am smearing their brands: DO NOT USE the UPS SurePost or FedEx SmartPost 'Saver' services for delivering packages to your customers or clients; they will find other vendors. Offer simple yet traceable delivery services. I may not purchase anything else from the original company for fear of not receiving it on time (or at all), and I will avoid the UPS SurePost 'Saver' delivery service like the plague, looking instead for a vendor that will simply send my package to me in a timely manner, without added expense and patience required on my part.

If one were to read the countless reviews on Amazon.com, Newegg.com, ebay.com, or several other websites where reviews abound, one would notice a pattern of people who give an item a low rating simply because of a shipping delay. This hurts not only the success of the product (the manufacturer's brand) they are berating, but also the reputation of the company (the seller's brand) selling it. No doubt the people leaving these ratings have no concept of what they are doing; nevertheless it happens, and it is also costly.

When a package that should normally touch only two local post offices and a main sorting facility instead bounces through three UPS sorting facilities, a local UPS branch, two local United States Post Offices, and multiple mail carriers, there is an increased risk of the package being mishandled, misdelivered, lost, stolen, and/or destroyed.

My recommendation: if you're UPS, USPS, FedEx, or any company that wants customers who spread good word of mouth about your products and services, DO NOT use (or provide) any of the hybrid sending services. Unlike the normal services customers have come to love and expect, these complexities added to the rather simple purchase-and-delivery model are a risk to all.


That's all for now.
-Chris