Josh Breckman worked for a company that landed a contract to develop a fairly large government website. Much of the project involved developing a content management system so that employees would be able to build and maintain the ever-changing content for their site.
Because they already had an existing website with a lot of content, the customer wanted to take the opportunity to reorganize and upload all the content into the new site before it went live. As you might imagine, this was a fairly time-consuming process. But after a few months, they had finally put all the content into the system and opened it up to the Internet.
Things went pretty well for a few days after going live. But, on day six, things went not-so-well: all of the content on the website had completely vanished and all pages led to the default "please enter content" page. Whoops.
Josh was called in to investigate and noticed that one particularly troublesome external IP had gone in and deleted *all* of the content on the system. The IP didn't belong to some overseas hacker bent on destroying helpful government information. It resolved to googlebot.com, Google's very own web crawling spider. Whoops.
After quite a bit of research (and scrambling around to find a non-corrupt backup), Josh found the problem. A user copied and pasted some content from one page to another, including an "edit" hyperlink to edit the content on the page. Normally, this wouldn't be an issue, since an outside user would need to enter a name and password. But, the CMS authentication subsystem didn't take into account the sophisticated hacking techniques of Google's spider. Whoops.
As it turns out, Google's spider doesn't use cookies, which means that it can easily bypass a check that the "isLoggedOn" cookie is set to "false". It also doesn't pay attention to JavaScript, which would normally prompt and redirect users who are not logged on. It does, however, follow every hyperlink on every page it finds, including those with "Delete Page" in the title. Whoops.
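The broken logic can be sketched in a few lines of Python (all names here are hypothetical; the article doesn't show the actual code). The check only rejects a request when the cookie is explicitly "false", so a client that sends no cookies at all, like a crawler, sails right through to a destructive action sitting behind a plain GET link:

```python
def is_blocked(cookies: dict) -> bool:
    # Broken check: only blocks when the cookie is explicitly "false".
    # A client that sends no cookies at all passes this test.
    return cookies.get("isLoggedOn") == "false"

def handle_delete(cookies: dict, page_id: str, pages: dict) -> str:
    """Destructive action reachable from an ordinary GET hyperlink."""
    if is_blocked(cookies):
        return "redirect:/login"
    pages.pop(page_id, None)  # content gone, no authentication ever proven
    return "deleted"

def is_authenticated(sessions: set, cookies: dict) -> bool:
    # Server-side fix: require positive proof of a known session,
    # not merely the absence of a "logged out" marker.
    return cookies.get("session_id") in sessions
```

A crawler calling `handle_delete({}, ...)` deletes the page, while the fixed `is_authenticated` denies anyone who can't present a valid session. The other half of the fix is keeping destructive actions out of GET links entirely, since safe methods are exactly what crawlers are entitled to follow.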
After all was said and done, Josh was able to restore a fairly old version of the site from backups. He brought up the root cause -- that the security could be beaten simply by disabling cookies and JavaScript -- but management didn't quite see what was wrong with that. Instead, they told the client to NEVER copy-paste content from other pages.
^^^ Nope, I'm not either. Dear god, what CMS would let them do that though?!?!?
Sabre (Julian) 92.5% Stock 04 STI
Good choice putting $4,000 rims on your 1990 Honda Civic. That's like Betty White going out and getting her tits done.
Just to let you know, I am petitioning to make ^^^^ the failure page for problems here at FannieMae!
If you see like 30+ hits from fanniemae.com on the website (although, I guess you all wouldn't have access to the logs, lol) you'll know why now
sabre wrote: ^^^ Nope, I'm not either. Dear god, what CMS would let them do that though?!?!?
I did ColdFusion web development for about a year in college, and have some time doing PHP... it's so easy to get wrapped around the axle developing a site and forget the very simple security elements involved. I wrote an online voting site for student government, and had a couple people do everything they could to trick the system: back buttons, turning off cookies, turning off JavaScript, entering in URLs directly, etc. Slowly I was able to fix all of the security issues, but that process took almost as long as it took to develop the 'insecure' system in the first place. You'd think that a government contractor could do something better than a 20 year old getting paid $10/hour.
Jason "El Zorro" Fox '17 Subaru Forester 2.0XT
DCAWD - old coots in fast scoots.