Well, for the past couple of weeks it seems like we're getting more spam. We've always had some, but it seems more frequent, like someone might have passed our info around? I don't think it's anything we can't handle, though.
I don't think the two are related.
The attacks on the forums exploited a vulnerability in vBulletin that I discovered only after our site had been hit. vBulletin posted about the potential threat on their announcement forum, but did not notify customers by e-mail. More on that later in this post.
The problem is that the latest "oddities" have made the forums unusable. Is there another communication tool (Steam, Twitter, etc.) that we should use in such cases?
Yes. As Lloren mentioned, we have a Twitter account, which we would normally use to post updates about planned and emergency outages.
Unfortunately, I haven't memorized the password for the @cgalliance Twitter account because (1) it's a strong password and (2) there's no space left in my brain for passwords.

(Seriously, you'd be amazed at how many passwords an online community leader has.)
Because I didn't know the password, I couldn't log in from work. And because the attack took place while I was at work, I couldn't access the Twitter account.
But more on that later in the post.
It seems like you guys have really been on the ball and getting the forums restored in a VERY timely fashion (and we are very appreciative of that!)
Thank you. It is good to know our work is appreciated (though Lloren and Gerbil deserve the bulk of the credit for keeping the forums running as smoothly as they do).
- but something like Steam or Twitter could be a way for the admins to let us know "we're aware of the problem and are working on it" or something.... Just a thought...
Agreed.
In the interest of fairness, I did post a placeholder page to the /forums directory yesterday with a statement to that effect. Unfortunately, anyone accessing sub-forums directly wouldn't see the notice, and there was no way to get to the admin control panel to turn the forums off (as we typically do during planned maintenance and upgrades).
The problem there is that some of us use no social media. Patience is then necessary.
Oh, but what fun is that?
Fortunately, you don't need to sign up for a Twitter account to view the @cgalliance Twitter feed. You can just visit https://twitter.com/cgalliance and keep the link handy for future reference.
Thanks for the hard work you guys are putting in dealing with what amounts to some idiot attacking the site. I was in withdrawal yesterday trying to get in but I know what a great big headache this is for you guys. Will be praying for ya, as well as a newer, tamper resistant version of vBulletin.
You are welcome, sir. It's good to know that these forums matter enough to people that they're bothered when they're unavailable, and it's encouraging that others recognize the time and effort it takes not only to secure the forums but to restore them after an attack.
Yes, we are on Twitter. When the forums go down you should see something here:
https://twitter.com/cgalliance
We'll be more diligent in keeping this updated as things come up.
And that was all on me. Lloren was unavailable when the attack took place, and I was scrambling to get the forums back online--or at least close any security holes--all while trying to repair 7 or so computers on a workbench.
Not gonna lie. Yesterday was a bad day. (It didn't help that my cold was getting worse; today was even worse in that regard.) But the attack taught us some important lessons:
We need a plan and we need better communication. Yesterday was a mad scramble. I didn't have the tools I needed to properly address the situation while at work, and Lloren was unavailable. A plan wouldn't prepare us for everything, but it would have to be an improvement on yesterday. Even something as simple as "put up a prepared placeholder page, post 'Yes, we know there's a problem and we're addressing it' to the Twitter feed, and post again when the situation is resolved" would give us a clear process for reacting to attacks and other unforeseen issues.
We need a backup server operator. Lloren is awesome at what he does. And honestly, much of yesterday's troubles (at least, my troubles) could have been avoided if I'd just shut up, stepped back, and waited until Lloren could resolve the issue. But I'm not that patient and I tend to fixate on, well, fixing things. That tendency often causes me trouble, but it's also turned out to be handy in my line of work (information technology).
In my role as Tribe of Judah President, I've made a focused effort to get away from game server operations. I no longer work on ToJ's Team Fortress 2 server (even when Valve drops a new patch right before or during our TF2sday game night and I'm sorely tempted to intervene) and I'm planning on stepping away from working on our unofficial Natural Selection 2 server as well (though that's a work in progress; I updated the server last night when an update dropped 30 minutes after our Thursday game night started).
I plan to take the same approach with the CGA: delegating server operations to others. And after yesterday's attack (where I almost ended up doing more damage than the hackers, but more on that another time, perhaps), I probably shouldn't be allowed to work on the CGA box.
Since I'm stepping away from server operations, we most definitely need a backup server op for when Lloren is unavailable. We'll need someone who's a Linux expert, someone trustworthy, and someone who can respond quickly and reliably.
We need multiple remote backup sites. Expect a call for donations to go toward an Amazon S3 (or something similar) account before long.
I'll also be coordinating with Lloren to set up, test, and maintain connections with remote servers where we can backup forums data.
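The gist of what Lloren and I have in mind can be sketched in a few lines of shell. The host names and paths here are hypothetical placeholders (the real dump would come from mysqldump on the CGA box, and the rsync targets would be whatever remote sites we end up with); the point is that a copy only counts as a backup once its checksum verifies at the destination:

```shell
#!/bin/sh
# Sketch: keep a checksummed copy of the nightly dump and verify it.
# All names are placeholders, not our actual setup.
set -e
dump="forums-$(date +%Y%m%d).sql.gz"

# Stand-in for the real mysqldump output so this sketch is self-contained.
[ -f "$dump" ] || printf -- '-- placeholder dump\n' | gzip > "$dump"

# Record the checksum once, at the source.
sha256sum "$dump" > "$dump.sha256"

# Push dump + checksum to each remote site (commented out; hosts are made up):
# for site in backup1.example.com backup2.example.com; do
#     rsync -e ssh "$dump" "$dump.sha256" "$site:/backups/cga/"
# done

# On the receiving end, verify before trusting the copy:
sha256sum -c "$dump.sha256"
```

If yesterday taught me anything, it's that last line: a backup you haven't verified is a backup you only hope you have.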
After I got home from work yesterday, I created a backup of the forums, downloaded it to my computer, downloaded the daily backup as well, then dropped the tables from the forums database. When I first opened the daily backup, I scrolled to the end of the file and my heart almost stopped when I saw complete gibberish. When I opened the manual backup, the entire file looked like gibberish.
It felt like someone had punched me in the gut. Two backups, both corrupt? I panicked, and panic felt like a perfectly reasonable response. TEN YEARS OF POSTS AND NO VIABLE BACKUP?
Fortunately, neither backup was corrupt. The daily backup had not yet finished loading, so when I scrolled to the bottom of what had loaded and saw image code, I mistook it for corruption. The manual backup was actually a .gz file that I had saved with a .sql extension. I was able to restore the daily backup and roll the site back to before the attack. I had already closed the exploit vector, so all was well.
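For anyone who ever finds themselves in the same spot: check what a downloaded dump actually is before concluding it's corrupt. The filename below is hypothetical, but the trap is exactly the one I fell into, a gzip archive saved with a .sql extension:

```shell
#!/bin/sh
# Sketch: inspect a suspect dump before panicking. "backup.sql" is a
# made-up filename standing in for the downloaded file.
backup=backup.sql

# Stand-in file so the sketch is self-contained: gzip data misnamed .sql.
printf 'CREATE TABLE posts (id INT);\n' | gzip > "$backup"

# 'file' inspects the magic bytes, not the extension.
file "$backup"

# If it's really gzip data, rename it and test the archive's integrity
# instead of opening it in an editor and seeing "gibberish."
if file "$backup" | grep -q 'gzip compressed'; then
    mv "$backup" "$backup.gz"
    gzip -t "$backup.gz" && echo "archive is intact, not corrupt"
fi
```

Thirty seconds with file and gzip -t would have saved me a near heart attack.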
All said and done, we only lost about 8 hours of activity (and that during a non-peak time period).
But the scare probably took a few days off my life.
And while we have a scheduled backup in place, we need redundancy. Lloren and I will be working to ensure we are prepared for a worst case scenario.
So yes, yesterday was indeed a mess, but it could have been far worse. A 9-hour outage and 8 hours of lost posts is really, really tiny next to the possibility of losing a decade's worth of community records.
And as odd as it may sound, the attack may have been a blessing in disguise. I sincerely doubt it was the attackers' purpose, but they've helped us by highlighting some things we can improve upon, both in terms of disaster preparedness and communication.
I'm not quite prepared to thank them for their accidental help (yesterday was still awful, after all), but it helps me calm down when my flesh wants to erupt in rage.
So there you have it, folks. Attackers attacked, I broke things then fixed them, and we're better prepared for whatever may come next.
