Archive for category System Administration

A low-tech way to get a mail server blacklisted using the victim’s own forums

As they say in the military, “If it’s stupid and it works, it isn’t stupid”.

This is a low-tech, labor-intensive way to get a victim’s email server blacklisted at a major public email service, using the victim’s own public forums. The email provider was very helpful in getting this sorted out, and it’s not clear that this “attack” is specific to them.

(This situation can also happen “accidentally” if a number of users subscribe to your forums, change their minds, and then report the notices as SPAM instead of unsubscribing from the forums. That doesn’t seem to be the case in this instance.)

  1. Sign up for a few free email accounts with a public email provider. Get as many as you can, perhaps at least 20. Get some friends to help you. More is better.
  2. Go to the victim’s public forum servers and use each email account to sign up for one (or in some cases more than one) forum account per public email account. This gives you 20-100 forum accounts. Let’s use 20 as the lower bound and 100 as the practical upper limit.
  3. As an alternative, if the forum doesn’t use opt-in confirmation, just subscribe a few hundred random people to get the forum notifications. Let them do the work for you.
  4. Set each forum account to send an email notification for every forum update, or as many as possible. Some forum systems allow you to “watch” individual threads, some allow you to “watch” the entire forum system, getting one email for every other user’s post.
  5. In a moderately large-ish forum system, there could be perhaps 1 update per minute, so 60 per hour – that’s 60*20 accounts (1200) or, worst case, 60*100 accounts (6000) emails per hour going out from the forum system, perhaps through the victim’s outbound SMTP server (see the sketch after this list). Either way, the target public email system is seeing a lot of email coming from one domain or IP range very quickly.
  6. If the rate alone isn’t enough to get the forum or SMTP server blacklisted, then go into each of the public email accounts and mark ALL the forum notifications as SPAM. Or if you subscribed a few hundred random people to the notifications, they’ll do the work for you!
  7. The high email rate, combined with the 1200-6000 SPAM complaints, should be enough to get either the forum server or the victim’s outbound SMTP server blacklisted.
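
For a sense of scale, here’s a minimal back-of-the-envelope sketch in Python using the assumptions from step 5. The update rate and account counts are illustrative, not measurements from any real forum:

# Back-of-the-envelope numbers for the scenario above. The update rate and
# account counts are the assumptions from step 5, not real measurements.

forum_updates_per_hour = 60          # ~1 forum update per minute
accounts_low, accounts_high = 20, 100

for accounts in (accounts_low, accounts_high):
    emails_per_hour = forum_updates_per_hour * accounts
    print(f"{accounts:3d} watching accounts -> "
          f"{emails_per_hour} notification emails/hour, "
          f"{emails_per_hour * 24} per day")

# 20 accounts  -> 1200 emails/hour (28800/day)
# 100 accounts -> 6000 emails/hour (144000/day)
# If each of those notifications is later marked as SPAM, that is also the
# complaint volume the receiving email provider sees from one sending domain.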

Note that each and every part of this situation is working as intended. It’s only when they are combined that you get problems. (Unless the forum doesn’t do email address opt-in verification, in which case it’s all on you.)

This “attack” depends on these things:

  1. lots of manual labor, either by yourself or with some friends, or even some random victims
  2. a forum system that allows one user to cause the system to send lots of email based on the behavior of many people
  3. a moderately busy forum system
  4. a public email system that weighs sending rate and user complaints more heavily than message content
  5. a public email system that the victim’s user base depends on, as in “must communicate with users in that public email system”

Fortunately, this is relatively labor-intensive, and not amenable to automation.

Countermeasures are left as an exercise for the reader 🙂



Fewer chat choices lead to a stronger, more collaborative global community

We reduced the number of choices for online chat and ended up with a more concentrated, focused and collaborative global community. This wasn’t planned, it happened organically, and it’s still in progress. But it shows the value of allowing and encouraging people to concentrate into common systems, even ignoring any financial considerations.

As recently as four years ago we had a multitude of separate group and direct chat applications. We had multiples of everything, from private in-house Internet Relay Chat (IRC), Jabber and similar servers to several generations of Microsoft products. At one point I counted no fewer than 8 different chat servers that I needed to use myself, just to reach my customers and peers. For the record, that was two different IRC servers, two different Jabber servers (Jabber being the original open source XMPP server), two different Slack instances, Microsoft Lync (now Skype for Business), and Microsoft Office Communicator.

(This doesn’t even get into the multiple WebEx, Zoom, GoToMeeting and special conferencing systems.)

Completely ignoring the efforts and costs of running the in-house IRC and Jabber servers, the major problem was people wasting time trying to remember (or figure out) which app or server they needed to use to reach a particular person or group. This led to lots of conversations like this…

“Does Susan use Slack?” “No, she’s on Microsoft.” “I don’t see her in Lync?” “She’s in Communicator”.

“Dear IT – I’m in Tokyo and I can’t reach UK R&D over Slack. Please fix.” “R&D is in the other Slack.” “What other Slack?”

This was bad enough for individuals, but finding out that perhaps 5 people who needed to collaborate were “homed” on three different platforms required getting them all accounts on a single platform just for that particular project or need. This led to an even worse explosion of per-user accounts, as now everyone had to have multiple accounts on multiple platforms.

This is similar to, but worse than, the fragmentation in social media. While the “do I find you on Facebook or Google+ or Reddit or Twitter or Instagram or email or SMS?” question is annoying for individuals, having this in a global company is untenable in the long term.

Slack started as a grass roots effort among several of our internal communities. These groups “routed around” the IT-provided solutions and adopted Slack independently, and in most cases without knowing about each other. Obviously, this initially made the problem worse! But, as these groups found out about each others’ Slack instances, they began to talk about whether it would be good for them to merge their communities into fewer (or a single) Slack instance.

Importantly, all the IT organizations around the world realized what was happening and helped and encouraged the move. They made Slack a supported and preferred solution, instead of fighting it.

Over the past year some of our groups around the world have been organically moving their communities to Slack, retiring old services on their own. All the IRC servers and most of the Jabber servers have been decommissioned. Multiple Slack instances have been merged into a new “main Slack”, with more planned to move this year. More importantly several “new” chat systems have NOT been launched; those communities have agreed to adopt Slack instead.

This only worked because people wanted “one place” to gather and Slack offered a “good enough” experience. It’s not important that they selected Slack; what matters is that they all wanted to be in “one place”, and that “they” selected Slack rather than having it imposed. And frankly, Slack was better than almost all of the legacy systems.

It’s not clear that we’ll stay on Slack forever, as some of the promised Microsoft solutions may offer better integration with Active Directory, desktop voice and video chat, etc. But until those come, all our users have the option of a single place to collaborate.


Why I killed our IPv6 project…

Seven years ago we started an “IPv6 project”. The goal was to deploy IPv6 throughout our internal game studio network. After two months of analysis, I approached my boss and recommended we kill it. At least as an “IPv6 project”. It was reborn as a “clean up our network architecture, and oh by the way, add IPv6 (and a bunch of other things)” project.

Read the rest of this entry »


2018? Wait, what?

Wow, I’m behind. It was a busy year, and there wasn’t a lot going on that I could really talk about publicly.

The recent Meltdown and Spectre bugs have brought back some memories from Orange Book days. I’ve also been spending a lot of time thinking about “IT transformation” and non-technical stuff. And I’ve been to the UK and Japan, twice each, which may become the “new normal”.

Let’s see what happens in the next 12 months.


Zabbix “became not supported” – solved

I think I’ve found one of the answers to a long-annoying Zabbix issue: SNMP items “flapping” between “became supported” and “became not supported”.

TL;DR – using an SNMPv1 query against an SNMPv2 device will confuse Zabbix. You’ll see intermittent failures of different tests as the device flaps between OK and “unknown”. This can be hard to track down, as it’s not a hard, repeatable failure. It’s not the only cause of this error, but fixing this will solve many of the issues.

Details:

While looking through our Zabbix server logs I found LOTS of these:

2031:20161027:111122.224 item "netapp-cluster.thuktun.com:netapp.disk.prefailed.count" became supported
2028:20161027:112119.172 item "netapp-cluster.thuktun.com:netapp.disk.prefailed.count" became not supported: SNMP error: (noSuchName) There is no such variable name in this MIB.
2030:20161027:120146.448 item "netapp-cluster.thuktun.com:netapp.disk.prefailed.count" became supported
2028:20161027:122120.026 item "netapp-cluster.thuktun.com:netapp.disk.prefailed.count" became not supported: SNMP error: (noSuchName) There is no such variable name in this MIB.

All of these referred to a NetApp in cluster mode, but I found a few similar messages related to some “NetBotz” cameras as well. Additionally, the actual test item varied; there were about 6 different tests that were all failing intermittently. The failing tests were:

  • netapp.disk.prefailed.count
  • netapp.disk.cfe
  • netapp.disk.name
  • netapp.disk.version
  • netapp.disk.failed.count
  • netapp.disk.spare.count
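
To see how widespread the flapping was, a small script over the server log can tally the “became not supported” events per item, based on log lines like the ones quoted above. This is just a rough sketch (Python); the log path is an assumption for a typical install:

#!/usr/bin/env python3
# Tally Zabbix items that logged "became not supported" events,
# matching log lines like the ones quoted above.
import re
from collections import Counter

LOG = "/var/log/zabbix/zabbix_server.log"   # assumed location; adjust for your install
pattern = re.compile(r'item "([^"]+)" became (not supported|supported)')

flaps = Counter()
with open(LOG) as f:
    for line in f:
        m = pattern.search(line)
        if m and m.group(2) == "not supported":
            flaps[m.group(1)] += 1

for item, count in flaps.most_common(10):
    print(f"{count:5d}  {item}")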

A few Google searches returned some items related to this kind of issue, going back to 2013.

All of these talk about Zabbix trapper vs. Zabbix agent, that is, using the wrong type of check for the test item, but there’s no mention of SNMP.

Let’s look at the Zabbix configuration. Are we using the trapper or the agent for these test items?

[Screenshot: Zabbix item configuration for the NetApp template, showing a mix of SNMPv1 and SNMPv2 item types]

Note that the NetApp template doesn’t use the Zabbix trapper or agent; it uses SNMP. But some tests are SNMPv1 and some are SNMPv2. This is likely because some versions of NetApp have had varying support for v1 and v2 over the years, and whoever created the template originally started with just v1. Over the years, as more test items were exposed, new tests were added using SNMPv2, while the old tests were left at SNMPv1.

Interesting. All of the failing tests are using SNMPv1. Not all v1 tests are failing, but all failing tests are using v1. There’s nothing here about Zabbix trapper or the Zabbix agent, but there is a (potential) mismatch. This shouldn’t be a problem, but let’s find out.

Over the next few hours, as each failure showed up in the Zabbix logs, I switched that particular test to SNMPv2. After being changed, that test never again “flapped”.
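
If you want to see the mismatch outside of Zabbix, something like this rough sketch (Python driving net-snmp’s snmpget) polls one of the flapping OIDs with both protocol versions and counts the failures. The community string and OID below are placeholders, not the exact values from this setup:

#!/usr/bin/env python3
# Compare SNMPv1 vs SNMPv2c behavior for one of the flapping items by
# polling the same OID repeatedly with net-snmp's snmpget.
import subprocess
import time

HOST = "netapp-cluster.thuktun.com"
COMMUNITY = "public"                    # assumed community string
OID = ".1.3.6.1.4.1.789.1.6.4.9.0"      # hypothetical NetApp OID -- substitute your own

for version in ("1", "2c"):
    failures = 0
    for _ in range(20):
        result = subprocess.run(
            ["snmpget", f"-v{version}", "-c", COMMUNITY, HOST, OID],
            capture_output=True, text=True)
        # Rough check: nonzero exit or a "No Such ..." response counts as a failure.
        if result.returncode != 0 or "No Such" in result.stdout:
            failures += 1
        time.sleep(1)
    print(f"SNMPv{version}: {failures}/20 queries failed")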

It seems that the keys to solving this were:

  1. LenR’s comment from 2013 about incorrectly defined items (although he was mentioning the zabbix-sender, not SNMP)
  2. Realizing it wasn’t a problem with the trapper vs agent, or an incorrect item definition in the agent, but a mismatch in the server’s definition of the test item.
  3. Realizing that SNMPv1 and v2 are treated differently by the Zabbix server, and that a v1 test against a v2 device will usually work, but not always.
  4. The “soft” failure of the v1 test against the v2 device “presents” as a MIB problem (“SNMP error: (noSuchName) There is no such variable name in this MIB.”), not a protocol failure.

I changed all of the failing NetApp tests to SNMPv2 last week. Since then, every test that was changed from SNMPv1 to SNMPv2 has been fine; there have been none of these errors in the logs for 5 days.

Next: What about those NetBotz? Or maybe Zabbix meets IPv6 🙂


IPv6 at AWS – Route53

Hooray! AWS users can now serve their DNS info over IPv6. You could publish AAAA records before, but Route53 only answered queries over IPv4.

This finally gives AWS customers a way to deal with IPv6-only customers (as are appearing in Asia), who would otherwise have had to depend on ISP proxies or CGN (Carrier-Grade NAT), host their DNS elsewhere, or be unable to reach services hosted in AWS.
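
If you want to verify the difference between merely publishing AAAA records and actually answering queries over IPv6, you can point a resolver directly at one of the zone’s nameservers using its IPv6 address. A minimal sketch with dnspython (2.x); the nameserver address and record name are placeholders:

#!/usr/bin/env python3
# Check that a zone's nameserver answers AAAA queries over IPv6 transport.
# Requires dnspython (pip install dnspython). The nameserver address and
# record name are placeholders -- use the IPv6 address of one of your
# Route53-assigned nameservers and your own hostname.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["2600:9000:5300::1"]   # placeholder IPv6 nameserver address

answer = resolver.resolve("www.example.com", "AAAA")
for record in answer:
    print(record.address)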


IPv6 – now from COX (San Diego)!

As you recall, I’ve been lamenting the lack of direct IPv6 via my local ISP (Cox) since 2013.

It seems that some time in the past 3 months, they silently enabled IPv6 in my area! I was preparing to reconfigure my tunnel from tunnelbroker and decided to “just check”. Cox is now correctly(!) serving IPv6.

I had to turn off my Hurricane Electric tunnel a few months ago, as Netflix began blocking as many tunnel services as they could, over geo-location “issues”.

I was able to set my Apple AirPort Extreme to “auto configure” for IPv6, and I’ve got proper addresses, routers and even DNS over IPv6.
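
A quick way to double-check that you really got a global IPv6 address and default route (and not just link-local) is to let the kernel pick a source address for an IPv6 destination. A minimal sketch; Google’s public DNS address is used only as a well-known IPv6 destination, and no traffic is actually sent:

#!/usr/bin/env python3
# Connecting a UDP socket doesn't send any packets; it just makes the kernel
# choose the source address and route it would use for that destination.
import socket

try:
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.connect(("2001:4860:4860::8888", 53))   # Google public DNS over IPv6
    print("IPv6 looks good, source address:", s.getsockname()[0])
    s.close()
except OSError as err:
    print("No usable IPv6 route:", err)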

Thanks Cox!


Register4Less now supports IPv6 DNS!

I got a great follow-up from my domain registrar Register4Less today. A few weeks ago, I had asked about when their DNS would fully support IPv6.

They’ve allowed AAAA records in their hosted DNS for years, but they only accepted queries over IPv4 until this week.

This is just another reason that I love R4L’s support. When I had asked them about IPv6 DNS before, they said it was coming “soon”; they couldn’t give a for-sure date, but promised to let me know.

When they turned up IPv6 DNS this week, they proactively sent me an email letting me know that the service was available, and answered a few of my questions (within literally 5 minutes!).

Register4less.com is the official DNS provider of UserFriendly.org. If you work in IT, you should know this long-running webcomic.


Recovering a compromised WordPress site – Part 4 (import into wordpress.com)

In Parts 1, 2 and 3 the focus was on getting the blog data out of the old system, cleaning it up, and converting it to a modern format that can be imported into a modern WordPress site. At this point, you can either spin up your own WordPress install, or just put it into hosted WordPress.

One of my goals was to never have to admin WordPress again. I’m tired of constantly having to patch it, or deal with security issues in plug-ins. So I’m putting everything into WordPress.com.

After Part 3, we’ve got a WordPress WXR (WordPress eXtended RSS) export/import file. We just need a place to import it into.
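
Before uploading, it doesn’t hurt to sanity-check the WXR file. WXR is just RSS/XML, so a few lines of Python can confirm it parses cleanly and show roughly how many items (posts, pages, attachments) it contains; the filename here is a placeholder:

#!/usr/bin/env python3
# Quick sanity check of a WXR export before uploading it: make sure the XML
# parses and count the <item> elements under the RSS channel.
import xml.etree.ElementTree as ET

tree = ET.parse("blog-export.wxr")            # placeholder filename
items = tree.getroot().findall("./channel/item")
print(f"{len(items)} items in the export")
for item in items[:5]:                        # peek at the first few titles
    print(" -", item.findtext("title"))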

Create a wordpress.com account and empty site

Start here. Follow the instructions to create an account and create an empty wordpress.com blog.  Don’t worry about the theme, you can set that later.

Load your WXR file

Log in to the control panel for your blog. Go to “Tools -> Import” to get to the Importer Screen.  Select “WordPress” and follow the directions to upload your WXR file.

View your new blog

In the left menu panel, select “My Sites -> View Site” to see your new blog, with (hopefully) all your old content. Check the older entries, check embedded links. They *should* all be there. If they aren’t, you may have to go all the way back to Step 2, and re-do the editing, then Step 3 and Step 4! I got pretty lucky, or was thorough enough with my initial editing, that everything I needed was recovered completely.

Enjoy a Frosty Beverage to Celebrate

May I suggest a great California IPA?


Recovering a compromised WordPress site – Part 3 (AWS, Bitnami)

At this point we’ve got a good MySQL dump of the compromised WordPress site. Now what?

To the cloud!

As I alluded to in the earlier parts, I’m going to load the MySQL dump from the ancient (compromised) site, then re-dump it out as WXR (WordPress backup) so that I can import the whole thing into WordPress.com.

I’ve got the database dump, now I need a WordPress instance to load it into.

In the olden days, I would have grabbed some hardware, loaded Linux, then MySQL, then Apache, then WordPress. I only need this for a few hours, so why spend half a day doing the basic installation? It turns out there’s a great alternative.

Bitnami has a pre-configured LAMP+WordPress image available from the Amazon Marketplace. I can use their image for only US$0.13/hour on a c1.medium AWS instance, or US$0.02/hour on a t1.micro instance. I figure I need at least two to three hours of run time, and I don’t want to run into any size/space limitations of the t1.micro. So I’ll gamble and use the c1.medium. That means I might spend a little over US$0.50 (c1.medium) if I need 4 hours, instead of only US$0.08 for 4 hours on the t1.micro. I’ll take that gamble 🙂

1. Spin up a WordPress instance using the Bitnami image

This was pretty easy. Just start from the Bitnami pre-configured image in the Marketplace, and then proceed to the launch area. You’ll see that there’s an m1.small instance type already selected. This is where you can decide to use a c1.medium, or take the m1.small default. Just proceed and spin up the instance. Then proceed to the AWS Console to get the DNS hostname.

2. Configure WordPress on the instance

At the bottom of the AWS console you’ll see a section labelled “AWS Marketplace Usage Instructions”.  This will lead you to the username and the password (which will be in the instance’s boot log file). From there you can log into the WordPress instance over SSH with the username “bitnami” and your AWS private key.

3. Load and check the database

Log into the WordPress instance and use the control panel to load your MySQL dump into WordPress. Switch to the site view, and start scrolling through the blog posts and other links.
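
If you’d rather load the dump from the shell instead of the control panel, the mysql client on the instance can do it directly. This is a minimal sketch (Python wrapping the client); the database name and dump filename are assumptions based on a typical Bitnami WordPress image, so check yours first:

#!/usr/bin/env python3
# Load the cleaned-up MySQL dump into the WordPress database from the shell.
# The -p flag makes mysql prompt for the database password on the terminal.
import subprocess

DATABASE = "bitnami_wordpress"     # assumed Bitnami database name -- verify on your instance
DUMP_FILE = "old-site-dump.sql"    # the edited dump from Part 2

with open(DUMP_FILE) as dump:
    subprocess.run(["mysql", "-u", "root", "-p", DATABASE],
                   stdin=dump, check=True)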

In my case, I found about a dozen posts that were still broken. This sent me back to the raw database edit (see Part 2) to re-edit the database text file dump. I edited out the broken records, re-dumped the database, and started again at step 1 above.

Once you have a valid WordPress site in your AWS instance, it’s time to get that WXR file we need for the import into WordPress.com.

4. Export the valid WordPress blog

Jump into the WordPress control panel, and use “Tools -> Export” to create a WXR file and download it to your computer. Once you’ve done this, you can spin down the AWS instance using the AWS console. Use “Terminate” so the EBS volume will be released as well.

We’re almost done. Next time, creating and loading the site into WordPress.com.

