Archive for category scale or die
As they say in the military, “If it’s stupid and it works, it isn’t stupid”.
This is a low-tech, labor-intensive way to get a victim’s email server blacklisted at a major public email service, using the victim’s own public forums. The email provider was very helpful in getting this sorted out, and it’s not clear that this “attack” is specific to them.
(This situation can also happen “accidentally” if a number of users subscribe to your forums, change their minds and then report the notices as SPAM instead of unsubscribing from the forums. That doesn’t seem to be the case in this instance.)
- Sign up for a few free email accounts with a public email provider. Get as many as you can, perhaps at least 20. Get some friends to help you. More is better.
- Go to the victim’s public forum servers and use each email account to sign up for one (or in some cases more than one) forum account per public email account. This gives you 20-100 forum accounts. Let’s use 20 as the lower bound and 100 as the practical upper limit.
- As an alternative, if the forum doesn’t use opt-in confirmation, just subscribe a few hundred random people to get the forum notifications. Let them do the work for you.
- Set each forum account to send an email notification for every forum update, or as many updates as possible. Some forum systems let you “watch” individual threads; some let you “watch” the entire forum system, sending one email for every other user’s post.
- In a moderately large forum system, there could be perhaps 1 update per minute, so 60 per hour. That’s 60*20 accounts (1200) or, in the worst case, 60*100 accounts (6000) emails per hour going out from the forum system, perhaps through the victim’s outbound SMTP server. Either way, the target public email system is seeing a lot of email coming from one domain or IP range very quickly.
- If the rate alone isn’t enough to get the forum or SMTP server blacklisted, then go into each of the public email accounts and mark ALL the forum notifications as SPAM. Or if you subscribed a few hundred random people to the notifications, they’ll do the work for you!
- The combination of the high email rate and the 1200-6000 SPAM complaints should be enough to get either the forum server or the victim’s outbound SMTP server blacklisted.
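The back-of-the-envelope arithmetic above is simple enough to sketch as a quick calculation (a minimal sketch; the one-update-per-minute rate and the 20/100 account counts are the rough estimates from the steps above, not measured numbers):

```python
# Rough estimate of outbound notification volume, using the
# ballpark numbers from the scenario above.

UPDATES_PER_HOUR = 60  # ~1 forum update per minute on a moderately busy forum

def emails_per_hour(accounts: int, updates_per_hour: int = UPDATES_PER_HOUR) -> int:
    """Each watching account triggers one notification per forum update."""
    return accounts * updates_per_hour

low = emails_per_hour(20)    # lower bound: 20 forum accounts
high = emails_per_hour(100)  # practical upper limit: 100 forum accounts

print(f"{low}-{high} notification emails per hour")  # prints "1200-6000 ..."
```

Every one of those notifications is also a potential SPAM complaint, which is why the complaint count tracks the hourly email rate so closely.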
Note that each and every part of this situation is working as intended. It’s only when they are combined that you get problems. (Unless the forum doesn’t do email address opt-in verification, in which case it’s all on you.)
This “attack” depends on these things:
- lots of manual labor, either by yourself or with some friends, or even some random victims
- a forum system that allows one user to cause the system to send lots of email based on the behavior of many people
- a moderately busy forum system
- a public email system that weighs sending rate and user complaints more heavily than message content
- a public email system that the victim’s user base depends on, as in “must communicate with users in that public email system”
Fortunately, this is relatively labor-intensive, and not amenable to automation.
Countermeasures are left as an exercise for the reader 🙂
One of this morning’s keynotes at Gartner Datacenter Conference (#gartnerdc) was an on-stage interview with Scott Dillon, EVP, Head of Technology Infrastructure for Wells Fargo. He was interviewed about the Wells/Wachovia merger, and the challenges faced by the organization.
While the talk was full of sound bites about scale, merger strategies and budgets, the discussion came back to culture over and over.
On the scale and technology side, there were tidbits like these:
- Wells Fargo employs 1 in 500 in the US;
- IT had 10,000 change events per month, before the merger;
- They have a physical presence within 2 miles of 50% of the US population.
But it was on the management side that I found the most interesting information.
Before the merger, there were clear guidelines, such as “if we have two systems, A and B that are doing the same thing, we will pick the best, either A or B. No C options.” This was a merger of equals, at least in terms of the technology. They chose the best of the two orgs, then committed to making that the One True New System for everyone. They ended up with an almost 50/50 split of technology from the two companies.
But, no matter where the talk went in management and technology, it just kept coming back to culture. Building one culture from the best of both was a top management priority for the entire company. Just as they (IT) selected the best tech from each, they (executive management) worked to take the best of the culture from both, to be the foundation moving forward. They had a great advantage, as both companies shared almost all of their core values, so this was a little easier than merging the technology. But it was an explicit decision; it wasn’t left to chance.
Management made “culture” a number one priority. They focused on merging the culture as much as they focused on merging the technology. They made building communications between the employees an early priority. Very early on, they even created a “culture group” to look at the two cultures and make specific decisions about how to foster the culture merger.
Part of their culture involves employee value. Every company does “exit interviews” when employees leave. Wells does “stay interviews” where they engage with employees to actually gather their concerns, let them know how much the company values and appreciates them. Isn’t that better, to find any issues before key people leave? To constantly work to make the work environment better, instead of waiting until it’s too late?
In IT we often get too focused on the technology, and we can claim that “the business” is too focused on profits, or stock price, or some other “business” area.
When was the last time you heard a business, a bank, even, put their culture as one of their highest priorities?
More importantly, as IT, when was the last time we put “culture” high on our priority list?
Last week I ranted about how much media I was consuming (or at least wading through) as opposed to writing. Part of that was the 436 RSS feeds I had in Google Reader.
As of this morning, I have 93 feeds. And the quality of what I’m seeing is higher.
I’m implementing a suggestion from one of the smartest people in my life. I’m crowd sourcing my RSS feed.
I did this by removing most of the source news sites themselves, and keeping (and adding) individuals. I’ve dropped everything from Pharyngula to Gizmodo, Engadget to Ars Technica. Goodbye NYT and CNN. I don’t need you anymore, at least not as RSS feeds.
You know why? Because there are a lot of smart people out there who read your content. Each individual doesn’t read all your content, every day, but enough of them see some portion of your content, and are moved to write about it (on their blogs) or “share” it themselves in Reader. By following those individuals one way or another, I get the best of your content, without wading through everything.
I’m giving these individuals editorial control of my daily news, instead of each of you. Why? Because they are interested in sharing what they think is the most important, or funny or interesting content each day. They don’t care how many hits you get, or your ad revenue. They care about sharing what they think is the very best of the Internet. They’re willing to attach their identity to their opinions about what is “good” and make recommendations about your content.
But this is a two-way street. I’m more careful, more targeted, more thoughtful about what I share. I don’t want to pollute your RSS feed (if you’re following me) with too much low quality dross to wade through.
So, what have I kept, or added to my own RSS feed reader?
I’ve kept all the LOPSA Member Blogs; individuals writing about the things they are most passionate about, especially system and network administration. I’ve kept a few specific humor sites that I enjoy, but I don’t even try to read everything, everyday. I’ve kept quite a few blogs by individuals on topics that I enjoy: computer security, system administration, writing, film noir, brewing, etc. I’ve kept all the blogs by the people in my life, friends and family.
I’m continuing to seek out individuals who post interesting things and follow them as individuals, to see what they’re writing, reading (and recommending).
I still have more source sites to prune, and more individuals to add, but this has already made a huge difference in my daily news reading. I made it through my entire RSS feed list in less than an hour Sunday morning, even though I hadn’t read anything for at least four days.
I no longer dread opening my Reader feed and seeing “everything that I’m going to miss”. I’m trusting that you will all find the best of the stories and bring them to my attention.
Thanks for reading teh Intarwebs for me, and sharing the very best. I’ll try to do the same for you.
About a month ago, Server Fault partnered with LOPSA to give 40 Server Fault members free LOPSA memberships based on who had provided the best technical information during the month (as measured by Server Fault reputation).
Server Fault and LOPSA have a lot in common. Both are communities of system administrators, and both are committed to advancing the state of the art in IT. Both are committed to system administration as a whole, not just “Linux admins”, “Windows admins”, “network admins”, etc.
I’ve only been a Server Fault member for a little while, but I have already gotten great value from the community there. I’ve learned some technical things (my Windows-fu really sucks), and most importantly, I’ve learned more about what I would call “new school” system administration and new ways to work with users and their community.
Kyle Brandt is one of the administrators who works behind the scenes to keep Server Fault up and running smoothly, and he also writes about his experiences at the Server Fault Blog.
Server Fault will be having a one day conference for system administrators and operations people this October called Scalability. Check out http://scalability.serverfault.com/ for details!
Kyle was kind enough to take some time from his busy schedule to answer some questions about what it is like to manage such a large and busy system, that serves a community that can be rather demanding at times.
IPv6 has been around since 1998, but has had almost no adoption in the United States. I’ve been aware of IPv6, but haven’t paid much attention to it. Until the last year or so, running v6 wasn’t a trivial task, with few OSes and few home networking products easily supporting it. Successful IPv6 at home required Linux (no problem) and custom home router firmware (still a minor inconvenience).
Then a friend sent me this link about the DoD pressuring network suppliers to demonstrate a commitment to IPv6 by (at the very least) providing v6 connectivity to their corporate web sites. Since the article mentioned an old friend, I called him to get some more info. As we started talking, he told me that his home has been v6 (via a tunnel) for over 3 years. He’s running all the usual OSes at home, and the initial hurdle had been home networking kit. Building his home v6 network would be easy today, as most home network vendors, including LinkSys and Apple have IPv6 capable products.
I started checking the blogs of LOPSA members and found a few that have made the leap. Here are a few posts:
With World IPv6 Day coming June 8, don’t forget to check your (and your ISP’s) readiness: http://test-ipv6.com/ipv6day.html
Haven’t seen much about World IPv6 Day, but the information is out there if you look for it.
Basically, some major Internet services including Google, Facebook, Yahoo!, Akamai and Limelight Networks will offer their content over IPv6 for 24 hours. The goal is to raise awareness about IPv6 and give companies and organizations information and experience that will help them prepare for IPv6 and ensure a successful transition as IPv4 addresses run out.
While all currently shipping operating systems (*NIX, Windows and MacOS) have IPv6 stacks, very few end users (at least in the United States) have actual IPv6 connectivity.
Since incorrectly configured “dual stack” systems can see DNS and connection timeouts, you should visit http://test-ipv6.com/ipv6day.html to find out whether you’ll have any problems on “IPv6 day”.
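Those timeouts happen because a dual-stack host typically prefers the IPv6 address and must wait for each v6 connection attempt to fail before falling back to IPv4. Here’s a minimal sketch of that prefer-then-fall-back logic (the address tuples and the `try_connect` callback are simplified stand-ins; a real client would resolve addresses with `getaddrinfo` and attempt actual socket connections with timeouts):

```python
import socket

def order_addresses(addrs):
    """Prefer IPv6 (AF_INET6) results, as most dual-stack resolvers do,
    keeping the original order within each address family."""
    v6 = [a for a in addrs if a[0] == socket.AF_INET6]
    v4 = [a for a in addrs if a[0] == socket.AF_INET]
    return v6 + v4

def connect_with_fallback(addrs, try_connect):
    """Try each (family, address) pair in preference order.

    `try_connect` stands in for a real connect-with-timeout; it returns
    True on success.  On a host with broken IPv6 connectivity, every v6
    attempt must time out before the v4 address finally succeeds --
    which is exactly the delay users experience."""
    for family, host in order_addresses(addrs):
        if try_connect(family, host):
            return (family, host)
    return None
```

For example, if a site publishes both an AAAA and an A record but your v6 path is broken, `connect_with_fallback` only returns the IPv4 address after burning a full timeout on the IPv6 attempt.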
I’ve asked all the providers in my area, and none (Cox Cable, SpeakEasy DSL) can offer any dates by which they will offer native IPv6. I would have to build an IPv6 tunnel to get IPv6. Hopefully the “last mile” providers will sort this out over the next year.
Over the past few years, I’ve learned a great set of new words from my UK counterparts. Many unfamiliar terms used in the UK have a great history, and sometimes they’re just… so perfect for what we do as system administrators.
Bespoke is such a term. It has many meanings, essentially being “fully custom”, “hand built”, or “hand made”. But it also has deeper meanings, alluding to “exactly fitting your personal needs”, “crafted”, or “personal touch”. It comes, of course, from the tailoring world, where it evokes a sense of old world craftsmanship, a very personal garment, hand measured and hand sewn, just for you.
Bespoke is at one end of the scalability spectrum. A very few garments, made exactly for a single person. Expensive and not very many will be made from any single pattern. Like hand-built systems.
As you move to larger scale production you see much lower costs, but many fewer options and features. Exactly like the IT world.
Bespoke is fully custom, exactly what you desire, no matter the cost. Options and extras. “High touch” support. At the other end of the spectrum you have high scale, mass production: what you need, no more, no less, but at a more reasonable price. It is a compromise solution to meet your most important requirements, but one you’ve decided to accept, usually for faster delivery or lower cost.
There is a time and place for bespoke, but increasingly we need to achieve high scalability, as we are increasingly pushed to do more with less.
There are some problems that just can’t be solved at the low end of the scale spectrum, or where the solutions aren’t cost effective or widely available. For example, High Performance Computing (HPC) clusters of commodity computers are a high-scale alternative. Commodity clusters brought supercomputing to the masses, or at least to most research groups and smaller companies. These high-scale solutions are compromises, but they are good enough, and far more widely available and affordable.
These clusters are great alternatives to the bespoke supercomputers of the past, such as the early Cray machines. Our need to solve larger and larger problems, such as the HPCC Grand Challenge problems, eventually required more horsepower than a single, hand-built, bespoke machine could deliver at any affordable cost.
Moving away from bespoke supercomputers allowed us to scale in two ways: we are able to make very capable systems widely available (at a reasonable cost) and we can grow systems at the high end where cost is not as important, but we need ever-larger capabilities.
As we explore solutions we have to ask ourselves where we need to be on the “bespoke” scale. Most can’t really afford bespoke, and truly most won’t need it. Automation allows us to build high-scale systems that provide most (or at least enough) of the features of a bespoke solution, but at an affordable price.
A few weeks ago the “anti-social” bookmarking site Pinboard (http://pinboard.in/) made the news in a big way. The site experienced hyper-growth due to the news of the possible demise of Del.icio.us. Concerns about the future of Del.icio.us led tens of thousands of people to look for a new place to store and share their millions of bookmarks.
And quite a few of these people chose Pinboard! During one 30-hour period around December 18th, Pinboard received over 7 million new bookmarks, more than had been put into the system during its entire life.
I was able to catch up with Maciej for an interview via email. I wanted to find out more about how Pinboard was operated, and how this huge spike in load had affected administration of the site. Large-scale system administration isn’t always about hundreds of systems; it can also be about tens or hundreds of thousands of users, or unexpected load spikes, or just how you plan for growth.