Archive for category the business of system administration

IPv6 – some interesting folks are making the move

I’m looking at some different ways to measure IPv6 reachability, and I’ve found some interesting sites that have already made the move.

I’ll have more details later, but I’ve been looking at the last 30 days of browsing history on my laptop.  I’m still crunching some numbers, but some interesting sites popped out.  Of course, all the Google properties, Facebook and the like have made the move, but some smaller sites are being more progressive than many of the usual suspects.
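
If you want to do something similar, the basic approach fits in a short script. Here is a minimal sketch, assuming Firefox keeps its history in places.sqlite (copy it out of your profile first, since Firefox locks the live file) and that the dnspython package is installed; the file path below is just a placeholder.

    # Sketch: which hosts in my browsing history publish AAAA records?
    # Assumes a copy of Firefox's places.sqlite and the dnspython package.
    import sqlite3
    from urllib.parse import urlparse
    import dns.resolver

    db = sqlite3.connect("places.sqlite")        # path is a placeholder
    hosts = set()
    for (url,) in db.execute("SELECT url FROM moz_places"):
        host = urlparse(url).hostname
        if host:
            hosts.add(host)

    for host in sorted(hosts):
        try:
            answers = dns.resolver.resolve(host, "AAAA")
            print(host, [a.address for a in answers])
        except Exception:
            pass                                 # no AAAA record, timeout, etc.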

Here are some smaller and more interesting folks that have made the move. Some of these I found from RSS feeds aggregating gaming, beer and art.

NOTE: Some of these may not always come up in your browser via IPv6, especially if you have a Mac, which may suffer from “hampered eyeballs”.  Some of these appear to be in a testing phase, as they have AAAA records, but are not always reachable via IPv6 from all locations, or they may be behind broken load balancers.
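
Having an AAAA record and actually answering over IPv6 are two different things. A quick way to check the latter, sketched here with just the Python standard library, is to attempt a TCP connection to the host over IPv6:

    # Sketch: does this host actually answer over IPv6 on port 443?
    # An AAAA record alone doesn't guarantee reachability.
    import socket

    def reachable_v6(host, port=443, timeout=5):
        try:
            infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                       socket.SOCK_STREAM)
        except socket.gaierror:
            return False              # no AAAA record at all
        for family, socktype, proto, _, sockaddr in infos:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            try:
                s.connect(sockaddr)
                return True           # connected over IPv6
            except OSError:
                continue
            finally:
                s.close()
        return False

    print(reachable_v6("ipv6.google.com"))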

Other than Tom, none of these people are involved in IPv6, but they’ve already made (or started to make) the move. I think it is encouraging that IPv6 has moved from the exclusive province of bleeding edge early adopters to the point that almost anyone can get on board with a little work.  As you can see, some of these seem to still be in transition but they’re heading in the right direction.


Speaking as if you are being translated can help your native-language conversations

Tonight I was out after dinner with some of my colleagues from Japan. With the help of translators we were discussing both personal and work topics, and I noticed that the conversations were more focused than they might be when everyone speaks the same language. We have some absolutely wonderful bilingual folks in our offices, some of whom are full-time translators, and some who often serve as translators in addition to their regular jobs. Over the years they’ve helped me become more adept at working with translators and a more effective communicator.

Since, as IT, we are often working with our customers or users, you could almost say that we are always working in translation. The things that make working through a translator with a person who doesn’t speak your language effective will also help in your communications with others who speak your language but might not be part of your “culture” (IT).

Having a translator in the conversation changes the way you listen, think and express your ideas. I believe that we could learn from this and improve our regular (non-translated) conversations.  When there is a translator, you especially learn to do four things: listen carefully; think about what you want to say before you say it; consider how the idea might be received by the listener, avoiding ambiguous or unclear phrasing that might lead to misunderstanding; and articulate your ideas concisely and directly.

Listening is key. You must focus not only on the words being spoken by the translator, but before that you also need to listen to the other speaker, while the translator is listening.  Watch and listen to the speaker, not the translator.  Understand the body and facial language of the speaker and get a sense from them about which ideas (in the sequence) are most important.  While they are speaking, pay most of your attention to them, not the translator.  When the translator begins speaking, pay attention to both the translator and the speaker, working to keep everyone involved in the conversation.

When it is time for you to respond, but before you speak, the most important thing is to make sure that you have a completely formed thought (or just a few) that you want to express. You need to think about the idea and how to communicate it clearly, before you open your mouth.  You shouldn’t be trying to expand on or complete your half-formed idea while you’re in the middle of a sentence. Before you speak, know what you want to say, and how you want to say it.

Now that you know what you want to say, you have to decide how to say it. Plan your sentences, plan the sequence of ideas, and consider how to avoid ambiguity or misunderstanding. This is where knowledge of the other person’s language, culture, (business) environment and your relationship with the other person is especially helpful. If I absolutely know a specific word in the other language that helps express the idea completely, I may use it to help in translation or understanding. If there is a term that I know has a special meaning or is used in the office or the company in a special way, I might want to use that word or term. If the other speaker and I have a common background, such as prior conversations or projects we’ve worked on together, I may reference those.

Finally, it is time to open your mouth. Be concise. Speak in reasonable-sized, self-contained “sound bites”. Don’t go on too long without stopping to a) give the translator time to translate and b) see whether the other person wants to speak.  No long-winded sentences, no rambling thoughts. Don’t waste the translator’s efforts, don’t expect them to remember a complete five-minute monologue with eight bullet points before they begin translating, and don’t make it impossible for the other speaker to interrupt if needed. While you are speaking, pay attention to the other speaker as much as (or more than) the translator, looking for their reaction. This will help you understand whether your ideas are being understood and how they are being received (or not).  All three of you are in the conversation, but it is primarily a conversation between the two speakers.

The things you need to do to effectively work with a translator can also improve your communications with other people speaking the same language: Listen well, form one or a few complete thoughts, think about how you want to say them, and express them concisely.

 


IPvFox, my favorite new plug-in

Now that I have a functioning IPv6 network, I can actually “see” how much of the public Internet (or at least how many web sites) is reachable over IPv6. Before I had the home net on IPv6, I was limited to just using DNS queries for AAAA records (over IPv4).

My new favorite Firefox plug-in is IPvFox, which gives me IPv6/IPv4 information right in the URL “awesome bar”.  I can tell at a glance whether the current page’s data was served over IPv6, IPv4 or a mix.
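
If you aren’t running Firefox, you can get a rough approximation of the same information from a script. This sketch (standard library only) pulls a page, collects the hosts referenced in src and href attributes, and reports which of them publish IPv6 and/or IPv4 addresses; it only checks DNS, so it can’t tell you which protocol the browser actually used for each element.

    # Sketch: rough approximation of what IPvFox shows, per page.
    # DNS-only: a host listed as "IPv6+IPv4" may still be fetched over IPv4.
    import re
    import socket
    from urllib.request import urlopen
    from urllib.parse import urlparse

    page = "http://ipv6-test.com/"
    html = urlopen(page).read().decode("utf-8", "replace")

    hosts = {urlparse(page).hostname}
    for url in re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(url).hostname
        if host:
            hosts.add(host)

    def has_addr(host, family):
        try:
            socket.getaddrinfo(host, 80, family)
            return True
        except socket.gaierror:
            return False

    for host in sorted(hosts):
        v4 = has_addr(host, socket.AF_INET)
        v6 = has_addr(host, socket.AF_INET6)
        label = ("IPv6+IPv4" if v4 and v6 else
                 "IPv6" if v6 else
                 "IPv4" if v4 else "unresolved")
        print(host, label)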

Here are a few images showing which sites/pages are loaded via IPv6, IPv4, or both.

This first one is interesting, ipv6.google.com. As you can see from the image, the main page (URL) is IPv6 (big green “6”), but other parts of the page loaded via IPv4 (little red “4”). Clicking on the 6/4 image in the URL bar shows you which parts loaded which way. Note that plus.google.com loads over IPv4.

This next one is ipv6-test.com. Again the main page loads via IPv6, but the other content on the page is loaded from a combination of other sites running IPv4 and IPv6.

Here’s another IPv6 test site, test-ipv6.com. This one uses IPv4 for the main site, and then pulls elements over IPv6 and IPv4.

As one of the newest of Google’s Internet properties, it is not surprising that plus.google.com loads over IPv6, at least in this example. Go back and look at the first example, however, where it loaded over IPv4. Strange…. However, the “+1” system is still IPv4.

As I do my daily browsing, it’s interesting which sites come up over IPv6, and which don’t. I’m seeing more media and social sites on IPv6, and very few vendor sites. I had expected to see much more IPv6 from the big network kit vendors, but they are noticeably missing. Some of them “do” IPv6 on a separate host (ipv6.google.com, for example).

Not surprisingly, the main DREN web site is 100% IPv6.

Cisco, Juniper, IBM, Apple and Dell are all 100% IPv4. Many Mozilla sites, a few US Government sites (the Department of Education, for example), and even Fark! are all solidly IPv6.

I wonder if the social media sites will lead the charge, or the vendors? Right now, I’m not seeing a lot of commitment from companies that I would hope have a lot more IPv6 experience.

They are going to want my company’s money for new network gear in the coming year, and I’m going to be asking hard questions about why they don’t have their own main sites running IPv6.



Wells Fargo/Wachovia merger – culture is key

One of this morning’s keynotes at the Gartner Datacenter Conference (#gartnerdc) was an on-stage interview with Scott Dillon, EVP, Head of Technology Infrastructure for Wells Fargo. He was interviewed about the Wells/Wachovia merger and the challenges faced by the organization.

While the talk was full of sound bites about scale, merger strategies and budgets, the discussion came back to culture over and over.

On the scale and technology side, there were tidbits like these:

  • Wells Fargo employs 1 in 500 in the US;
  • IT had 10,000 change events per month, before the merger;
  • They have a physical presence within 2 miles of 50% of the US population.

But it was on the management side that I found the most interesting information.

Before the merger, there were clear guidelines, such as “if we have two systems, A and B that are doing the same thing, we will pick the best, either A or B. No C options.” This was a merger of equals, at least in terms of the technology. They chose the best of the two orgs, then committed to making that the One True New System for everyone. They ended up with an almost 50/50 split of technology from the two companies.

But, no matter where the talk went in management and technology, it just kept coming back to culture. Building one culture from the best of both was a top management priority for the entire company. Just as they (IT) selected the best tech from each, they (executive management) worked to take the best of the culture from both, to be the foundation moving forward. They had a great advantage, as both companies shared almost all of their core values, so this was a little easier than merging the technology. But there was an explicit decision to do this; it wasn’t left to chance.

Management made “culture” a number one priority. They focused on merging the culture as much as they focused on merging the technology.  They made building communications between the employees an early priority. Very early on, they even created a “culture group” to look at the two cultures and make specific decisions about how to foster the culture merger.

Part of their culture involves employee value. Every company does “exit interviews” when employees leave. Wells does “stay interviews”, where they engage with employees to gather their concerns and let them know how much the company values and appreciates them. Isn’t that better, to find any issues before key people leave? To constantly work to make the work environment better, instead of waiting until it’s too late?

In IT we often get too focused on the technology, and we can claim that “the business” is too focused on profits, or stock price, or some other “business” area.

When was the last time you heard of a business, even a bank, putting its culture among its highest priorities?

More importantly, as IT, when was the last time we put “culture” high on our priority list?



At Gartner Datacenter Conference this week…

I’ll be at the Gartner Datacenter conference in Las Vegas all this week. In my new role at work I’m no longer directly responsible for our US datacenters, but I will be helping to shape our worldwide datacenter and networking strategies (among others). If the conference is anything like last year’s, there will be a LOT of “cloud” in addition to the core topic. It will be interesting to see updates on the major initiatives that large-scale operations like Bank of America, eBay and others talked about last year.

The usual Twitter hashtag for the conference is #gartnerdc. If you’re interested in datacenters, “devops”, “green IT”, “orchestration” or “cloud”, I recommend that you follow the tag.

The IPv6 series will continue as usual next week with posts on Tuesday and Thursday.


Rules for Outsourcing

Outsourcing IT, call centers, component manufacturing and other business functions is now a way of life. A few simple rules can make the difference between great success and terrible failure.

Some outsourcing has gone well, finding suppliers who provide a quality component or service at a better price, or a better service level. Some outsourcing has gone terribly wrong, leading to loss of data, poor quality of service, or even counterfeit aerospace parts.

Outsourcing is not a panacea. Outsourcing is not easy. Outsourcing something can initially be more difficult than doing it yourself. Outsourcing does not always save money, or time.

One of the people I’ve worked for over the last eight years is a pretty smart guy. Under his leadership we have looked at outsourcing carefully selected IT and QA functions throughout most of that time. Some things have been successfully outsourced, some we’ve decided to keep in-house and we have some new projects where it is too soon to tell. So far, we haven’t had any of the outsourcing failures that seem to be so common.

That’s probably due to the care and careful consideration that he has imposed on all outsourcing decisions.

Here are the outsourcing rules under which we operate:

1.    Strategic Benefit: The benefit must be strategically important to our company, so that outsourcing a function improves time to market for a product, avoids capital expenditure, or takes advantage of another company’s economy-of-scale.

2.    Not a core competency: The function that is outsourced must not be a core competency for our company.  For example, knowing how to manage a data center is a core competency, while actually performing the management with company staff is not a core competency.

3.    Contract: Our company must understand the function to a level of detail sufficient to write a complete management contract that will survive the authors.

4.    Relationship:  There must be a compelling reason for the outsourcing vendor to be a strategic supplier to our company, based either on the size of the contract, partial ownership, market sector dominance, beneficial publicity, or technology leadership.

Connoy, 2005



Gartner: By 2012, 20 percent of businesses will have no IT assets

This prediction was from January 2010, and of course predates the recent troubles at Amazon and other cloud providers.  Also, 2011 saw some re-evaluation of “the cloud” as a panacea for all IT ills.  And yes, some companies have made transitions to nearly 100% cloud operations.

Let’s take a closer look at this statement and dive a little deeper into some of the trends behind this prediction.  The key trends are virtualization, “X as a Service” and employee desktop management.

Virtualization is the easy one.  Everyone either has or is in the process of virtualizing wherever possible.  Whether that is virtualizing legacy services, or taking advantage of virtualization features for reliability or redundancy, it is a well-established strategy that has definable benefits.  One key idea from Gartner’s Datacenter and Cloud conference last year was internal virtualization as a required stepping stone to public or private cloud. Now that server, storage and networking virtualization are all solved problems, we’re seeing more interest in virtualizing the desktop and that will dovetail nicely with desktop management.

The next trend is “X as a Service”.  Whether you’re talking Infrastructure, Platform or Software, all of these are making good progress.  Let’s start with Software as a Service.  If you are a startup or smaller business, you could arguably perform most of your back office functions using hosted solutions.  Sales support, HR, payroll, ERP, email and other services are all available from “the cloud”.  More mature and larger organizations are also making more use of these, although perhaps at a slower pace.  Platform as a Service is now mainstream, with an ever-increasing list of offerings and companies making use of them.  Infrastructure as a Service is clearly here to stay, and many companies like Netflix and Foursquare have “bet the ranch” on its viability.

All of the above trends were initially focused on servers and services.  Virtual desktops have been around for a while, and coupled with a new trend they will further decrease the ownership of IT assets.  The new trend is “employee-owned desktops”.  In this model, employees are given a stipend, coupons or other ways to buy their own laptops and/or home computers, which are then used as the employee’s primary interface to corporate resources.  In some models IT still manages the entire machine; more commonly, a standard “virtual desktop machine” is deployed and all company computing runs in the virtual machine.  In all cases, the hardware is owned by the employee, who is responsible for loss, damage and hardware failure.

So what might all this mean for IT organizations in companies that do proceed down this path?

I believe that those businesses will have about the same number of IT staff, but (fewer or) no datacenters, networks or servers of their own.  Their IT staff will be managing virtual assets from Amazon, Rackspace, IBM, HP and other IaaS, PaaS and SaaS vendors.  Their staff members will spend more time creating architectures, devising new solutions and creating new services, using services instead of hardware.  They will spend less (or zero) time racking and repairing hardware and more time creating solutions in their own private clouds, built from other people’s hardware infrastructures.

There will always be local datacenters, especially for high-performance storage, internal-facing apps, and applications where control and provisioning of the network is critical.  Security will remain an important reason not to put everything in the cloud, but this will be an increasingly less important driver for non-cloud systems.  But we will all be increasingly integrating hosted solutions from vendors, designing our solutions to run on other people’s hardware in other people’s datacenters, and managing IT assets that we do not own or physically install.


Netflix: fail constantly

[Sorry for the sporadic posting. I’ve had more travel in the past 7 weeks than in the last 2 years.  I should be back to a more regular schedule soon.]

The “cloud” is still a new and curious beast for a lot of us, especially people who grew up in a more traditional hosting model.  We have several generations of IT workers who have learned everything about hosting on our own hardware and networks.  The flexibility of the cloud is a game-changer, and I’m continually learning new places where “conventional wisdom” will lead you down a difficult path.

Netflix has been kind enough to post their five key lessons from their cloud experiences on their tech blog.  While these lessons may look simple and perhaps obvious in retrospect, there are two that really hit home with me:

1. Prepare to unlearn everything you know about running applications in your own datacenters.

3. The best way to avoid failure is to fail constantly.

First, an entire generation (or maybe two or three) of system and network administrators learned all of what we know about scale and reliability by running our own applications on our own servers in our own datacenters using our own networks.  There are thousands of person-centuries of experience that have created best (or at least “good”) practice on how to be successful in this model, but this has done very little to prepare us to be successful using cloud resources.  In fact, it might even be working against us.

We’ve all got a lot to un-learn.

Second, in the olden days, uptime was king, and a high time between reboots (or crashes) was considered a mark of a capable system administrator.  Failure was to be avoided at all costs, and testing failover (or disaster recovery) was done infrequently, if at all, due to the high impact and high costs.  We did all get used to a more frequent reboot cycle, if only to be able to install all the needed security patches, but that was just a small change in focus, not a complete sea change.

In computing clouds, it is a given and an expectation that instances will fail at random, and the solution is to have an agile application, not to focus on high availability or increasing hardware reliability.  Just as there is continuous development, testing and deployment, there needs to be continuous failover testing.  Netflix created a tool (Chaos Monkey) specifically to force random failures in their production systems! That’s right, they are constantly creating failures, just to continuously test their failover methods, in the live production system.
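
Netflix’s actual tool does much more, but the core idea fits in a few lines. Here is a hedged sketch (this is not Netflix’s Chaos Monkey): it assumes boto3 credentials and a hypothetical chaos-opt-in tag marking instances that are fair game, and it terminates one of them at random.

    # Sketch of the chaos-monkey idea: terminate one opted-in instance at
    # random. Illustration only; the "chaos-opt-in" tag is hypothetical.
    import random
    import boto3

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos-opt-in", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instances = [
        i["InstanceId"]
        for r in resp["Reservations"]
        for i in r["Instances"]
    ]
    if instances:
        victim = random.choice(instances)
        print("terminating", victim)
        ec2.terminate_instances(InstanceIds=[victim])

Run on a schedule, something like this forces every service to prove, continuously, that it can survive the loss of any single instance.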

That’s a) really hardcore, b) really scary and c) really cool.

That’s one way to put your reputation on the line.  And it points out just how you need to do some very non-intuitive things, and unlearn decades of good practice to be successful in the cloud.



An interview with Fred Lloyd (AA7BQ), publisher of QRZ.com

This interview was prompted by QRZ.com‘s recent move into “the cloud”.   QRZ means “Who is calling me?” or “You are being called by ___”, which is very appropriate for what is widely considered to be the largest online community for amateur (“ham”) radio in the world.  Moving this resource from traditional hosting into the cloud is an interesting comment on the readiness of the cloud to actually deliver for a community that has come to depend on this resource.

The computer and ham communities have a long history together. The early “hacker” community had quite a few ties to ham radio, as both groups were involved with experimenting, especially with electronics. In fact, one possible origin for the term “hacker” is its use by the amateur radio community from the 1950s to mean “creative tinkering to improve performance”.  This continuing curiosity and desire to build and improve is a hallmark of these communities.

I’ve encountered a few system and network administrators who are hams, and vice versa.  QRZ’s founder and publisher, Fred Lloyd, is no exception.  Fred spent much of his career on the cutting edge of Internet adoption, working for Sun and other companies in Silicon Valley and elsewhere.  As it turns out, he’s been a ham radio operator for about as long.

Fred was kind enough to do an email interview with me earlier this week to discuss system administration, QRZ, ham radio, the Internet and his experiences in moving to the cloud.



The more things change (wiretapping the internet)

I feel like we’ve been here before.  The Administration is planning to sponsor legislation to make it easier to (legally) “wiretap the Internet”.  Based on what little has been written, it appears that Justice is arguing that CALEA (and more!) should apply to the Internet.  If that’s the case, then every manufacturer of Internet routing and switching gear would be required to build in the capability for law enforcement to activate a “tap” remotely and with no way for the provider to be aware of it.  Oh, and LE gets decryption assistance, too.

This will not end well.  I don’t have lots of answers, but I’ve got a lot of questions.  Feel free to answer them in the comments 🙂

1. Why bother with the legislation?  The Bush Administration already illegally authorized wiretapping.  Oh, you want the evidence admissible?

2. Which equipment will this apply to?  Large core routers and switches, certainly.  What about my home router?  What about equipment manufactured in China, Russia, Taiwan?  So, all networking gear has to have government approval before installation?  What about a VM appliance, or a home-grown BSD-based firewall?  Will it become illegal to create your own firewall, or use an open source based router/firewall?

3. How will the requirements to support decryption work?  Will US citizens (and companies) be forced to use NERF’ed encryption?  Will the end-to-end SSL/TLS model be deliberately broken to force enabling of a man-in-the-middle attack?  How will this play against PCI requirements to use best practices?  We’re already seeing massive data spills of credit card and personal data, and the common denominator is often poor or nonexistent encryption.

I don’t claim that there is no need for increased ability for law enforcement to collect and process digital evidence, including network traffic.  That need is real, and in our collective best interests.  But this legislation, as currently described, is impractical and over-reaching, prone to abuse and unenforceable, and completely changes the balance of power between individuals and the government.

