Archive for category best practice

IPv6 – an interesting address plan (UCSD)

Last year I came across an interesting IPv6 address plan, from the University of California San Diego (UCSD.EDU).  Their networking group presented their IPv6 implementation status and address plan at the annual on-campus IT conference.  Their address plan has some interesting features that I haven’t seen elsewhere.

UCSD is a large campus and currently has an IPv4 /16 (“class B”) and multiple IPv4 /24 assignments. UCSD also has an IPv6 /32 assignment.  The campus spans about 2000 acres and serves about 30,000 students. The campus is large enough that having intra-campus geographically-based routing is useful, and there are about 30 main network nodes identified for this use.

UCSD’s address plan is:

2607:f720:LR0U:UUSS:<host>
  • 2607:f720::/32 is UCSD’s assigned IPv6 prefix
  • LR is actually vlrrrrrr in binary
    • “v” bit is 0 (zero) for this version of the address plan
    • “l” bit is “local”, meaning that any packet to or from this address is to be dropped at the campus net boundary
    • “rrrrrr” is 6 bits that indicate the campus region, or major network node
  • there are four zero bits at /40
  • UUU identifies an organizational unit (department, lab, etc)
  • SS provides 256 separate subnets per organizational unit
  • <host> is a 64 bit Interface ID
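As a sketch of how those fields pack together (the field widths come from the bullets above; the function names and sample values are mine, not UCSD's actual tooling):

```python
import ipaddress

PREFIX = 0x2607f720 << 96  # 2607:f720::/32, UCSD's assigned prefix

def ucsd_address(local, region, org_unit, subnet, iface_id):
    """Pack the plan's fields: 2607:f720:LR0U:UUSS:<host>.
    LR = v (1 bit, 0) + l (1 bit) + rrrrrr (6-bit region), then 4 zero
    bits, a 12-bit org unit (UUU) and an 8-bit subnet (SS)."""
    assert 0 <= region < 64 and 0 <= org_unit < 4096 and 0 <= subnet < 256
    assert 0 <= iface_id < 2 ** 64
    lr = (int(local) << 6) | region                  # "v" bit stays 0
    fields = (lr << 88) | (org_unit << 72) | (subnet << 64)
    return ipaddress.IPv6Address(PREFIX | fields | iface_id)

def is_local(addr):
    """True if the 'l' bit is set (traffic dropped at the campus boundary)."""
    return bool((int(ipaddress.IPv6Address(addr)) >> 94) & 1)
```

For example, `ucsd_address(local=True, region=5, org_unit=0xABC, subnet=0x12, iface_id=1)` yields `2607:f720:450a:bc12::1`, and `is_local` reports `True` for it.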

This plan has a few unique features I haven’t seen in any other IPv6 address plan: a versioning scheme and the “local” bit. Note that there are also 4 bits (at /40) that are defined as “0” (zero).

The “version” bit does cut the present number of addresses in half, but that still leaves an astronomical number of addresses available, with the flexibility of having completely different address plans in the future, all coexisting.

The “local” bit is a kind of RFC-1918 (or at least pseudo-NAT) replacement.  Any address with the “l” bit set will be unreachable from the “outside” and unable to reach “outside”, since all traffic to or from such addresses is dropped at the campus network boundary.

They will also be delegating /56s or /60s to clusters, virtual machines, etc. Since UCSD (and SDSC.EDU) run a fair number of supercomputers and clusters of machines, being able to delegate large subnets is useful.

(My company is using OpenStack to build our own private cloud.  OpenStack wants to dynamically DHCP large groups of machines as needed, so I can see why UCSD is reserving these large blocks.)

Having so much address space available offers all kinds of opportunities to encode information into the addresses themselves. Only time will tell if this is a good use of the large address space or not.


IPv6 – address planning and the structure of an IPv6 address

Defining an IPv6 address plan is an important process.  Whatever you create will live on for years.  Some analysis and thought up front can save time and pain later.

Before we dive into addressing plans, it is useful to look at the actual structure of an IPv6 address.  While there’s lots of talk of “340,282,366,920,938,000,000,000,000,000,000,000,000 unique IP addresses”, that sort-of assumes that all addresses are usable (for any purpose).

Creating an address plan for all that would be a truly daunting task :-)  Fortunately (for our purposes) a lot of the space is reserved and there’s some internal structure that we can take advantage of to simplify creating an address plan.

Over the years, many kinds and flavors of IPv6 addresses have been defined and some later removed (“deprecated”), such as “Site-local Unicast”. Also, restrictions or better definitions have been made for some address parts, such as the “Interface Identifier”, which will become important below.

Before we start, go read RFC4291.  Go ahead, I’ll wait.  Really, go read (or at least skim) it. You want to get a few things from this RFC…  First, the standard hex notation for IPv6 addresses (Sec 2.2).  Second, the prefix notation (Sec 2.3). And third, which will become important later, is Section 2.5.1, which specifically defines the size of the Interface ID as 64 bits.
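A quick illustration of those three points, using Python’s `ipaddress` module (a sketch of my own, not anything from the RFC itself):

```python
import ipaddress

# Sec 2.2: the same address in full and compressed hex notation
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr)                       # 2001:db8::1 (longest zero run becomes ::)

# Sec 2.3: prefix notation, written address/prefix-length
net = ipaddress.IPv6Network("2001:db8:abcd::/48")
print(net.prefixlen)              # 48

# Sec 2.5.1: a /64 prefix leaves exactly 64 bits for the Interface ID
print(128 - ipaddress.IPv6Network("2001:db8::/64").prefixlen)   # 64
```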

Now let’s look at a regular old IPv6 address.  From RFC4291:

   The general format for IPv6 Global Unicast addresses is as follows:

   |         n bits         |   m bits  |       128-n-m bits         |
   +------------------------+-----------+----------------------------+
   | global routing prefix  | subnet ID |       interface ID         |
   +------------------------+-----------+----------------------------+

Where does the global routing prefix come from?  This is the address assignment for your network which comes from either your ISP for provider aggregatable address space (PA-space), or from a Regional Internet Registry (RIR)  for a provider independent address space (PI-space) assignment.

If you’re a home user, IPv6 tunnel user, or a (small) end point business, you’re likely to have your address assigned from the pool that was assigned to your upstream ISP, i.e. provider aggregatable (PA) space. The drawback here is that if you change providers, your IPv6 addresses are going to change.

If you’re an ISP, not-small company or any organization that is multi-homed through multiple ISPs, you’re going to want provider independent space. Each RIR has different policies for address assignments, including the size of the assignment.

The important thing about that prefix is that you have no real control over it, either its size or its content. It is assigned to you and that’s it.

So, the Interface ID is always 64 bits, and the global routing prefix is fixed and assigned.  That means that all you really need to worry about to create an IPv6 addressing plan is the subnet ID. Everything else is of a predetermined size, and in the case of the prefix, the content is also fixed.
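So the arithmetic of an address plan reduces to a single number: how many bits sit between your assigned prefix and the Interface ID. A small sketch (the function name is mine):

```python
def subnet_id_bits(prefix_len):
    """Bits available for the subnet ID: everything between the
    assigned prefix and the fixed 64-bit Interface ID."""
    return 64 - prefix_len

# e.g. a /32 assignment gives 32 bits of subnet ID (about 4 billion /64s),
# a /48 gives 16 bits (65,536 /64s)
for plen in (32, 48, 56):
    print(f"/{plen}: {subnet_id_bits(plen)} subnet bits, "
          f"{2 ** subnet_id_bits(plen):,} /64 subnets")
```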

The IPv6 address plan is really about how big your subnet ID is, and how it is broken down by purpose, location or any other information you may want to encode. Which is what we’ll look at next time.


IPv6 – address plans

Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.

– The Hitchhiker’s Guide to the Galaxy, Douglas Adams

The IPv6 address space offers some challenges to the network architect. It’s vastly different in scope and scale from our address-constrained IPv4 world.  Network Address Translation (NAT), subnets-of-subnets, and other familiar workarounds just aren’t needed or helpful.

The biggest challenge in creating an IPv6 address plan may be overcoming decades of IPv4 planning experience.  So much of “best practice” in IPv4 is exactly “worst practice” in IPv6.

Over the next several posts, I’ll be looking at how to create IPv6 address plans, registrar recommendations, IETF Best Current Practices, and practical considerations. I’ve found a few interesting address plans from some research organizations, too. While most of this won’t be needed by the home user, it may help understand what you’re seeing from your home IPv6 network.


Security – why programmers should study computing history

You can now add LinkedIn, eHarmony and last.fm to the long list of major sites that have had poor password security in their user database designs.  The saddest part is that in the case of LinkedIn, at least, this was apparently completely avoidable. (I haven’t found enough details to comment on the others, yet.)

Protecting stored user passwords is not rocket science.  This problem was pretty much solved in the 80s and 90s: Use a salted one-way hash function of sufficient strength to resist a dictionary attack.

(LinkedIn’s mistake was to use hashes, but to not salt them. )

That’s it.  Really.  UNIX has been using a salted hash since about 1985, initially with a hash based on DES. Since that time, as computing speeds have increased, new (salted) hash functions based on MD5, Blowfish, and SHA-2 have all been introduced.

In other words, stored password security has been a solved problem for at least 25 years. The concept is the same, only the algorithms have needed to be updated as Moore’s Law has dictated.

This is just one reason that programmers (and sysadmins) should study history, if only the history of computer security. Oh, if you’re not a cryptologist, for security-critical functions, please use well-vetted library functions.
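For instance, in Python the standard library already provides everything needed for a salted, deliberately-slow hash. A sketch (the function names are mine; the library calls are stock `hashlib`):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salted one-way hash: PBKDF2-HMAC-SHA256 from the stdlib.
    A fresh random salt per user defeats precomputed dictionaries."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest            # store both alongside the user record

def verify_password(password, salt, stored_digest, iterations=100_000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, stored_digest)   # constant-time compare
```

Note that the same password hashed with two different salts produces two different digests, which is exactly the step LinkedIn skipped.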



Speaking as if you are being translated can help your native-language conversations

Tonight I was out after dinner with some of my colleagues from Japan. With the help of translators we were discussing both personal and work things, and I noticed that the conversations were more focused than they might be when everyone speaks the same language. We have some absolutely wonderful bi-lingual folks in our offices, some of whom are full-time translators, and some who often serve as translators in addition to their regular jobs. Over the past years they’ve helped me become more adept at working with translators and a more effective communicator.

Since in IT we are often working with our customers or users, you could almost say that we are always working with translation. The skills that make working with a translator and a person who doesn’t speak your language effective will also help in your communications with others who speak your language, but might not be part of your “culture” (IT).

Having a translator in the conversation changes the way you listen, think and express your ideas. I believe that we could learn from this and improve our regular (non-translated) conversations.  When there is a translator, you especially learn to do four things: listen carefully; think about what you want to say before you say it; consider how the idea might be received by the listener, avoiding ambiguous or unclear thoughts that might lead to misunderstanding; and articulate your ideas concisely and directly.

Listening is key. You must focus not only on the words being spoken by the translator, but before that you also need to listen to the other speaker, while the translator is listening.  Watch and listen to the speaker, not the translator.  Understand the body and facial language of the speaker and get a sense from them about which ideas (in the sequence) are most important.  While they are speaking, pay most of your attention to them, not the translator.  When the translator begins speaking, pay attention to both the translator and the speaker, working to keep everyone involved in the conversation.

When it is time for you to respond, but before you speak, the most important thing is to make sure that you have a completely formed thought (or just a few) that you want to express. You need to think about the idea and how to communicate it clearly, before you open your mouth.  You shouldn’t be trying to expand on or complete your half-formed idea while you’re in the middle of a sentence. Before you speak, know what you want to say, and how you want to say it.

Now that you know what you want to say, you have to decide how to say it. Plan your sentences, plan the sequence of ideas, and consider how to avoid ambiguity or misunderstanding. This is where knowledge of the other person’s language, culture, (business) environment and your relationship with the other person is especially helpful. If I absolutely know a specific word in the other language that helps express the idea completely, I may use it to help in translation or understanding. If there is a term that I know has a special meaning or is used in the office or the company in a special way, I might want to use that word or term. If the other speaker and I have a common background, such as prior conversations or projects we’ve worked on together, I may reference those.

Finally, it is time to open your mouth. Be concise. Speak in reasonable-sized, self-contained “sound bites”. Don’t go on too long without stopping to a) give the translator time to translate and b) see whether the other person wants to speak.  No long-winded sentences, no rambling thoughts. Don’t waste the translator’s efforts, don’t expect them to remember a complete five minute monologue with eight bullet points before they begin translating, and don’t make it impossible for the other speaker to interrupt if needed. While you are speaking, pay attention to the other speaker as much as (or more than) the translator, looking for their reaction. This will help you understand if your ideas are being understood and how they are being accepted (or not).  All three of you are in the conversation, but it is primarily a conversation between the two speakers.

The things you need to do to effectively work with a translator can also improve your communications with other people speaking the same language: Listen well, form one or a few complete thoughts, think about how you want to say them, and express them concisely.


IPv6 – source address selection

When we did our IPv6 sprint earlier this year, one of the biggest surprises (and sources of confusion) was how we needed to deal with multiple IPv6 addresses per network interface. The confusion wasn’t about having multiple addresses, it was predicting which address would be used as the source address when sending packets. Almost everyone was already familiar with “VIFs” (Virtual Interfaces) or equivalent from Solaris, Linux or other operating systems. But VIFs don’t have the problem of needing to select a source address.

The interesting issue is that the source address you must select depends on the network path between you and your destination. The same source computer shows up as different IPv6 addresses on different destination systems.

Since source addresses are the basis for many security mechanisms, such as rules on network firewalls and destination host iptables configurations, you need to know which address a source host will use in several different cases. This makes managing source-host-specific firewall and iptables rules….. complicated.

Fortunately, the need to be able to predict (or configure) the source address was recognized early on in IPv6 development, and rules for selecting IPv6 source addresses were documented in RFC 3484 (2003). However, like many RFCs, it is a great specification, but is light on readability and explanations. The RFC also has no specification for the implementation details, such as the user interface for the “User Configuration Table” which allows the system administrator to change the default behavior.

Fortunately, at least for Linux, one of the developers of “glibc” (which implements the C-library interface to the network stack) has written about these issues, and there are some good articles about the specifics of the Linux RFC 3484 implementation. That’s the good news.  The bad news is that it is still complicated.
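To get a sense of why, here is a sketch of just one of RFC 3484’s eight selection rules (rule 8, longest matching prefix) applied to a set of candidate source addresses. The real algorithm also weighs scope, deprecation, labels and precedence; the function names and addresses below are mine:

```python
import ipaddress

def common_prefix_len(a, b):
    """Number of leading bits shared by two IPv6 addresses."""
    diff = int(ipaddress.IPv6Address(a)) ^ int(ipaddress.IPv6Address(b))
    return 128 - diff.bit_length()

def pick_source(candidates, dest):
    """RFC 3484 rule 8 only: prefer the candidate source address
    sharing the longest prefix with the destination."""
    return max(candidates, key=lambda c: common_prefix_len(c, dest))
```

Given candidates `["2001:db8:1::10", "2001:db8:2::10"]` and destination `2001:db8:2::1`, rule 8 picks the `2001:db8:2::` address, so the same host presents different source addresses to different destinations.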

Source address selection is controlled by the User Configuration Table, which I’ll show in a later post.  After that, I’ll cover how this adds even more weight to the argument that host-IP-based access restrictions need to be revisited (or just not used) in IPv6-capable networks.


IPv6 and MacOS X Lion – “Hampered Eyeballs”

As part of the IPv6 sprint at work last month, I ended up doing a lot of IPv6 research. For my part, I spent a lot of time researching “customer issues” and MacOS issues in addition to the purely technical work.

When I started the sprint, my laptop was on MacOS X Snow Leopard, which I used for all my home IPv6 work. Halfway through the sprint, I upgraded to MacOS X Lion.

The upgrade to Lion went well, but Apple has changed the behavior of some IPv6 features, and I personally would have to consider Snow Leopard as a better IPv6 platform than Lion.

Apple didn’t “break” IPv6 in Lion, but they did introduce a new problem, which has been dubbed “hampered eyeballs”.

https://labs.ripe.net/Members/emileaben/hampered-eyeballs

I’ve noticed some newly-hampered IPv6 web browsing since the upgrade.  Some sites that came back solidly on IPv6 100% of the time now come back as IPv4 up to 20% of the time. (Thanks, IPvFox!)

This has lots of implications for how consumers will see the new Internet, especially during the transition.  According to some anecdotal remarks on some IPv6 mailing lists, this is being used as an excuse by some companies to delay (even more) any IPv6 transition or even dual stacking!

This last week was the Game Developers Conference in San Francisco; next week is a global IPv6 meeting in New Jersey. I should have lots more “corporate” IPv6 info in the next 10 days.


IPv6 “sprint” – background and results

The last two weeks at work have been some of the most fun in the past few years. A few months ago I moved from management back to my first love: deep technical work. In my new position I’m responsible (with a co-worker) for technical strategy, creating our Enterprise Architecture, and forward-looking technical projects. We’re also tasked with finding new ways to collaborate and take on projects as well as take a hard look to ensure that IT is supporting the rest of the business.

For some of these, we act as facilitators for IT projects, even though we aren’t in the management chain.

IPv6 has been one of my “back burner” projects for almost a year. There is a business mandate that we must have IPv6 connectivity to one of the inter-corporate networks by 1 April. A select set of our internal users need to have IPv6 connectivity to business applications that will only be available over IPv6 via this network.

To prepare for this, we had a need to ramp up IPv6 knowledge from almost nothing, to ready to plan a limited IPv6 deployment next month.

We decided to try a new project methodology (loosely) based on agile concepts: we performed IPv6 testing and deployment preparation as a “sprint”. We got 12 of our most senior system and network admins together in a large conference room with a pile of hardware, a stack of OS install disks, a new IPv6 transit connection and said, “Go!”.

No distractions, no email, no phone calls. Just 12 people off in a completely different building, in a big room with a pile of gear and the mandate to “explore IPv6” and learn enough to be comfortable planning a limited IPv6 deployment at the end.

It was great seeing people from different IT departments who usually specialize in Linux, MS Windows, VMWare, networking, security, etc. all come together to explore IPv6 on all these platforms, bring up services, test, find vendor bugs :-) and in general build a standalone IPv6 lab from scratch.

We truly did start from scratch; we started with an empty room, a bunch of tables and chairs, two pallets of PCs, assorted network kit, three boxes of ethernet cables and installation media.

Along the way, all of these people stepped out of their comfort zones, learned about each others’ specializations, and worked together for a common goal that we all created together.

At the end of the 2 weeks, we had a fully functioning dual-stack IPv4/IPv6 network:

  • Routers and switches, firewall and IPv4/6 transit from a new provider
  • Fully functioning Windows infrastructure: AD, DNS, DHCP, IIS, Exchange, etc.
  • Linux infrastructure: DNS, DHCP, syslog, apache, Splunk, Puppet (mostly)
  • Windows Server 2008 and 2008 R2, Windows 7 clients
  • Linux Centos 5 and 6 servers and desktop
  • MacOS Snow Leopard and Lion clients

All the results and everything we learned is documented in a wiki full of IPv6 configurations, hints and tips, debugging info, links to IPv6 info, lessons learned and plans for IPv6 next steps to production. I think we generated about 50-60 pages of new documentation along the way on IPv6, and about 6 pages of notes on the sprint experience itself.

The sprint wasn’t perfect, and we had a few stumbles along the way. But we learned a lot about how to run these kinds of sprints, and we’re pretty sure that we’ll have more of them in the future.

We also had two full weeks of face time with our colleagues from four sites in two states. In some cases we had never met each other in person, but had been exchanging email and tickets for years.

It was an incredibly productive two weeks. We learned a lot about IPv6 and each other, and found new ways to work together.


system logs – analysis (with Splunk)

To recap, a useful system logging solution consists of four components: generation, transport, storage and analysis.

I will argue that if you already have any logs at all, your first step should be to build an analysis capability. This will let you begin to analyze the logs you already have, become familiar with your analysis tool on a smaller dataset, and use the analysis tool to help debug any problems that you encounter while building the rest of the system.

I’ve been a big Splunk fan for years. The Splunk folks understand system and network administration and that shows in the design and capabilities of the product. The free “home” license is a great contribution to the community, too.

There is a lot of good documentation out there on getting started with Splunk, so I’ll focus on what it allowed me to find instead of the details of using it. I encourage you to experiment and try different kinds of searches, you’ll be surprised at what you find.

After starting Splunk, I pointed it at my /var/log directory, which has all the usual system logs, and also all my Apache logs. Splunk indexed about 2 million log events in less than 8 minutes, on my low-power Atom CPU with only 2G RAM and a single 150G IDE laptop disk.

In the first 30 minutes or so, I found (all on a single host, all from the last 30 days):

  • 935 SSH root login attempts
  • 838 attempts to exploit PHP bugs in my web server
  • 20 attempts to buffer overflow my web server
  • over 100K attempts to deliver SPAM or use my host as a mail relay
  • 40 attempts to use MyAdmin scripts (which I don’t have)

So, less than 30 minutes to install Splunk and 30 minutes of playing with the search tool has already paid off :-)

Next steps: get the home router sending its logs to the log server and setting up some Splunk “canned” searches.


System logs

I am a huge system log junkie. Logs are my go-to first place to look when there is a problem of almost any kind. I think they are one of the most under-utilized collections of useful information that a system (or network) administrator can use. System logs can tell you what has happened (system outages, security incidents), what is happening (performance monitoring and debugging) and what may happen in the future (trending).

At one time in the deep past I “owned” the first large-scale system log collection: 10 years (1993-2003) of continuous logs gathered from over 500 hosts, including four major supercomputers. That was one of (if not the first) large scale log repositories and it provided a great data set for log analysis for SDSC.EDU and CAIDA.ORG administrators and researchers.  The log repo was incredibly useful for security research and practical intrusion analysis.

The most important thing to remember is that system logs are created in real-time, and if not captured (and saved), are lost forever.

A useful system logging solution consists of four components: generation, transport, storage and analysis.

A Simple Log Architecture

Fortunately, you don’t have to build an entire complex large-scale system before you start seeing some value. As soon as you begin to generate and analyze a few log sources, you begin getting a return on your time investment. Your syslog system can grow incrementally, as needed and as time (and budget) permit. You can start small and simple and get some value, and then every small improvement or every system (log source) added to the collection just adds more value.

For a single host you can do the entire log solution locally: logs are generated locally, transport is local sockets, storage is on local disk and you analyze with grep (or even Splunk). In a solution like this, most of your incremental improvements will be in making sure that new software is logging as it is installed, and in improving your analysis methods.
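Even the grep stage can be a few lines of script. A sketch that counts failed SSH logins per source address (the pattern matches stock sshd “Failed password” lines; the log path and function name are my own assumptions):

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_ssh_logins(lines):
    """Count failed SSH logins per source address, e.g. from lines
    of /var/log/auth.log or /var/log/secure."""
    per_source = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            per_source[m.group(2)] += 1
    return per_source
```

Feed it `open("/var/log/auth.log")` and the most common offenders fall out of `per_source.most_common(10)`.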

I believe that any collection of more than about 3-5 hosts (or network devices) should have a central log repository. Being able to see everything that is going on in one place and correlate events across the network can be invaluable in trouble shooting problems and interactions between the systems.

I’ll be fixing up the system log situation here at home over the next few weeks, to include gathering and processing logs from all the Linux, Windows, Mac and other devices on the home network. I wonder what I will find as I begin the analysis?

