Archive for category best practice
They all have their uses, but seem to be just too shallow for tech, and life.
Face it, when you need an answer to a technical question or learn about something that isn’t in Wikipedia, chances are that Google will lead you to a blog post. Not a Facebook page (not indexed, and rarely technical). Not Twitter (how much can you explain in 140 characters?) And probably not Google+, either (although there is sometimes good discussion there).
Nope, you’re going to end up at someone’s blog post. Someone who faced the same problem, did their homework, pulled together information from other sources, and solved the problem.
Go to Twitter for breaking news, Facebook for your friends, and Google+ for interesting discussions.
But the next time you solve a problem, how about you contribute to the world-wide-knowledgebase via a blog post somewhere?
Since my first trip to Europe 5 years ago, I’ve been trying to get a chip-and-pin credit/debit card. As far as I have been able to find out, other than a single credit union in DC, there is no way to get a chip-and-pin card in the US. American Express and others have chip-and-signature, but that’s not the same, even if they try to tell you that it is. For example, you can’t use chip-and-signature at unattended gas stations, vending machines or many other places in Europe.
It looks like, finally, the American card industry is willing to truly join the EMV card world, and issue chip-and-pin by 2015. It only took tens of millions of credit card numbers being stolen within a single month or so to get them to move.
Almost all of our credit and debit cards were re-issued to us in January, by several credit unions and other financial institutions. That had to be expensive for all of them, and there is talk of the banks suing Target over their breach.
While this won’t end credit card fraud completely, it will definitely make it more difficult.
Just one more thing to think about as I work on my personal privacy…
I got my start in computer security from the personal privacy side of the equation. Revelations over the past year have made me realize that I have become complacent, and it is time to upgrade some aspects of my personal digital privacy.
My first “paper” on security was an essay that warned that “someday, the government and large corporations will be able to search and manipulate hundreds of millions of bytes of information, giving them improper leverage over individuals, who won’t have the same access to computing power or storage”. I got a B. My high school English teacher said the writing was very good, but she couldn’t accept the premise 😦 That was in the late 1970’s.
I’ve had, but rarely used, PGP/GPG keys for email since the early 1990’s. I have friends who probably encrypt about 10-25% of their email, and sign almost 100%. Others encrypt and sign more, or less; some are more consistent about it, some less. I felt that this wasn’t necessary for me: I was a small enough needle in a large enough haystack that “computational privacy” probably wasn’t needed in my particular case.
I’ve run my own email servers on my own hardware, off and on, for years. I’ve done the same for personal web servers, photo galleries, and other personal storage. Over the past few years, I’ve made much more use of hosted services, like Gmail, and WordPress.com (for this blog) instead of building, maintaining and securing them myself on my own hardware under my own physical control. I’m going to have to re-think some of those decisions, I guess.
The Snowden revelations, coupled with high-profile cases of seizures of data and equipment from hosting providers, and the inability of those service providers to stand against the abuse of certain government powers has led me to believe that it’s time to step things up a bit.
I want to upgrade my personal privacy stance over the next few months. I’m going to have to re-learn lots of the details of encryption, look at products that didn’t exist a few years ago, look into newer encryption algorithms and key search technologies. I expect I’ll need to make changes in the way I use email and the web and in general communicate. There are a lot of good resources out there; I’ll share what I find.
I don’t plan to wear a tinfoil hat, become a crypto-anarchist, bury guns and ammunition in the desert, or buy gold. This isn’t going to be a knee-jerk reaction, just some slow steady Kaizen to improve my digital privacy.
Are your servers getting SLAAC addresses in addition to the addresses you are manually configuring? If so, read on…
You need to find and turn off the “A” bit in the Prefix Information option of your Router Advertisement packets. The “A” bit is on by default on most network routers, and the documentation that describes the interactions between the “M”, “O” and “A” bits is scattered across at least a half dozen RFCs.
When we first set up our IPv6 lab, we went through several phases. Initially we just did client subnets and hosts and let all the stations auto-configure (SLAAC). This all happened “magically” with the default behavior of all the operating systems and network gear we tested.
Then we split the clients and servers onto separate subnets. When we did the split we added a DHCPv6 server and turned ON the M and O bits for the client subnets. For the server subnets, we turned OFF the M and O bits and statically configured the IPv6 (and IPv4) addresses.
The client hosts did everything exactly as expected, gathering IPv6 addresses and other options, exactly as they would have using DHCP and IPv4.
But we never could quite get the servers to stop creating and configuring SLAAC addresses, whether the M and O bits were turned ON or OFF on their subnets. After making sure that we did NOT have DHCPv6 clients configured on these servers, we tested all four states, with nearly identical results.
In other words, each server would always end up with three IPv6 addresses:
- a globally unique (global scoped) static assigned address, the one we configured at boot time
- a globally unique (global scoped) SLAAC address, usually based on its MAC address
- the usual and expected link-local address (fe80::)
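Incidentally, that second (SLAAC) address is usually easy to spot: a modified EUI-64 interface ID derived from the MAC address carries the fixed ff:fe marker bytes in its middle. A quick sketch (the sample addresses are made up, from the 2001:db8::/32 documentation range):

```python
import ipaddress

def looks_like_eui64(addr: str) -> bool:
    """True if the interface ID carries the ff:fe marker bytes that
    modified EUI-64 (MAC-derived SLAAC) interface IDs contain."""
    packed = ipaddress.IPv6Address(addr).packed
    # Interface ID is bytes 8-15; the EUI-64 marker sits at bytes 11-12.
    return packed[11:13] == b"\xff\xfe"

print(looks_like_eui64("2001:db8::0211:22ff:fe33:4455"))  # True  (SLAAC-style)
print(looks_like_eui64("2001:db8::10"))                   # False (static-style)
```

Note this is only a heuristic: hosts using RFC 4941 privacy extensions generate random interface IDs without the marker.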
So, what else was going on? Most of the documentation we found (especially the RFCs) describes these two bits in excruciating and often contradictory detail! Take a look at RFC 4861 for the format of the Router Advertisement, and you’ll see the M and O bits right there in section 4.2. If there are other option bits that might control this, shouldn’t they be shown here?
By the way, the M and O bits are always OFF by default on all the networking gear we’ve seen so far (Cisco, Juniper and HP).
4.2. Router Advertisement Message Format
Routers send out Router Advertisement messages periodically, or in response to Router Solicitations.

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |     Type      |     Code      |          Checksum             |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Cur Hop Limit |M|O|  Reserved |       Router Lifetime         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                         Reachable Time                        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                          Retrans Timer                        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   Options ...
    +-+-+-+-+-+-+-+-+-+-+-+-
But in all four combinations of the M and O bits, and IF you aren’t running a DHCPv6 client, you get a SLAAC address in addition to the address you statically (manually) configure. How do you turn off “auto conf” if it isn’t controlled by flags in the Router Advertisement???
It turns out that there are actually three bits in the RA that control host configuration, not two, and so there are 8 possible cases of M, O and “A”, not four. So where is this mysterious “A” bit hiding?
The “A” bit is “hidden” in a Router Advertisement option (“Prefix Information”), which is described in section 4.6.2, about 10 pages farther along in the RFC. This option’s main purpose is to announce the valid address prefix available on the current subnet, but it also carries an “A” bit that controls whether or not a station on that subnet should do SLAAC. And unlike M and O, A seems to always be set ON by default.
4.6.2. Prefix Information
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |     Type      |    Length     | Prefix Length |L|A| Reserved1 |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                         Valid Lifetime                        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                       Preferred Lifetime                      |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                           Reserved2                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    +                                                               +
    |                                                               |
    +                            Prefix                             +
    |                                                               |
    +                                                               +
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Fields:

    Type           3
    Length         4
    Prefix Length  8-bit unsigned integer. The number of leading bits in the
                   Prefix that are valid. The value ranges from 0 to 128. The
                   prefix length field provides necessary information for
                   on-link determination (when combined with the L flag in
                   the prefix information option). It also assists with
                   address autoconfiguration as specified in [ADDRCONF], for
                   which there may be more restrictions on the prefix length.
    L              1-bit on-link flag. When set, indicates that this prefix
                   can be used for on-link determination. When not set, the
                   advertisement makes no statement about on-link or off-link
                   properties of the prefix. In other words, if the L flag is
                   not set a host MUST NOT conclude that an address derived
                   from the prefix is off-link. That is, it MUST NOT update a
                   previous indication that the address is on-link.
    A              1-bit autonomous address-configuration flag. When set,
                   indicates that this prefix can be used for stateless
                   address configuration as specified in [ADDRCONF].
So, that’s where the mysterious server SLAAC addresses come from. They are caused by the default-on “A” bit that is in the Prefix Information option to the Router Advertisement. Clear this A bit on your server subnets, and you’ll get only the IPv6 addresses that you configure, and no more SLAAC addresses as an extra bonus.
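If you want to verify what your routers are actually announcing, you can capture an RA and decode the Prefix Information option by hand; per the layout above, the flags byte carries L in bit 0x80 and A in bit 0x40. A minimal sketch of such a decoder (the sample option bytes are invented, using the 2001:db8::/32 documentation prefix):

```python
def parse_prefix_info(opt: bytes):
    """Decode an NDP Prefix Information option (RFC 4861 section 4.6.2).

    Returns (prefix_length, L_on_link, A_autonomous)."""
    # Layout: type(1) length(1) prefix_len(1) flags(1)
    #         valid(4) preferred(4) reserved(4) prefix(16) = 32 bytes total
    if len(opt) < 32 or opt[0] != 3 or opt[1] != 4:
        raise ValueError("not a Prefix Information option")
    prefix_len = opt[2]
    flags = opt[3]
    return prefix_len, bool(flags & 0x80), bool(flags & 0x40)

# A hypothetical option announcing 2001:db8::/64 with L=1 and A=1 (flags 0xC0):
opt = (bytes([3, 4, 64, 0xC0])          # type, length, prefix len, flags
       + b"\x00" * 12                   # valid/preferred lifetimes, reserved
       + bytes.fromhex("20010db8") + b"\x00" * 12)  # the 128-bit prefix
print(parse_prefix_info(opt))  # (64, True, True)
```

Seeing `A = True` here on a server subnet is exactly the symptom described above; it is the knob to clear in the router’s per-prefix RA configuration.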
After I figured out what was going on, I also found these web pages which each shed some light on the situation:
In case anyone hasn’t noticed, I’m not a big fan of the IPv6 transition technologies, such as Teredo, 6to4, DNS64, etc. I also don’t like “big bang”, everything today, or “flag day” cutovers either. Fortunately, for IPv6 there’s a good middle ground, dual-stack.
The consumer-facing case may be different, but here’s the strategy I like for transitioning an internal, corporate, or .EDU network to IPv6. You don’t have to build anything that you won’t be using in the future, so there’s no spending on technological cul-de-sacs or band-aids. You won’t have to install, maintain, and then eventually turn off any transition mechanisms. It’s just a steady, always-forward, straightforward path that will eventually lead you to 100% IPv6 and, eventually, allow you to turn off IPv4, if you ever so desire. You won’t have to; you just may not have many people to talk to on IPv4 in a few years.
You can begin this transition today, given time and a steady plan to refresh your technology over the course of a few years. If you have an urgent need for IPv6, you can go faster. If you don’t, or are time or money constrained, you can go slower, taking years to make the switch.
1. Boot to the head to anyone who insists on any transition technology for our internal networks, without a very convincing argument.
2. Dual-stack the (internal, client) networks, and provide “outbound” external dual-stack paths to the public Internet.
3. Dual-stack DNS, DHCP, and anything else needed to manage and operate the clients, such as Active Directory.
4. Dual-stack the clients – if an OS doesn’t do IPv6, chuck it or confine it to the legacy pit of doom, an IPv4-only subnet (or two).
5. Dual-stack the servers – do this on your regular refresh/upgrade cycle or as needed; if it can’t be upgraded, it goes into the pit.
6. Dual-stack any remaining services – your software vendors and internal developers have had enough time to sort out dual-stack network calls, or they go into the pit.
7. At this point your network, clients, servers and services are all dual-stacked. The users can reach everything on the public Internet, old or new, as well as all your internal services, on both IPv4 and IPv6. And guess what? By this time, you’ll have only two reasons to keep IPv4 around:
   - Your users might still need to reach old, crufty web sites and services out there in the real world.
   - You might still need to talk to some things in the pit of doom.
8. Finally, after years, you can look into turning off IPv4. Start by pouring petrol into the pit of doom and lighting it on fire.
9. Then start turning off IPv4, first on the services and servers, then the clients, and eventually the network. Take your time, you’re in no hurry. You can take years, if you like.
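The “dual-stack network calls” mentioned in step 6 mostly come down to iterating over getaddrinfo() results instead of hard-coding an address family. A sketch of the usual pattern in Python (on a dual-stacked resolver, IPv6 addresses are typically tried first):

```python
import socket

def connect(host: str, port: int) -> socket.socket:
    """Connect to host:port, trying each address family that name
    resolution returns, instead of assuming AF_INET or AF_INET6."""
    err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s          # first address that answers wins
        except OSError as e:
            err = e           # remember the failure, try the next address
    raise err or OSError("no addresses found for %s" % host)
```

Code written this way needs no changes at step 9: when IPv4 records disappear from DNS, the loop simply stops seeing them.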
We’re doing 1, 2, and 3 this year and next. We’ve started on 4, and that will continue for the next year or so. We can start on 5 as soon as we have time. New servers will be born dual-stack starting early next year. A year or three from now we clean up 6, and we’ll be at stage 7.
Then, and only then, do we think about Step 8, turning off IPv4. If that begins by 2017, I’ll be surprised. If Step 9 doesn’t start until 2020, I don’t care.
As long as I’ve got steps 1–4 done, I have a lot less pressure, and I can proceed on a rational, non-panicked basis to replace servers and services as part of a regular refresh cycle. For many companies, this is 3-4 (or even 5) years.
The consumer-facing case has completely different drivers and requirements, it gets a completely different plan and schedule. But I bet it won’t have any transition technologies in it, if I can help it.
This is rather cold-blooded and (I freely admit) a bit of a pipe dream. But if I can make this work, I’ll never have to justify, pay for, create, debug, support, and then decommission any of the transition technologies. I’ll always have a fall back if there’s a problem with a particular IPv6 implementation or if we run into a show stopper on vendor support for IPv6.
I hate building things that I know I’m only building as a temporary bridge. Spending time (and money) putting in the transition strategies can be better spent just moving forward, not sideways.
IPv6 just isn’t as hard as some might lead you to believe. There are still some implementation glitches here and there, but every day there’s better vendor support and fewer bugs.
Don’t be afraid to move forward, it’s easier than you think.
A recent DEFCON presentation highlights the need to accelerate adoption of IPv6 on your network. You can either turn off IPv6 on all your hosts (which will break things), or get on with it and deploy IPv6 “for real”.
(TL;DR: your hosts already have dual-stack activated, and not supporting IPv6 in your network opens up a man-in-the-middle (MITM) attack. Though long known, there is now a “one click” exploit available.)
There’s been a lot of discussion on various IPv6-related mailing lists with how to drive the transition, how to transition, and which (if any) of the transition technologies should be used.
In general, NONE of the transition technologies (other than dual-stack) address this particular MITM attack. They (for the most part) leave old IPv4 nodes as-is on your network, and try to translate protocols and hide IPv6 from those old nodes (and vice versa).
Personally, I find it quite heartening that many are making good business cases for aggressive adoption of native IPv6. Some are also providing good historical evidence that we’ve made similar transitions in the past, without extensive transition technologies, with good success:
On 8/8/13 1:40 PM, Ray Hunter wrote (v6ops):
Actually I think your reasoning and reference to the IPX and Appletalk
phase out would suggest it’s easier to make a bold call: move to IPv6
ASAP for critical systems via dual stack, and for the rest you draw a
box around it and call it legacy and run it on IPv4 until it dies a natural death.
IMHO Going half way with NAT64/DNS64 just prolongs the pain and locks
you into a transition technology that is expensive and difficult to
operate for the life cycle of that box, and which has to remain in place
until the last app is migrated or switched off.
I’ve been in a fair number projects where you sometimes just have to
dare to cut the cord whilst maintaining a process to find out what has
broken. So one valid IPv6 only migration strategy might be: “If it’s
important, they’ll migrate before a flag day date. Otherwise they get
I cannot agree enough with the “prolongs the pain” and “locks you into a transition technology” observations.
At work, we’re going on the assumption that we’ll be able to go dual-stack and not need any translation. So far, that looks viable for our internal networks. When we get to the consumer-facing stuff, well, we’ll see.
Good News, everyone!
IPv6 adoption continues to double, year on year. Of course, that’s only three years of baseline, but things are certainly moving in the right direction.
As this article points out, if this rate continues, more than half of Internet users could have IPv6 within 6 years. This goes along with estimates of IPv6-only customers reaching 20% by 2017.
It remains to be seen if this adoption rate can continue. However, events such as Switzerland moving from 3% to 10% adoption in a single month are interesting. They show that a single large ISP can quickly make a huge difference in adoption rates, as they turn up large portions of IPv6 connectivity in large deployment events. I expect Comcast to quickly begin to have a similar impact on US IPv6 availability later this year.
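The “more than half within 6 years” figure is just compound doubling. A back-of-the-envelope check, assuming (hypothetically) that global adoption starts at around 1.5%:

```python
# How many annual doublings until adoption passes 50%?
# The 1.5% starting point is an assumption for illustration.
pct, years = 1.5, 0
while pct <= 50:
    pct *= 2
    years += 1
print(years)  # 6  (1.5 -> 3 -> 6 -> 12 -> 24 -> 48 -> 96)
```

Of course, no growth curve doubles forever; adoption will flatten into an S-curve well before 100%.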
My company buys a fair amount of IT kit each year. We visit and are visited by vendors almost weekly. Lately, “the talk” has become part of the conversation: “How’s your IPv6 support?”
We’ve had this discussion with our network vendors for quite a while, now we’re talking to the rest of the vendors: storage, cloud services, middleware platforms, monitoring, and security.
A very select few of them have answered immediately: “Of course, we’ve had it for years. What can we do to help you with your IPv6 transition?”
Others have said, “It’s on our roadmap, about X months out, would you like to be in the Beta?”
But too many have responded, “IPv6? Is that important to you? You’re the first customer who has asked about it. We’ll get back to you…”
I predict a rocky next 5 years for the vendors in the last group. Smaller, more agile, more forward-thinking upstarts are going to make life “interesting” for those folks.
You should have “the talk” with your vendors. If they can’t help you move forward on IPv6, you’ll need to find alternatives that can.
I missed the morning talks as they conflicted with my advanced IPv6 class, but I did catch the afternoon sessions.
It seemed like there were three camps at INET Denver: people already embracing the future by deploying IPv6, people trying to avoid IPv6 as long as possible, and people who planned to make money from both of the other two camps.
Let’s talk about the “wait as long as possible” camp.
For almost two decades the argument from the “business side” of IT was that there was “no compelling business reason to move to IPv6“. (That article is from 2009, by the way, but I haven’t seen a new argument since then.) It’s true, there’s been no “killer app” that everyone demanded that was only available via IPv6. It’s also true that doing nothing was a legitimate strategy for quite a while. After all, what good is it to have a telephone (IPv6) if no one else has one? Until recently, moving to IPv6 truly didn’t have a compelling business argument. After all, doing nothing costs nothing. Mostly.
The Internet has changed. And we’re (almost) out of IPv4 addresses, so you have to do something. Sadly, too many ISPs have tried to do what they think is the cheapest and most minimal amount of work they could get away with. That’s Carrier Grade NAT (CGN).
The economics have changed, too. Lee Howard of Time Warner Cable had a very interesting talk where he deconstructed, and then destroyed the myth that CGN is cheaper to deploy than dual-stack. Since he’s the Director of Network Technology for Time Warner Cable, I guess he knows more about the ISP business than most people.
Mr Howard’s talk shows that CGN will cost you in (unhappy) customers, support costs, and only delay the inevitable, when you’ll have to move to dual-stack anyway. His talk effectively demonstrates that the infrastructure and operational costs of a CGN network are more expensive than dual-stack.
There’s your business case. Deploy IPv6 and save money. Done. Now get to work.
This post looks at sizing the IPv6 Global Routing Prefix and creating the subnet plan for the previously defined hypothetical company.
From the prior post, remember that we have these constraints:
- The Global Routing Prefix is assigned by an Internet Registry according to its policies. You must justify the size of the allocation you request.
- Subnets are “always” on a /64 boundary (host identifiers are “always” 64 bits)
- “Sites” are groups of subnets on a /48 boundary
- Only networks with prefixes of /48 or shorter (that is, /48-sized or larger blocks) are considered “publicly routable” by most ISPs. They won’t announce routing data for anything smaller (longer prefixes).
The first thing to look at is the needed size of the address prefix. Here’s a modified diagram from RFC 4291. This one includes the specification that the Interface ID is fixed at 64 bits.
The general format for IPv6 Global Unicast addresses is as follows:

    |         n bits         |   m bits  |        128-n-m bits        |
    +------------------------+-----------+----------------------------+
    | global routing prefix  | subnet ID |        interface ID        |
    +------------------------+-----------+----------------------------+
    |         P bits         |  S bits   |          64 bits           |
    +------------------------+-----------+----------------------------+
What we need to figure out is what is the size of prefix (P bits) we need, in order to get enough subnets (S bits) to create a reasonable network architecture. There’s no real limit to the number of hosts in a subnet, but subnets are used for all kinds of things including routing and access decisions. Since this company is in North America, we’ll use policies from ARIN.
The ARIN Number Resource Policy Manual (NRPM) uses the number of “sites” to determine the prefix size, so let’s count the “sites” in this company.
While there are only six office locations, there are actually more “sites”. Two locations actually have four sites each, as they each house four completely unique sub-organizations, each meeting the definition of “site” from the NRPM. Two more locations each house two sites, and the last two locations each house a single site. At least two of the locations have their own Internet connections, meaning that they must have at least /48 assignments to be able to announce their routes publicly, which is additional justification that there are multiple sites in some locations. That’s 14 sites in six locations.
In the three co-location facilities, there are independent complexes of consumer-facing services, plus extensions of the office (internal) networks for DR. At each co-lo, the consumer services live in distinct “sub-facilities”, each leased to a separate business entity and therefore a unique site. There are six sub-facilities spread across the three hosting locations. The sub-facilities have separate ISPs and must be able to announce their own public routes, providing additional justification that they are distinct sites. Two co-lo facilities host DR sub-facilities, which are also separate sites. Two co-lo’s also host internal services used by the office sites. This means the three co-lo facilities actually contain 10 unique sites.
That’s a total of 24 sites in all. Per the NRPM’s assignment policy for organizations with multiple sites, the allocation of a /40 prefix is justified.
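The arithmetic behind the /40 is worth spelling out: 24 sites need 5 bits of site ID (2^5 = 32 ≥ 24), which would strictly call for a /43; since registry assignments land on nibble (4-bit) boundaries, that rounds up to 8 bits of site ID, a /40. A quick sketch:

```python
import math

sites = 24
site_bits = math.ceil(math.log2(sites))      # 5 bits of site ID needed
nibble_bits = math.ceil(site_bits / 4) * 4   # round up to a nibble boundary: 8
prefix = 48 - nibble_bits                    # /40 -> room for 256 /48 sites
print(site_bits, prefix)  # 5 40
```

The rounding buys useful headroom: a /40 leaves space for 256 sites, more than ten times the current count.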
We now know that P is 40 bits and the host ID is 64 bits, so we have 24 bits for “subnet”. We also need to work in the /48 definition for “site”, so we end up with something that looks like this. We’re switching from /prefix notation to showing the actual IPv6 address format, which is how people will see the address plan:

    PPPP:PPPP:PPCC:SSSS:HHHH:HHHH:HHHH:HHHH
In this format:
- PPPP:PPPP:PP represents the /40 IPv6 address prefix assigned to us by ARIN.
- CC represents an 8-bit (2 nibble) “site code” that represents a location, usage, organizational unit or other network “slice” as needed. There are 256 site codes in this plan, numbered 0x00 through 0xFF. A “site” is a /48 prefix that may be announced publicly via an ISP for Internet routing. Sites may be internal (behind a firewall, not announced) or external (publicly announced and routed) as defined by each region.
- SSSS represents a 16-bit (4 nibble) network (subnet) number. These are the traditional “subnets” as used in IPv4, there are just more of them and they are larger. Subnets are on the /64 prefix boundary. Subnets are unique within a single site code, but are not unique beyond site code boundaries. There are 65536 possible subnets per site code, numbered 0x0000 through 0xFFFF. Subnets are NOT publicly routable and will not be accepted by most ISPs for public routing.
- HHHH:HHHH:HHHH:HHHH represents the 64-bit (16-nibble) host interface identifier. This is the same as the host part of an IPv4 address; it is just much larger. Host identifiers can be assigned in many ways including SLAAC, DHCPv6 or by static assignment.
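Given those fields, building a concrete /64 from the plan is just bit-shifting. A sketch using Python’s ipaddress module (the /40 prefix shown is a made-up documentation value, not a real ARIN assignment):

```python
import ipaddress

def subnet_for(prefix40: str, site_code: int, subnet: int) -> ipaddress.IPv6Network:
    """Build the /64 for an 8-bit site code and 16-bit subnet number
    under a /40 global routing prefix."""
    base = ipaddress.IPv6Network(prefix40)
    if base.prefixlen != 40:
        raise ValueError("plan assumes a /40 assignment")
    # Site code occupies bits 40-47 (to the /48 site boundary);
    # the subnet number occupies bits 48-63 (to the /64 boundary).
    addr = (int(base.network_address)
            | (site_code << (128 - 48))
            | (subnet << (128 - 64)))
    return ipaddress.IPv6Network((addr, 64))

# Site code 0x2A, subnet 0x0010 under a hypothetical /40:
print(subnet_for("2001:db8:aa00::/40", 0x2A, 0x0010))  # 2001:db8:aa2a:10::/64
```

Generating the plan programmatically like this avoids the nibble-misalignment typos that creep in when /48 and /64 boundaries are carved out by hand.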