Archive for category Computing History

Preparing for SIMH – Setting up the Google Cloud account and installing the Google Cloud SDK

This installment shows how to set up a Google Cloud account in order to run the SIMH emulator in a GCP instance. This is NOT a complete Google Cloud tutorial, but it does explain the settings needed for this project.

In this prior post, I showed how the guest OS will run on the emulator, in a virtual instance in Google Cloud. This post talks about getting a Google Cloud account set up to make all this possible.

Google has written a huge amount of documentation: there are tutorials, quickstarts, API docs, examples, and labs. If you have trouble, Google Cloud Help has everything you need to get unstuck.

In order to prepare for the rest of this series and for running SIMH in GCP, start with the Google Cloud console and go through this example. It uses the console to do part of what we'll do later with scripts.

Those examples show how to set up a project and enable billing. After that, a VM (instance) is created and Linux is installed. Once you have logged into the instance and logged out again, you can delete it to clean up.

Follow the example, and your Google Cloud account will be ready for the rest of this series.

You’ll also want to set up SSH keys for use with “oslogin” – see the documentation here.
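For example, once the SDK (described below) is installed, enabling OS Login and registering a key looks something like this (a minimal sketch, assuming your public key lives at the default ~/.ssh/id_rsa.pub):

    # Turn on OS Login for every instance in the project
    gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

    # Register your public SSH key with your Google account
    gcloud compute os-login ssh-keys add --key-file ~/.ssh/id_rsa.pub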

Keep the project open, as you’ll need it later to run the emulator instance.

Finally, we're going to be using BASH scripts and the Google Cloud SDK (AKA gcloud) for all the future examples.

You’ll need to install the SDK, using these instructions for your particular operating system.
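Once it's installed, initializing the SDK takes just a few commands. A quick sketch:

    gcloud init               # sign in and choose a default project
    gcloud components update  # bring the SDK components up to date
    gcloud config list        # confirm your account and project settings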

Next time we'll begin the first bash script, which uses gcloud to set some configuration variables we'll need in order to create and run the SIMH instance.
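To give a flavor of where we're headed, a skeleton of that script might look like the following; the names and values here are placeholders for illustration, not the ones we'll actually use:

    #!/usr/bin/env bash
    # simh-config.sh - illustrative skeleton; all values are placeholders

    export GCP_PROJECT="my-simh-project"   # your project ID
    export GCP_ZONE="us-west1-b"           # a zone near you
    export GCP_INSTANCE="simh-host"        # a name for the VM

    # Make these the gcloud defaults for later commands
    gcloud config set project "$GCP_PROJECT"
    gcloud config set compute/zone "$GCP_ZONE"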




An overview of installing and using SIMH in the Google Cloud (GCP)

As I mentioned in this prior post, I'm running some legacy operating systems (Multics, UNIX v7) using SIMH in Google Cloud. In this post I'll give an overview of the installation process, and of how the legacy OS stacks on the emulated hardware CPU, etc.

The process of using Google Cloud to run SIMH for hosting a legacy operating system has these major steps, no matter which CPU you’ll be emulating, or which operating system you’ll be hosting.

  1. Configure your Google Cloud account. Since we’ll want to script all of this later, we’ll save some key values in a script that can be included in later steps.
  2. Configure the GCP instance. This involves selecting the zone, CPU (instance type), operating system, etc. Again, this all gets saved in a script for future (re)use.
  3. Create the GCP instance. This creates the virtual host with the proper machine (CPU) type, in the correct location, and does the initial operating system install (see the sketch after this list). When this is done, you have a virtual instance running Linux (typically) in the cloud, albeit with an un-patched operating system.
  4. Patch the base operating system.
  5. Install the development tools that are needed to compile SIMH.
  6. Load the SIMH source code, then configure and compile it. At this point you have a SIMH binary that will emulate the desired CPU(s), all running in GCP.
  7. Copy (and then customize) the files needed to run the desired guest OS on the emulated CPU to the running instance. These will include SIMH configuration files, disk image files, and other SIMH resources, and may vary considerably depending on the version of SIMH and the guest OS.
  8. Start SIMH, which will bootload the guest OS. If this is the first time the OS has been booted, you may then need to log into SIMH to issue commands, or even directly into the running guest OS, for final configuration.
  9. After this, you can halt the guest OS and save the disk image. This saved state lets you reboot the system again (and again) with emulated persistent disk storage.
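As a taste of step 3, creating the instance with gcloud can be a single command. This is only a sketch; the machine type, zone, and image family shown here are illustrative, not necessarily what the series will use:

    gcloud compute instances create simh-host \
        --machine-type=f1-micro \
        --zone=us-west1-b \
        --image-family=ubuntu-1804-lts \
        --image-project=ubuntu-os-cloud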

At this point, you’ve got a guest operating system, running on an emulated CPU, on top of a Linux system, running on a hypervisor, in the cloud.

It looks something like this:

    guest OS (e.g. Multics or UNIX)
      running on SIMH (the emulated CPU)
        running on Linux (the instance's base OS)
          running on the GCP hypervisor
            running on Google's physical hardware

For simplicity’s sake, we can combine some of the steps above into fewer steps, each of which can be a separate script.

  1. Capture configuration information – a single script to set environment variables for the Google Cloud account and the instance configuration.
  2. Create the GCP instance, install the operating system, and patch it.
  3. Install the development tools needed to build SIMH, load the SIMH source code, and configure and compile it. Copy the needed SIMH configuration files at the same time (see the sketch after this list).
  4. Copy (and then customize) the files needed to run the desired guest OS on the emulated CPU to the running instance. This will be different for each operating system.
  5. Start SIMH, which will bootload the guest OS.
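To make steps 3 and 5 concrete, here's a hedged sketch of building SIMH and booting a guest on the new instance. The simulator target and the configuration file name are placeholders, and the details vary by guest OS:

    # Step 3: development tools, source code, and build
    # (Debian/Ubuntu package names assumed)
    sudo apt-get update && sudo apt-get install -y build-essential git
    git clone https://github.com/simh/simh.git
    cd simh && make pdp11        # build just the PDP-11 simulator

    # Step 5: start SIMH with a configuration file that attaches the
    # disk images and boots the guest OS (file name is hypothetical)
    ./BIN/pdp11 unixv6.ini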

Next time, we’ll look a little bit more at the GCP account setup and capturing the account and instance configuration information.



Retrocomputing – using SIMH to run Multics on Google Cloud Platform (GCP)

Last Fall (Oct 2018) I started playing with SIMH, and using it to run some rather ancient operating systems in the Google Cloud (GCP). So far I've been able to run Multics, UNIX V6 (PDP-11), and 4.0BSD (VAX).

I started down this path by using the dps8m fork of SIMH to run Multics on a Raspberry Pi 3. This worked very well, and for a single user it produced performance that matched the original mainframe hardware. Not bad for a US$35 pocket-sized computer emulating a US$10+ MILLION mainframe (of the 1980s). Of course, Multics supported 100s of simultaneous users via timesharing, but at its heart Multics CPUs (up to 8 per system) ran at about 1-2 MIPS each, and the system supported up to 8M 36-bit words (32 Mbytes) per memory controller, with up to 4 controllers for a grand total of 128 Mbytes per system. Yes, that's Mbytes, not Gbytes.

For comparison, the $35 Pi 3 B+ runs at about 1000 MIPS and has 1 Gbyte of RAM. The Google Compute f1-micro uses 0.2 of a ~1 GHz CPU and has 0.60 Gbytes (600 Mbytes) of RAM, making it a reasonable fit.

I've been building tools to allow anyone to install SIMH and any of these operating systems in the cloud, so that they can be experienced, studied, and understood without having to use dedicated hardware or understand the details of using GCP or SIMH.

In this series of posts, I’ll introduce how I’m using GCP (with scripting), a little about SIMH, a little bit about the hardware being emulated, and the historical operating systems and how to run them all in the GCP cloud, pretty much for free.

You should start by looking into Google Cloud Platform (GCP) and using some of their tutorials.

All of the SIMH examples I will show are running Ubuntu Linux on tiny or small GCP instances.

You can get started by reading about SIMH on Wikipedia, at the main SIMH web site, or at the Github repository for the software.



2018? Wait, what?

Wow, I'm behind. It was a busy year, and there wasn't a lot going on that I could really talk about publicly.

The recent Meltdown and Spectre bugs have brought back some memories from Orange Book days. I've been spending a lot of time thinking about "IT transformation" and non-technical stuff, and I've been to the UK and Japan twice each, which may become the "new normal".

Let’s see what happens in the next 12 months.


IPv6 – CGN and Teredo Considered Harmful

There, I said it. The so-called “IPv6 transition strategies” are making it harder, more complicated and less secure to deploy IPv6 than just “doing the right thing”.

Carrier Grade NAT (CGN) and Teredo (among others) are the last gasps of an IPv4 world, and have no place in the modern Internet. While they may offer short-term advantages to network operators, they will cause problems for end users until they are finally phased out. Dual stack would be a better transition process, especially for customers.

CGN is, as much as anything else, a way for carriers with a large network or a large installed base of end users to make the fewest (and hopefully least expensive) changes in their networks. They are betting that by introducing a small number of large-scale NAT devices on the border between their networks and the Internet, they can avoid making sweeping internal network changes or upgrading CPE (Customer Premise Equipment).

At best, even when working correctly, CGN breaks end-user accountability, geo-location, and the end-user experience. On top of that, it will slow IPv6 adoption, force "true IPv6" users to adopt a host of operational work-arounds, and complicate deployment of next generation mobile and Internet applications.

CGN is inherently selfish on the part of the network operators that deploy it. They are saying “I want to spend less money, so I’m going to force everyone else to make changes or suffer in order to continue to talk to my customers.”

Or, as Owen Delong put it in his excellent look at the tradeoffs in CGN:

Almost all of the advantages of the second approach [transition to CGN and avoid investing in IPv6 deployment] are immediate and accrue to the benefit of the provider, while almost all of the immediate drawbacks impact the subscriber.

The next part of my rant has to do with Teredo, a “last resort transition technology”.

Like CGN, Teredo promises to allow end-user equipment to connect to the public IPv6 Internet over IPv4. It does this by "invisibly" tunneling your IPv6 traffic over the public Internet to a "Teredo gateway", which performs a 4to6 network translation and passes your traffic on to the desired IPv6 destination. Teredo is implemented transparently in some Microsoft operating systems and can, by default, provide an IPv4 tunnel to the outside world for your IPv6 traffic. It can also provide an "invisible" tunnel from the outside world back into the heart of your network. And of course, all your network traffic could be intercepted at the Teredo gateway.
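If you're curious whether Teredo traffic is crossing your own network, it's easy to spot: Teredo encapsulates IPv6 in UDP on its assigned port, 3544. A quick check (the interface name is a placeholder):

    # Watch for Teredo-encapsulated traffic leaving the network
    sudo tcpdump -ni eth0 'udp port 3544'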

Teredo security has been a hot topic for years, with some concerns raised shortly after Teredo's standardization in 2006, and RFC 6169 finally providing IETF consensus in 2011. Sadly, Teredo security must still be discussed, even though Teredo accounts for only 0.01% of network traffic to dual-stacked resources. Fortunately, there's a move in the IETF to declare 6to4 technologies (including Teredo) "historic". Teredo will complicate network security until it is gone.

I for one, cannot wait for both CGN and Teredo to be consigned to the dustbin of history.


It was 30 years ago today….

… that I logged into a UNIX system for the first time. It was also three days after the ARPANET transitioned to TCP/IP, and my first day at a new job.

The place was Logicon, in San Diego. The system was Programmer's Workbench (PWB) UNIX on a DEC PDP-11/70. The system was in single-user mode, since the root filesystem was corrupt and the senior programmers who might have been able to help had quit during the month before. I think the last one was out of the country for the next 3 weeks. And my new (completely non-technical) boss had actually been hired after I was, but he was starting at Logicon the same day. It was an interesting beginning to my new job.

I spent that first week at my new job learning enough UNIX to figure out the icheck, dcheck and ncheck commands to repair the filesystem. I eventually got the root filesystem fixed, was able to create an account for myself and bring the system back up into multi-user mode. File systems were much simpler then. So simple in fact that I later learned to use the ed editor to repair (or just change) filenames by editing the directory files.

As soon as the PWB system came back up, I learned that we were now “off net” as the ARPANET had transitioned from NCP to TCP/IP on the 1st of January, and there was no one to port and debug the needed TCP/IP stack for our system. We needed that connection to communicate with our government customer, deliver software and work on our contract for the US Navy.

Fortunately, Logicon re-hired one of the senior programmers; he spent January and part of February working on the TCP/IP code to get us back on the ARPANET. We shared an IMP with the Naval Ocean Systems Center (NOSC, now part of SPAWAR) and UCSD. I met two long-time friends, Ron (@NOSC) and Brian (@UCSD) through our ARPANET connection, and we still keep in touch.

I always remember "ARPANET flag day", because that's when I got my start with UNIX and the ARPANET. That led to work on the Internet, HPC, and computer security.

Strangely enough, part of my current job is to work on a transition to IPv6. I have now had direct experience with IPv4, IPv5 and IPv6.

I owe a large debt to all the people I've worked with and the USENIX (and later LOPSA) community of friends. You've all been wonderfully helpful, often insightful, and always friendly. Thank you all.


Security – why programmers should study computing history

You can now add LinkedIn, eHarmony and last.fm to the long list of major sites that have had poor password security in their user database designs. The saddest part is that, in the case of LinkedIn at least, this was apparently completely avoidable. (I haven't found enough details to comment on the others, yet.)

Protecting stored user passwords is not rocket science.  This problem was pretty much solved in the 80s and 90s: Use a salted one-way hash function of sufficient strength to resist a dictionary attack.

(LinkedIn's mistake was to use hashes, but not to salt them.)

That's it. Really. UNIX has been using a salted hash since the late 1970s, initially with a hash based on DES. Since that time, as computing speeds have increased, new (salted) hash functions based on MD5, Blowfish, and SHA-2 have all been introduced.

In other words, stored password security has been a solved problem for over 30 years. The concept is the same; only the algorithms have needed to be updated as Moore's Law has dictated.

This is just one reason that programmers (and sysadmins) should study history, if only the history of computer security. Oh, and if you're not a cryptologist, please use well-vetted library functions for security-critical code.
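For illustration, a salted, crypt(3)-style hash is one command away on a modern Linux box; this sketch uses OpenSSL's SHA-512 crypt mode (available in recent OpenSSL releases) and lets it generate a random salt:

    # Produce a salted SHA-512 crypt hash; the random salt is stored
    # as part of the output: $6$<salt>$<hash>
    openssl passwd -6 'correct horse battery staple'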



