17 years, 111 titles, 6 hardware launches.

Three numbers can’t properly summarize my career at PlayStation, but they’ve helped me put it in perspective and reminded me of the best thing about working there.

Through all those launches, all those challenges, all the successes (and a few failures!), the absolute best thing about working at PlayStation has been the people I was privileged to work alongside.

To my PlayStation Family…

July 14th, 2020 was my last day at PlayStation. Seventeen years and 21 days since I came on board to help prepare for the SOCOM II online launch.

I regret that I was not able to say goodbye to you all in person. COVID-19 sucks.

The passion that Worldwide Studios IT team members bring to work each and every day makes it a magical place to be. Working with you to support the Studios, to help those incredibly talented people deliver the best games in the world, was a privilege and a pleasure. You consistently deliver solutions to the Studios that push up to (and sometimes beyond!) the limits of technology, to ensure that those developers will have whatever they need to deliver their visions, no matter how ambitious, to our gamers. Thank you for allowing me to be part of the journey we took alongside our Studio partners from SOCOM II to The Last of Us Part II.

The business side of games brings with it a completely different set of challenges, and my colleagues on that side of IT face those with a style and culture of their own. They too step up, to ensure that the business of delivering consoles and content to our players will run smoothly and efficiently. Working with you was challenging and rewarding in completely different ways, and opened new doors and new opportunities to me.

For my friends in the Studios (including the unsung heroes in PDSG, VASG, Audio, and FPQA), those amazing gatherings of the most creative and passionate people in the business – thank you for welcoming in an “outsider” and allowing me to try to make it easier for you to deliver your incredible results. The magic, the passion and the commitment to excellence that you demonstrate each and every day sets a very high bar for us all. Thanks for allowing me to see your vision and goals, and (hopefully!) help you deliver what you wanted to create. Your commitment to deliver the best content in the world, to excite, astound and amaze our gamers is inspiring.

To my PlayStation family all around the world: I wish you all the best for the PlayStation 5 launch and beyond. I look forward to seeing (and buying!) all the spectacular, world-changing games that you will continue to create.

Someday, when this is all behind us, we will have those tasty beverages together. For my friends overseas, keep an eye out, for some day I may tap you on the shoulder at the Chlacan, or in BrewDog! Or maybe the bar at the Strings, or the Tokyo Whiskey Library! For you locals, I’ll see you at Studio K! My new gig is just around the corner!

Seventeen years – half my professional life. One hundred eleven titles – entertainment for at least 200 million people over those years. Six hardware launches, from the PlayStation 2 online adapter to PlayStation 4 Pro (and almost! PlayStation 5). Numbers can’t tell the tale of the experiences we shared over those years, titles, and consoles.

Robert Heinlein once had a character say “When the ship lifts, all debts are paid.” He was so very wrong. My ship has journeyed to a new port, but I will always owe a debt of gratitude to you, my PlayStation family, for your friendship and support through all we achieved together. A debt I will never be able to repay.



The Night All the Disk Drives Crashed

This is a story from very early in “a friend’s” career, concerning an over-zealous computer operator, failed disk drives, mainframe computing, conspiracy, and a long-held secret.

In the early days of computing, computers were rooms full of huge racks, and disk drives were the size of washing machines. The disk packs themselves were stacks of aluminum platters that looked like wedding cakes in their smoked plastic covers and they weighed upwards of 20 lbs.

“Strings” of around 8 drives would be connected to a disk controller cabinet. A mainframe could have one or more controller cabinets. Each of these washing machines held a whopping 176 MBytes. Yes, that’s about 100 3.5″ floppies (remember those?), or 1/1000th the storage of that SD card you just threw away because it was too small to be useful.

Yeah, stone knives and bearskins, indeed.

A typical mainframe installation would have rows and rows of washing machines and dedicated people called “operators” who would mount tapes, switch disk packs and run batch jobs according to run books. Run books were essentially programs followed by humans in order to make things happen.

“A Friend” was a student intern in the IT department at a factory that made computer terminals for a large mainframe company. There were two PDP-10 mainframes, the “small” (TEST) system used for testing factory software things, and the “big” one that ran the mystical, mysterious and oh-so-important PRODUCTION. The TEST machine had one controller with six RP06 drives and the big PRODUCTION machine had three rows of eight RP06 drives, each row with its own controller. This becomes important later. They looked a lot like this, actually.

If the PRODUCTION machine wasn’t running, the entire factory stopped, leaving almost 2000 workers twiddling their thumbs at a hefty hourly rate. This was considered a Bad Thing and Never to Be Allowed Upon Pain of Pain.

It was common for different batch jobs to have different disk packs mounted. When you ran payroll, you put in the disk packs with the payroll data. When you ran the factory automation system a different set of packs, and when doing parts inventory true-up a third set, etc. Backups were done to rows and rows of tape drives, but that’s a topic for another story.

At night, on the 3rd shift, after the production jobs had all completed and the backups to tape were all done, there wasn’t a lot for the operators to do. On these slack evenings it was common, permitted and expected that the operators would put in the “GAMES” disk pack and play ADVENT, CHESS, or whatever mainframe game was on the most recent DECUS tape.

Ancient disk drives and packs were not sealed, and it was possible for some dust or even (GASP) a hair to fall into the drive “tub” when the pack was changed. Since the heads “flew” over the platter at a distance measured in microns, or 1/100 the thickness of a hair, any dust would cause a “head crash”, often sending the heads oscillating, skipping across the surface of the platter. So in a “head crash” both the drive heads and the platter were damaged. Here’s a diagram from that era showing all this.

When you changed disk packs, the platters would start to spin and the air filtration system would run. Only after about 60 seconds, after the air had been filtered, would the heads extend out onto the platters and begin to “fly” on a cushion of air.

Late one night after production was ended, the lead operator decided it was time to play some games. As was his privilege, he directed the junior operator to change out the disk packs on the “little” TEST mainframe and load the “GAMES” pack while he (the Senior) went to visit the little operators room, and also step outside for a needed cigarette (and likely also a nip of tequila from his hip flask, it being Arizona).

While the lead operator was out the junior dutifully swapped the GAMES pack into drive T (for test) 05. As it was spinning up, the washing machine emitted a set of beeps and displayed the “FAULT” light and spun back down.

Being a dutiful, and very new operator, the junior wanted to make sure that the ever-so-important lead operator could play the newest games upon his return, so he moved the GAMES pack from the faulty disk drive T05, to the next in line, unit T04. Once again, during the spin up phase, the drive FAULTed and spun down.

So he moved the GAMES pack to unit T03. Which promptly faulted.

The junior operator, being no slouch, realized that there was something wrong here and decided that there was a problem with the TEST mainframe’s single disk controller. Because the odds of three drives failing at the same time were inconceivable. It had to be the disk controller!

So he mounted the GAMES pack into the disk drive labeled P12 on the PRODUCTION mainframe. Which also faulted. The same with P11.

How odd, he thought, another disk controller failure. So he tried the GAMES pack in P05, which while still on the PRODUCTION mainframe, was on disk controller 0, not controller 1.

In all, the junior operator valiantly tried to mount the GAMES pack in six drives, across three disk controllers, on both the TEST and PRODUCTION mainframes. He knew that the lead operator loved his games, and he wanted to demonstrate his perseverance in following orders.

By the time the lead operator came back from his smoke/tequila break, the junior operator had destroyed the heads in six very expensive disk drives.

We later discovered that the original head crash had caused the heads to skitter into the platter, leaving a dent in the aluminum substrate.  When we examined that pack later, it looked like someone had stabbed the platter with a screwdriver, leaving a raised crater that was VISIBLE TO THE NAKED EYE!  So of course, each time he moved the pack to a new drive, the heads quickly crashed into the to-them Himalayan-sized mountain of aluminum, damaging another set of read/write heads and incidentally spraying oxide dust throughout the drive mechanism itself.

The lead operator had the presence of mind to call in the lead system administrator. As this was going to be a dirty job, they also called in the lowly student intern (“my friend”) so they would have TWO very junior someones to crawl under the almost 3 feet deep raised floor to drag the heavy cables (often called “anaconda cables” due to their size) as the machines were reconfigured.


The four of “them” spent the late evening and early morning re-cabling drives between the two systems so that Production could run starting at 8am. The in-house Field Engineer (FE) was called the next day to change all the drive heads and clean all the drive air filters, a two-day job. He happily joined the conspiracy as it was immediately obvious to him what had happened. Because he had seen the exact same thing happen at a Major University the year before. They had lost 8(!) drives to a zealous student operator trying to load a disk pack full of ASCII porn pictures. The Senior FE conveniently had a junior FE who needed some extra practice on this incredibly tedious task, having annoyed said Senior FE by interrupting him while he was “explaining computers” to (snogging with) the cute new secretary, in his office late one afternoon.

The junior operator was sworn to secrecy and paid hefty bar tabs for all involved for several months, including a trip to a strip club across town. The intern was promised a good grade and evaluation, and the Junior FE served his multi-day penance never knowing the whole story.

The rash of crashed disk drives was chalked up to a faulty A/C filter in the first failed drive. Said A/C filter having been created by the Senior Field Engineer taking it outside into the Arizona desert and bashing it into a small bush. All the drive heads had been scheduled for replacement and alignment in three weeks anyway, so there was no actual loss to the company.

It’s been over 40 years since that long night crawling under floor tiles and I still remember the lessons of that night. “Dust is bad”, “stop and think”, “know when to call for help,” the value of learning from the mistakes of others, and most importantly, how to keep a secret.


Some old-school UNIX shell hackery using “tr” for a UNIX v6 kernel build

I had to resort to some old-school UNIX shell hackery to get V6 UNIX running in SIMH as an automated process.

If you refer to the earlier posts, you’ll see that there were three main steps in getting a well-running UNIX kernel with new device drivers created:

  1. Boot from “tape” and copy an image of the root filesystem onto a simulated RK disk
  2. Boot from the RK disk, modify source code for several programs such as “df” and the kernel. Compile the new kernel, copy it into place in the root filesystem.
  3. Boot the new kernel from the new root filesystem.

This post covers step 2, which involves creating and editing several files to include information about new device drivers, then compiling the new kernel. The entire process is very well documented at https://gunkies.org/wiki/Installing_Unix_v6_(PDP-11)_on_SIMH so I won’t cover it in detail.

Recall that while we have a running V6 system, there’s no way to copy files into the virtual machine. We can enter shell commands, but not copy files into the emulated system. This means that we have to play games with “cat”, “ed” and other UNIX commands to create new files or edit existing ones. There’s a bit of a complication in that to enter commands into UNIX running within the SIMH emulator, we have to use the SIMH EXPECT/SEND commands. These are documented in the SIMH user’s guide for Version 4.

Also, remember that this is the 1975 “sh” shell. It’s very primitive. If we had the modern “here document” feature, this would have been much simpler, but here documents didn’t arrive until the Bourne shell in Version 7 (1979). Instead we’ll have to rely on the commands that were there 44 years ago.
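For comparison, here is what feeding input to a program looks like with a modern here document. I’m using `cat` as a stand-in for `mkconf`, since mkconf only exists inside the emulator; the device list is the same one sent to mkconf later in this post:

```shell
# A here document feeds multiple lines to a program's stdin directly
# from the script -- exactly the feature the 1975 sh lacked.
# (cat stands in for the emulator-only mkconf program.)
cat <<'EOF'
rk
tm
tc
8dc
lp
done
EOF
```

Run in a modern shell, this prints the six lines exactly as mkconf would have received them.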

For example, this set of commands would normally just be entered on the command line.

chdir /usr/sys/conf
cc mkconf.c
mv a.out mkconf

But to use the EXPECT command, it looks like this, using “;” as the command separator to do this all on a single input line:

expect '#' send 'chdir /usr/sys/conf; cc mkconf.c; mv a.out mkconf\r'; continue

So far so good, but what about something more complex, where we’re even farther down the rabbit hole? Passing input to a program that’s running in the shell, in the emulated OS is one step further in and requires some “old school” shell hackery. For example, what about sending input into the “mkconf” program:

# ./mkconf

It turns out that we can’t just feed this in as multiple EXPECT/SEND combinations due to the need for embedded newlines. Any newlines in the EXPECT script would end the line, any newlines embedded in the SEND strings would also be lost.

This was a head scratcher for an hour, until I remembered some similar problems I’d had years ago doing stream editing (sed) to patch binary program files on the fly instead of re-compiling the source (don’t ask, ugly).

This led me to using the “tr” program to “send” newlines without ever using the newline character in my command.

expect '#' send 'echo \'rkXtmXtcX8dcXlpXdone\' | tr X \\\\012 | mkconf\r'; continue

You can see that the newline character never appears anywhere in the SEND string. The echo command will emit the needed lines, but with X instead of newline. The tr command will replace the X character with an escaped-and-escaped-again 012, the octal for newline. This feeds 6 newline separated strings (lines) into the mkconf program, without ever actually using a newline!
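You can reproduce the core of the trick in any modern shell. Only the escaping differs: the quadruple backslash in the SEND string has to survive both SIMH’s and the V6 shell’s quoting, while here a single `\012` inside quotes is enough:

```shell
# X stands in for newline until tr rewrites it on the way through the pipe
echo 'rkXtmXtcX8dcXlpXdone' | tr X '\012'
# prints:
# rk
# tm
# tc
# 8dc
# lp
# done
```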

After that, it was back to vanilla scripting, until I hit another similar glitch. To add the RK disk to the list of supported devices in the “df” command, you need to edit the source to add 2 lines into an array that lists the supported devices. Interactively, this is pretty trivial (if you know “ed”).

# chdir /usr/source/s1
# ed df.c
# cc -s -O df.c
# cp a.out /bin/df
# rm a.out

And here we are again, needing to enter multiple lines to a program, without using the newline character. It’s “tr” to the rescue again:

expect '#' send 'chdir /usr/source/s1 ; echo \'/rp0/dX.-2aX  "/dev/rk0",X  "/dev/rk1",X.XwXqX\' | tr X \\\\012 | ed df.c\r' ; continue
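With the X markers translated, what gets piped into ed is an ordinary ed script. You can see it by running just the echo/tr half of the pipeline:

```shell
# Decode the SEND string's payload: the same echo | tr pipeline, stopped
# before ed, shows the ed commands that splice the rk entries into df.c
echo '/rp0/dX.-2aX  "/dev/rk0",X  "/dev/rk1",X.XwXqX' | tr X '\012'
# prints:
# /rp0/d
# .-2a
#   "/dev/rk0",
#   "/dev/rk1",
# .
# w
# q
```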

And finally, one last time:

# ed /etc/ttys


expect '#' send 'echo \'1,8s/^0/1/pXwXqX\' | tr X \\\\012 | ed /etc/ttys\r' ; continue

You can find all this hackery (and some other uglier things) in the “buildunix.ini” file in the GitHub repository.

For me, this was a fun trip down memory lane and the weird things we had to do, back when computers were more primitive and yet sometimes more fun.



V6 UNIX – first boot and “driving” the SIMH emulator

In the previous post, I alluded to some “extreme expect hackery” needed to configure and install a new UNIX kernel.

Note: The repository name has changed to: https://github.com/tomperrine/unix-v6-pdp11-simh-gcp

To get the most out of this post, get the files from the GitHub repo to follow along.

During the era of PDP-11 and even VAX UNIX, adding device drivers to the kernel required changing the source code. Specifically, there are a set of data structures that define the mapping from UNIX device numbers to the names of the device driver entry points (C functions). Hence, adding a device driver to the kernel requires source code to be edited and a new kernel compiled from the new sources.

The entire process is pretty well documented here.

To get from “first boot” to “booting from rk disk with all patches applied” takes a few steps:

  1. Boot from “tape” and copy an image of the root filesystem onto a simulated RK disk
  2. Boot from the RK disk, modify source code for several programs such as “df” and the kernel. Compile the new kernel, copy it into place in the root filesystem.
  3. Boot the new kernel from the new root filesystem.

This post will cover step 1 and the SIMH scripting needed to automate this process.

If we could just copy new files from “outside” (Ubuntu OS) into the file system of the “inside” or guest OS (v6 UNIX), none of this would be necessary. However, since we can’t, the only way to create files is by executing commands.

We’ll get started by figuring out how to send commands into a program that’s running inside the OS that’s inside the emulator, then (in a future post) work up to some very old-school UNIX tricks to create the needed files.

The first challenge is to pass commands to the SIMH emulator. This is done simply by giving the emulator a script file on the command line. See this example:

$ more buildunix.ini 
set cpu 11/40
set tto 7b
set tm0 locked
attach tm0 dist.tap
attach rk0 rk0
attach rk1 rk1
attach rk2 rk2
dep system sr 173030
boot rk0

When you start the emulator with that script file, you get this result:

$ ./simh-master/BIN/pdp11 buildunix.ini 
PDP-11 simulator V4.0-0 Current   git commit id: 0de9b628
sim> set cpu 11/40
Disabling XQ
sim> set tto 7b
sim> set tm0 locked
sim> attach tm0 dist.tap
sim> attach rk0 rk0
sim> attach rk1 rk1
sim> attach rk2 rk2
sim> dep system sr 173030
sim> boot rk0

As you can see, we just gave the emulator a list of commands and they were executed by the emulator, which loads and then runs the UNIX bootloader. It’s the bootloader, running in the emulator, that presents the “@” prompt. At this point, the emulator stops passing its input file to the console, leaving us “stranded” at the boot prompt. The bootloader needs the name of the kernel to load, which would normally just be entered by the user. Since we want to automate the entire process, we need to find a way to enter data that isn’t just a line in the script.

The emulator provides a way to SEND input into the running programs, using its own internal implementation of “expect”. This means that we can use EXPECT/SEND combinations to enter information in the programs that are running inside the OS that’s running inside the emulator. Clear as mud, right?

The “hook” is to set up an EXPECT/SEND combination BEFORE we enter the boot command, so that when the boot command executes and presents the “@” prompt, the emulator knows what to send in response. Now the script looks like this:

$ more buildunix.ini 
set cpu 11/40
set tto 7b
set tm0 locked
attach tm0 dist.tap
attach rk0 rk0
attach rk1 rk1
attach rk2 rk2
dep system sr 173030
: this sets up the rkunix information to be sent later,
: after we enter the boot command
expect "@" send "rkunix\r"; continue
boot rk0

Which results in something more like this:

$ ./simh-master/BIN/pdp11 buildunix.ini 
PDP-11 simulator V4.0-0 Current   git commit id: 0de9b628
sim> set cpu 11/40
Disabling XQ
sim> set tto 7b
sim> set tm0 locked
sim> attach tm0 dist.tap
sim> attach rk0 rk0
sim> attach rk1 rk1
sim> attach rk2 rk2
sim> dep system sr 173030
sim> boot rk0
mem = 1035

Use, duplication or disclosure is subject to
restrictions stated in Contract with Western
Electric Company, Inc.

And now we have UNIX running inside the emulator, and a command prompt. At this point, our terminal is attached to the UNIX shell, and we can start to manually enter commands. But that’s not enough. We have an entire set of commands that we need to enter to add the new device drivers to the source code so we can compile a new kernel. If it were a single shell script, it would be about 70-90 lines, and of course, we still can’t just copy files into the V6 file system.

Next time, entering all the commands needed to configure and build a new kernel.



PDP-11 running UNIX v6 in Google Compute Platform (GCP) using SIMH

Wow! This post is months overdue! I blame work, more work, Destiny 2, other work, the Edinburgh Fringe Festival, other other work, and beer.

This post is a quick overview of my GitHub repo (pdp-11-in-gcp) and how it works to create a fully functional UNIX system from 1976 (UNIX V6) “in the cloud”. It has everything you need to run your own piece of UNIX history.

For the most part, this is an automation of the instructions from http://gunkies.org/wiki/Installing_Unix_v6_(PDP-11)_on_SIMH

This assumes that you have a functioning GCP account with billing enabled, and have at least skimmed earlier posts in this series.

This repo includes several scripts and configuration files:

* launch-pdp11.sh – The master script creates a place to run the SIMH emulator, and builds the emulator. Part of this process is loading another script on to the GCP instance.

* update-os-build-simh.sh – This script is copied on to the GCP Ubuntu instance and gets the SIMH PDP-11 emulator running in the instance. When this script completes, you have a running Ubuntu system with a PDP-11 emulator ready to install v6 UNIX.

The end of the launch-pdp11.sh script provides instructions on how to install V6 UNIX into the emulator. This requires manually running three commands while logged into the GCP Ubuntu instance. Due to limitations of EXPECT, there are a few places where you will need to manually halt the emulator (^E).

* simh-master/BIN/pdp11 tboot.ini – This starts the emulator and does a “tape boot” from an emulated tape image and copies the minimal root filesystem on to the emulated RK disk (which is a file on the Ubuntu host).

* simh-master/BIN/pdp11 buildunix.ini – This script uses extreme expect hackery to do LOTS of customization of the kernel to support an RK disk.

* simh-master/BIN/pdp11 normalboot.ini – boots the fully functional PDP-11 with all software. Use this for all subsequent boots of the UNIX guest.

One of the most fun parts of this project was dealing with SIMH’s internal EXPECT function. In the “olden days” you had to change the kernel source code to configure tables for each device driver that you wanted included in a new kernel.  I’ll show some of that in the next post.



Working in Tokyo this week…

I’m in Tokyo this week working on some global projects. Here’s a panoramic nighttime view of Shinagawa from the hotel’s 16th floor.

Tokyo skyline from Shinagawa
A panorama of Tokyo from Shinagawa


Scripting a fast Ubuntu install in Google Cloud Platform (GCP)

In this post I’ll show how to script GCP instance creation, Ubuntu installation and patching in order to support the customized SIMH installs that we’ll do later.

All of my GCP/SIMH installs are based on Ubuntu Linux, running on tiny or small GCP instances. Since one of my goals is quick iteration and making it fast and easy for other people to install the SIMH emulator and the guest OSes, I’ve scripted everything. I’ve been a fan of infrastructure-as-code for two decades, so how could I not apply that to my GCP estate?

For this we need four scripts:

  • create-instance – create an instance, install and patch Ubuntu
  • stop-instance – stop (pause) the instance, preserving the instance state (boot volume)
  • start-instance – (re)start the instance from the saved state
  • destroy-instance – destroy the instance (which deletes the associated boot volume)

All of the examples start with a common Linux base in GCP, so it made sense to script a fast Ubuntu install and update.  While I could use a common SIMH install for almost all the guest operating systems, it makes sense to keep them separate so that people can install just the single OS that they want to play with, instead of them all.

These examples all assume that you have created a Google Cloud account, created at least one project, and enabled billing for that project. You may want to start with these tutorials.

You also need to set a few environment variables as described in this earlier post.

Everything below should be self-explanatory. Essentially, the main steps are to create the instance, then wait for the instance to be up and running. After that, another loop waits until the SSH daemon is running, so that some commands (apt-get update and apt-get upgrade) can be run.


# given a GCP account and the SDK on the install-from host, build and install a new server

. ./set-cloud-configuration.sh

# If you don't use ssh-add to add your key to your active ssh-agent
# you're going to be typing your passphrase an awful lot

# create the instance
gcloud compute instances create ${INSTANCENAME} --machine-type=${MACHINETYPE} --image-family=${IMAGEFAMILY} --image-project=${IMAGEPROJECT}
gcloud compute instances get-serial-port-output ${INSTANCENAME}

# add the oslogin option so I don't need to manage SSH keys
gcloud compute instances add-metadata ${INSTANCENAME} --metadata enable-oslogin=TRUE

# it can take some time, and sometimes(?) the create returns much faster than expected, or the system
# takes a long time to boot and get to the SSH server, so wait for it to be RUNNING
SSHRETURN=""
while [[ "RUNNING" != "${SSHRETURN}" ]]; do
    SSHRETURN=$(gcloud compute instances describe ${INSTANCENAME} | grep status: | awk '{print $2}')
    sleep 5
done
echo "instance running..."

# now wait until the SSH server is running (the ssh command returns success)
SSHRETURN=1
while [[ ${SSHRETURN} -ne 0 ]]; do
    gcloud compute ssh ${CLOUD_USERNAME}@${INSTANCENAME} --project ${PROJ} --zone ${CLOUDSDK_COMPUTE_ZONE} -- hostname
    SSHRETURN=$?
    sleep 3
done
echo "SSH up and listening..."

# All we have is a "naked" Ubuntu, so it's always a good idea to update and upgrade immediately after installation
gcloud compute ssh ${CLOUD_USERNAME}@${INSTANCENAME} --project ${PROJ} --zone ${CLOUDSDK_COMPUTE_ZONE} -- sudo apt-get --yes update
gcloud compute ssh ${CLOUD_USERNAME}@${INSTANCENAME} --project ${PROJ} --zone ${CLOUDSDK_COMPUTE_ZONE} -- sudo apt-get --yes upgrade
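The readiness test above hinges on pulling the `status:` field out of the `gcloud compute instances describe` output with grep and awk. The parsing half of that loop can be exercised on canned output (the sample text below is illustrative only — the real command queries GCP):

```shell
# Canned approximation of 'gcloud compute instances describe' output
describe_output='name: simh-ubuntu-instance
status: RUNNING
zone: us-central1-f'

# Same grep | awk pipeline as the wait loop uses
STATUS=$(printf '%s\n' "$describe_output" | grep status: | awk '{print $2}')
echo "$STATUS"
# prints: RUNNING
```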


The start, stop and destroy shell scripts are much simpler.

All the code is available in my github repo: https://github.com/tomperrine/create-simple-google-instance



Setting configuration variables for the SIMH instance in Google Compute

In this short installment, we’ll create a BASH script that will be re-used as we script the creation of the Linux instance, SIMH installation and guest OS installation.

This assumes that you’ve followed the prior posts in the series, and have a functioning Google Cloud account, with a project created, and billing enabled. You need billing enabled even if you’re using the “free tier” or your initial account credit.

There are (for now) three things we need to have set up: account information for logging in, a project name, and a description of the instance we want to run. The description includes the physical location (region/zone) and the operating system we want.

This simple script will set the variables that we will want and can be included into all the other scripts we’ll write later.

Save this as set-cloud-configuration.sh

# set user-specific configuration info
# we're going to use "oslogin" so set a username
# THIS MUST MATCH your GCP account configuration
# see https://cloud.google.com/compute/docs/instances/managing-instance-access for details
export CLOUD_USERNAME=your_gcp_oslogin_username

# Set project information - this project MUST already exist in GCP
# This project MUST have billing enabled, even if you plan to use the "free" tier
export PROJ=retro-simh
gcloud config set project ${PROJ}

# set configuration info for this instance
# pick a region
export CLOUDSDK_COMPUTE_ZONE="us-central1-f"
# set information for the instance we will create
export INSTANCENAME="simh-ubuntu-instance"
export MACHINETYPE="f1-micro"
export IMAGEFAMILY="ubuntu-1804-lts"
export IMAGEPROJECT="ubuntu-os-cloud"
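The leading dot in `. ./set-cloud-configuration.sh` is what makes this work: sourcing runs the script in the current shell, so its exported variables persist for the commands that follow, whereas running it as a child process would not. A throwaway demonstration (the file name here is arbitrary):

```shell
# Write a tiny config script, source it, and the export is visible here;
# running it with 'sh /tmp/demo-config.sh' instead would set nothing.
cat > /tmp/demo-config.sh <<'EOF'
export INSTANCENAME="simh-ubuntu-instance"
EOF
. /tmp/demo-config.sh
echo "$INSTANCENAME"
# prints: simh-ubuntu-instance
rm /tmp/demo-config.sh
```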

In order to continue with the series, you’ll need to make sure you have enabled billing AND configured “oslogin”.

You should also make sure you have ssh-agent running, unless you want to type your password, a lot.

In the next installment, we’ll create, stop, start and destroy GCP instances in order to prepare for compiling and running SIMH.



Retrocomputing – Multics


For the past few months, I’ve been using the dps8m fork of SIMH to create and run Multics, one of the first operating systems I ever used, and one of my favorites. I’ve also built a completely automated process to install Multics in “the cloud”, so that others can play with this piece of Internet history. I’ll show how that works in some future posts.

Around 1973 I encountered my first computer, running GCOS (AKA GECOS), thanks to Honeywell and Explorer Post 414 in Phoenix. After “we” “discovered” quite a few security problems with GCOS Timesharing, Honeywell management and our Boy Scout leaders decided to move us all to Multics, as it was a much more secure platform.

Multics has an interesting place in computer science history. It wasn’t the first timesharing (interactive) system, it wasn’t the first to have virtual memory, it wasn’t the first to be primarily written in a higher level language, and it wasn’t the first to be designed and developed with security as a primary goal. It wasn’t open source, although every system did ship with complete source code, something that was rare among operating systems of the era.

But it was the first operating system where all these things (and many more) came together.

It’s fair to say that without Multics, there would have been no UNIX, and therefore no MINIX and no Linux.

A lot has been written about Multics by the people who created and ran it.


Using SIMH in Google Compute to retrace my (UNIX) OS journey

After being introduced to SIMH and getting Multics running, I thought about using SIMH to retrace the steps (and operating systems) that I’ve used in my career. For now, I’ll focus on the UNIX and UNIX-derived systems.

Before coming to UNIX, I had already used Honeywell GECOS, Multics, CP-V and CP-6, as well as DEC’s VMS and TOPS-10. My first UNIX experience was Programmer’s Workbench (PWB) UNIX, an interim version between Versions 6 and 7.

But after that I used 4BSD, SunOS, UNICOS, HP-UX, Domain/OS, SGI IRIX, and a host of other UNIX-flavored systems until finally coming to Linux. Along the way I helped extend or create two security kernels – KSOS-11 and KSOS-32.

So my plan is to bring up as many of these operating systems up as possible using SIMH, and focusing on the UNIX family.

Here’s the dependency graph of what I have in mind to begin, and it’s a roadmap for the rest of this series. I have no idea how long it will take, or how far I’ll get.

To date, I’ve got Multics and V6 UNIX, so I’ll show the tooling for those first. Using this information, you should eventually be able to run any OS for which a SIMH emulator exists for the CPU, and for which you can find a bootable or installable image.


