Part 3/3: Building an automated building platform for Docker

I have been playing around with Docker for a while now and thought it might be time to document some of this. Today we will cover off configuring the Git web service (Gogs), then Jenkins (a continuous integration server) and finally our Docker service.

At this point you should have an up to date Ubuntu 16.04.1 (or similar) install, with GOGS, Jenkins, and Docker up and running.

In this part we are going to create a Hello World nodejs app, then upload it to GOGS. After this we will build a Jenkins job and then use a GOGS web hook to fire the job whenever we push updates to our code. Finally, Jenkins is going to be a good chap and build our Docker container and upload it to our private repository; however, he could also upload it to any repository (e.g. the Amazon EC2 Container Registry or the official Docker Hub).

Creating a project repository and uploading some code

So now we have the software in place to write code and have it dropped into a container, ready to either upload to our own registry for test or push to the cloud for production (e.g. Amazon, Azure, Docker Hub, or any other cloud provider).

First off let’s build ourselves a simple Hello World app, to do this we will use nodejs.

Create a new project directory on your machine called Demo

Inside this directory, we need to create a file called server.js

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('Hello World!')
})

app.listen(3000, function () {
  console.log('Example app listening on port 3000!')
})
When you browse to http://<server_ip>:3000/ you will get the response of Hello World!

Next, we need to create a file called package.json with the following contents

{
  "name": "demo",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "dependencies": {
    "express": "^4.14.0"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node server.js"
  },
  "keywords": [],
  "author": "Demo Guy",
  "license": "ISC"
}
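A quick way to confirm a package.json actually parses before you commit it is to require it with node. This sketch assumes node is installed and uses a cut-down example file, not the full one above:

```shell
# Sketch: sanity-check that a package.json is valid JSON (assumes node installed;
# the file written here is a cut-down example)
cd "$(mktemp -d)"
cat << 'EOF' > package.json
{ "name": "demo", "version": "1.0.0", "main": "server.js" }
EOF
node -e 'console.log(require("./package.json").name)'
```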

Our final file is the .gitignore file


At this point you may be wondering what these files are; if you're familiar with node you may skip this next bit, if not read on.

The first file was server.js; this is our Hello World application, and it holds the javascript that node interprets to create the response to our browser's GET request.

The next file was the package.json file; this file contains many things and is effectively the environment configuration/descriptor file for the project. For more information on this, check out the npm documentation on package.json.

The last file is .gitignore; this contains patterns that git uses to determine what to exclude from version control. The above is a basic list and you can add to it; for example, config.js might be a file you wish to ignore.
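You can confirm your patterns behave as expected with git check-ignore. The sketch below assumes git is installed and uses two patterns that are typical for node projects, not necessarily the exact list from this post:

```shell
# Sketch: verifying .gitignore patterns with git check-ignore in a scratch repo
# (node_modules/ and *.log are example patterns, common for node projects)
cd "$(mktemp -d)"
git init -q .
printf 'node_modules/\n*.log\n' > .gitignore
mkdir -p node_modules
touch node_modules/express.js debug.log server.js
git check-ignore node_modules/express.js debug.log    # both match a pattern
git check-ignore -q server.js || echo "server.js will be committed"
```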

Configuring our GOGS repository

The next step is to head to GOGS to create a repository


Then we need to fill in the details of the repository.


GOGS now gives you a rundown on how to configure your git client to upload to it.


Now that we have our demo project repository up we need to configure git to push it up to GOGS.

To do this part you are going to need git; if you don't have a git client, head over to the git website and grab one.

*I recommend adding git to your path to make it easier for you to use.

Open the command prompt on your local machine and change directory to your demo project folder

Once there, we need to apply the git configuration as shown in GOGS, with a slight change shown below (the change is git add * instead of adding only the README file)

git init
git add *
git commit -m "first commit"
git remote add origin http://<myserverfqdn>:3000/chris/Demo.git
git push -u origin master

Important things to note here: once finished, a branch called master will be created. In git you can have multiple branches for the same repository. Branches can be used in many different ways; for example, you may choose to have dev and prod as your branch names, or maybe main and crazyidea (for that wild idea you have that may not work).
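Creating a second branch is a one-liner. The sketch below runs in a scratch repo (assumes git is installed; the dev name follows the dev/prod example above):

```shell
# Sketch: branching off master for development work, in a throwaway repo
cd "$(mktemp -d)"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo Guy"
echo hello > README.md
git add README.md
git commit -qm "first commit"
git checkout -qb dev                 # create and switch to a dev branch
git rev-parse --abbrev-ref HEAD      # shows the branch you are now on
```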




If you look back at GOGS your project has now been uploaded and committed as version 1.


Now you're done with GOGS for the moment; head over to Jenkins where we will configure the build job and make Jenkins earn its keep on the farm.

Jenkins Configuration

On the Jenkins dashboard, we need to add a new plugin for GOGS to talk to Jenkins and ask it nicely to build stuff.


Click Manage Jenkins and then click Manage Plugins


Now click Available and then type gogs into the filter field. You should now see the gogs plugin


Click the checkbox next to the plugin and click Install without restart


Jenkins will now download and install the GOGS plugin


Once the plugin is installed it will show GOGS plugin Success

You now need to build the first job, which we are going to call Demo because that will be a very descriptive name

Back on the dashboard hit create new job


Enter the Job name as Demo. You will now see a huge range of job types available; feel free to check them out later, but for now we will build a freestyle job and then click OK.


You will now be able to fill in the details for the job.

First up you need to fill in the Name and the description. Take note of the name we will need this back over in GOGS in a minute.

You also need to create a GOGS secret, this will be required for the webhook.


The next part of the configuration requires us to fill in the git source code section. From the GOGS repository page you will find the repository URL highlighted in red below


Copy and paste this into the git repository URL field. We then need to press the Add button and select Check out to a sub-directory


The end result should look like below


Now we need to create the build steps. You have 2 choices here: you can head back to the plugin manager and install the Docker build plugins, or you can execute shell commands. I will be executing shell commands today; however, the plugin is well worth learning if you have time.

First up we configure the build environment options. I recommend cleaning the workspace before starting, and highly recommend timestamping the logs.


We then need to build two shell execution steps: the first creates the .dockerignore file and the Dockerfile, whilst the second executes Docker commands to build our application.


The code block is as follows

touch Dockerfile
cat << EOF >> Dockerfile
FROM ubuntu:latest
MAINTAINER Demo Guy
EXPOSE 3000

COPY project /opt/webserver
RUN apt-get update; apt-get install nodejs npm -y; apt-get clean; cd /opt/webserver; npm install
CMD cd /opt/webserver; nodejs .
EOF

touch .dockerignore
cat << EOF >> .dockerignore
EOF
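If the cat << EOF construct (a heredoc) is new to you, the sketch below shows what it does on a throwaway file: everything between the cat line and the closing EOF marker is appended verbatim.

```shell
# Sketch: how the heredoc used above works, demonstrated on a throwaway file
f=$(mktemp)
cat << EOF >> "$f"
FROM ubuntu:latest
EOF
cat "$f"    # the file now contains exactly the line between the markers
```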

The second shell execution step looks like this


The code block looks like this

docker build -t demoguy/demo:${BUILD_NUMBER} .
docker tag demoguy/demo:${BUILD_NUMBER} demoguy/demo:latest
docker tag demoguy/demo builder.tower.local:5000/demoguy
docker push builder.tower.local:5000/demoguy
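The ${BUILD_NUMBER} variable above is injected by Jenkins at run time, which is what gives every build a unique image tag. The sketch below uses 42 as a stand-in value to show how the tag is assembled:

```shell
# Sketch: how Jenkins' BUILD_NUMBER environment variable drives the image tag
# (42 is a stand-in; Jenkins provides the real build number at run time)
BUILD_NUMBER=42
IMAGE="demoguy/demo:${BUILD_NUMBER}"
echo "$IMAGE"    # demoguy/demo:42
```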

Finally, save the job

Just before we head back to GOGS to wrap this post up, let's take a few seconds to look at what is happening in these 2 code blocks

In the first code block there were a handful of keywords in the Dockerfile: FROM, MAINTAINER, EXPOSE, COPY, RUN, and CMD. We will break out what each of these means below:

FROM This tells Docker what image to base our image on
MAINTAINER This holds the details of the maintainer of the image
EXPOSE This instructs Docker that the container listens on the given ports, so they can be exposed.
COPY This copies files from the build context (our checked-out project) into the image.
RUN These are commands run at build time; you can have multiple RUN commands, but every RUN command creates a new layer and therefore increases the size of the image.
CMD Similar to RUN except that it runs on container start, so it acts like the entrypoint; the important thing to note here is that this command can be overridden by passing a different command when you start the container.

Finally, we need to head back to GOGS to our repository and find web hooks under Settings


Under Settings you will find Webhooks


Click Webhooks, then Add Webhook, and then GOGS


Now we need to call our job using a special URL


This is in the format of

http://<Jenkins FQDN>:<Jenkins port>/gogs-webhook/?job=<Job Name>
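As a concrete sketch, here is the URL assembled from its parts. The host, port, and job name below are example values for this walkthrough, and gogs-webhook is the path the Jenkins Gogs plugin listens on:

```shell
# Sketch: assembling the webhook trigger URL (example values for this post)
JENKINS_HOST="builder.tower.local"
JENKINS_PORT=8080
JOB_NAME="Demo"
echo "http://${JENKINS_HOST}:${JENKINS_PORT}/gogs-webhook/?job=${JOB_NAME}"
```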

Finally, we are only interested in push events


Now we need to test our web hook by pressing Test Delivery.


On success you should see a tick; if you see a triangle then there was a problem (see the bottom of this post for what that might look like).


Back on the Jenkins screen you should see your job spring to life and start running


Once done you will see the Build status go back to idle



Gogs Webhooks

In GOGS I mentioned that you may get a triangle; let's take a look at how to diagnose that problem. If we look at the image below, we can see our Webhook has failed.


If we click the UID starting with fb9e8e2d in the above image, we will see a breakdown of the task history. In the picture below the response returned was no route to host; on this occasion the FQDN was misspelt and therefore unresolvable



Troubleshooting Jenkins

A similar process can be followed with Jenkins: if you click on the job, you can find the build history on the side; failures are indicated in red.


If you click #1 to take a look at that attempt you will be able to click the console output, this output is all of the output captured whilst the tasks were run.


Doing so will show an output similar to below, where we will be able to identify that the git pull failed and therefore the Docker build failed to copy the application



Hopefully this mini-series will help somebody out there in the wider world.

Part 2/3: Building an automated building platform for Docker

I have been playing around with Docker for a while now and thought it might be time to document some of this. Today we will cover off installing a Git Web Service (Gogs), Docker, then Jenkins (CI/Automation) for running our build instructions and finally configuration of a Docker repository for our built containers.

At this point you should have an up to date Ubuntu 16.04.1 (or similar) install, looking similar to below


Your next step is to login to the server using the credentials you have previously configured

GOGS Install

Now that you’re on the command line, we need to install Gogs

Start by creating the gogs user (however, you can use whatever username you wish, e.g. git)

sudo useradd -m gogs

Next, we need to build out our directory structure, to do this run the following commands

cd /srv
sudo mkdir gogs-repositories
sudo chmod 770 gogs-repositories
sudo chown gogs:gogs gogs-repositories
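If the 770 mode is unfamiliar, the sketch below demonstrates it on a scratch directory: the owner and group get full access, everyone else gets nothing. (This is just an illustration; the commands above already set it on the real directory.)

```shell
# Sketch: what mode 770 means, demonstrated on a scratch directory
d="$(mktemp -d)/gogs-repositories"
mkdir "$d"
chmod 770 "$d"
stat -c '%a' "$d"    # prints 770: rwx for owner and group, nothing for others
```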


Now we have our directories in place, we are ready to download gogs and install it. Head over to the gogs website and find the release downloads.


There is a prerequisite that needs to be installed prior to gogs, and that is git.

On the command line type the following

sudo apt-get update
sudo apt-get install git


You may find git has already been installed

The next step is to set up our bin directory for gogs. I always install my custom tarballs to /opt as it makes them easy to find. You can also run gogs as a docker container, but today we won’t be doing this.

Let’s build that directory now by doing the following

sudo mkdir /opt/gogs
sudo chown gogs:gogs /opt/gogs
cd /opt/gogs/


Next, we need to head back to the website and find the tarball for our distribution and then do the ole right click and copy link location.



Now we should be able to download this into our home directory and then unarchive it into /opt/gogs

sudo wget


Now you need to untar the tarball

cd /opt
sudo tar zxvf ~/gogs_v0.9.113_linux_amd64.tar.gz


Follow this up with an ownership change

sudo chown gogs:gogs gogs


Now we need to get our init script sorted out so gogs will start on boot

To do this

cd gogs/scripts/init/debian
sudo cp gogs /etc/init.d/
cd /etc/init.d
sudo chmod 755 gogs
sudo chown root:root gogs


Now we need to adjust our init script to our environment

sudo nano gogs

In here we are looking to change the WORKINGDIR to our gogs binary home /opt/gogs, and our user from git to gogs (or whatever username you selected back at the start).


Save and exit

Finally, we need to configure systemctl to load our init script on boot

sudo update-rc.d gogs defaults


Finally start gogs with

sudo service gogs start

and you’re ready to configure it

On your browser head to http://<GOGS FQDN>:3000/

The first time you start Gogs and head to the website you will be greeted with the installer, it looks a lot like this


First you need to select your database back end, I will be going with SQLite, but for bigger installs you will probably want MySQL.

You will also want to set the repository root path to /srv/gogs-repositories


If you require mail functionality, then you will need to set this too


Finally, turn the twisty down on the Admin Account Settings and fill in the fields to build your admin account, then hit Install Gogs. Congrats, your gogs install is complete.



Docker Install

Now, we need to create the Docker repository directory, fortunately this is a quick process

Back on the command line

You need to first install Docker (note: on Ubuntu the package is docker.io; on other distributions it may simply be docker)

sudo apt-get update
sudo apt-get install docker.io

Next, we need to configure our repository directory

sudo mkdir /srv/docker-repo
sudo chmod 770 /srv/docker-repo
sudo mkdir /srv/certs
sudo chmod 770 /srv/certs


Now we need to configure our certificates for our Docker registry, you can use self-signed SSL certificates here or you can head over to the good folks at StartSSL and grab a free one

I won’t cover generating SSL certificates in this blog but if you require help in this area I recommend checking out the Ubuntu documentation here

Finally, we need to configure the Docker registry container as follows

docker run -d -p 5000:5000 --restart=always --name registry \
-v /srv/docker-repo:/var/lib/registry \
-v /srv/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2

Should you be putting multiple instances up, there are 2 suggestions. Firstly, Docker recommend in their best practice guide that you don’t use local file system storage, and instead use S3, Google Cloud Storage, etc. as your storage engine. Secondly, add -e REGISTRY_HTTP_SECRET=<secret> across your fleet to guard against upload issues.

At this point it’s worth testing the repository to confirm that your files are heading to where you expect; to do this we will pull an image from the official repo and push it to our new repo.

sudo docker pull busybox:latest
sudo docker tag busybox:latest localhost:5000/busybox
sudo docker push localhost:5000/busybox


Checking /srv/docker-repo should now contain the repository. Typing

sudo ls /srv/docker-repo/

should show this

Confirm that your repository is talking by typing

curl https://<server FQDN>:5000/v2/_catalog

or browsing to that location. You should see the word repositories and busybox listed there.
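The sketch below shows the shape of the JSON the catalog endpoint should return, assuming busybox is the only image pushed so far (a real check would curl the registry itself, as above):

```shell
# Sketch: the expected shape of the v2 _catalog response with one image pushed
response='{"repositories":["busybox"]}'
echo "$response" | grep -o '"busybox"'    # confirms busybox appears in the list
```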

Jenkins Install

Finally, we now need to configure Jenkins to do the leg work for us (automation). From the command line (following the Jenkins install guide):

wget -q -O - | sudo apt-key add -
sudo sh -c 'echo deb binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install openjdk-8-jdk openjdk-8-jre openjdk-8-jre-headless jenkins


Next step is to add Jenkins to the docker group

sudo usermod -a -G docker jenkins

then start and confirm the service is running

sudo service jenkins start
sudo service jenkins status


Next up you need to configure Jenkins; you’re going to need the initial admin password, and the following command will get it for you.

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

now head to the website http://<server FQDN>:8080/


Fill in the admin password and hit continue.

Next you are given the option to install all plugins or manually select them. I would recommend installing all the plugins so that you get a feel for what Jenkins is capable of


The installer will now go about installing the plugins.



Once this process is complete you will be asked to create the first admin user. Then hit Save and Finish.


Jenkins is now installed and ready to go


You should now be logged in and looking at the dash board


Part 1/3: Building an automated building platform for Docker

I have been playing around with Docker for a while now and thought it might be time to document some of this. Today we will cover off configuring a VM and installing an Operating System.

Step 1: Building the Virtual Machine

First up we need to build a VM for our platform. I have chosen Ubuntu because of its relatively up to date packages, but you can use any distribution you like. If you are familiar with this process, you’re welcome to skip on to part 2

    Build a machine with the following specs (or higher)

  • CPU: 2 x vCPUs
  • Memory: 1024MB
  • 1 x network card
  • HDD: (choose your own layout as you desire; this is what I have done)

    Filesystem   Type   Size   Mounted on
    /dev/sda1    ext4   16G    /
    /dev/sdb1    ext4   20G    /home
    /dev/sdc1    ext4   9.8G   /var
    /dev/sdd1    ext4   20G    /srv
    swap         swap   5G     swap

I have chosen to break out /srv which will be used for the Docker and Gogs repositories, as well as the certificate storage for the Docker repository engine.

You have a load of choices on the hypervisor side should you go the VM route, or you could install to bare metal. I’ve chosen VMware ESXi as I have a long-standing history with their hypervisor, and I will be accessing it from VMware Workstation (mainly because I’m too lazy to use the web client).

Create your VM, mine is shown below

I have chosen to build a custom VM


I have set the compatibility to the highest hardware compatibility available, which is 11 (since I am using Workstation to build it on my ESX platform).


Next you will need to select Linux as your guest OS and then Ubuntu Linux (64-bit) as your version


Next you need to name your VM


Then select whether you need an EFI or BIOS for the VM firmware; I have selected BIOS


The Next step is to select the number of processors required, I would recommend at least 2 for your VM.


The next step is to determine the amount of RAM required. GOGS is very lightweight, whilst Jenkins and Docker are load dependent. I have started out with 1GB to get going.


The next step is to select your network connection


Below is the default / recommended, so we will go with that.


For the disk type, leave this as SCSI.


Then create new disk


For this exercise 16GB will be fine, this will later serve as our / mount point


Next provide a name.


Finally, our machine customisation is finished


Next edit the Virtual Machine and click Add


Select Hard Disk and Next


Next you need to provide a name for the disk, this field is pre-filled.

*If you are using Workstation to build this on ESX there is an annoying bug where the number on the end of the name doesn’t increment with each new disk you add, so you have to adjust this yourself if you’re adding extra disks.


Repeat the above steps to add the additional drives that you require. I have stuck with thin provisioning, with the exception of the drive I will be using for swap; that drive I have pre-allocated.

Finally, you need to configure the CD-ROM drive to point to your Ubuntu install disk, this can either be local ISO on your machine or one on the server.



Step 2: Install Ubuntu

Now boot the new VM and the Ubuntu installer should boot.

Select your language and hit enter


Hit enter to begin the installation process


Strangely enough you need to confirm your language again


Next select your location


Now we need to determine our keyboard layout, the default is US




Now the system will begin to load up the networking components required for the next part of the install


Now a name needs to be assigned to the machine


You are now required to add the first user, the reason for this is in Ubuntu you are required to use a normal user account and use sudo to elevate your permissions when required.






The server now has enough information to move on with the setup.


The next step is to set the time-zone for the VM.


The next main step is to configure the hard disk layout


At this point I have selected manual configuration to allow me to configure the disks how I want them.


The layout you see before you now has all of your disks visible and unpartitioned. Select the first disk and press enter.


The installer will now ask you if you wish to create a new partition table on the disk


After selecting yes, you will be back on the layout screen and you should notice the shiny new partition table.


Next you will need to create a partition on your new disk. Select the new table and then when asked select create new partition


I have followed normal hypervisor principles of one partition per disk, but you may choose what you will.


Select either primary or logical, I recommend primary.


Now you need to configure the file system type, by default on 16.04.1 this will be ext4, also the first partition you configure will be /.


Repeat the process of configuring the partitions for each of your drives, including the last one, where you need to set the file system type to swap or you will receive a warning about not having swap.


Finally, after you have selected Finish partitioning and write changes to disk the installer will ask for confirmation.


The installer will now go ahead and partition the disks and format them before continuing on with the install process.



Ubuntu will now confirm if you need to access the internet via a proxy, if you do please configure this below.


The next step in the install is to confirm whether you wish to have updates automatically installed. For play environments this is probably okay; if you are installing this as part of a business system then you will probably want to configure Landscape.


The next step is to install additional components, I normally install the OpenSSH server here.


Finally, we arrive at the GRUB installer, normally in a VM environment you would just install the GRUB boot loader to the master boot record of the first disk.


To make life easier install the GRUB boot loader to the same disk that you put the / mount point on, usually this is going to be /dev/sda



Congratulations you have now finished installing Ubuntu and the system is now ready to reboot and start up.


Once booted you should see the login console as shown below.


To finish off the fresh install we need to ensure we are running the most up to date kernel etc.

At this point it’s worth confirming that your guest tools are reporting as running, you can check this from your hypervisor management console.

Login as your new user, mine was bob.

I recommend you modify the sources to prevent a warning during updates due to the CD-ROM being unmounted; to do this, type the following.

sudo nano /etc/apt/sources.list

Comment out the CD-ROM entries with a #
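If you would rather not edit the file by hand, the same change can be made non-interactively with sed. The sketch below runs against a sample file (the real target is /etc/apt/sources.list):

```shell
# Sketch: commenting out a cdrom entry with sed, shown on a sample sources file
f=$(mktemp)
printf 'deb cdrom:[Ubuntu 16.04.1 LTS]/ xenial main\ndeb http://archive.ubuntu.com/ubuntu xenial main\n' > "$f"
sed -i 's/^deb cdrom/# deb cdrom/' "$f"
grep -c '^# deb cdrom' "$f"    # one line was commented out
```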

Next, we need to update the system. Run the following:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

Finally, we should configure a static IP for our new system

cd /etc/network/
sudo nano interfaces

now configure your interface similar to below, ensuring it is correct for your network. The address, netmask, gateway, and dns-nameservers lines are the parts that need adjusting for your environment.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens160
iface ens160 inet static
address <your static IP>
netmask <your netmask>
gateway <your gateway IP>
dns-nameservers <your DNS server IP>

then save and exit and type

sudo reboot

Bind9 issues when upgrading from ubuntu 14.04 LTS to 16.04.1 LTS

If you, like myself, have begun the upgrade to 16.04.1 LTS from 14.04 LTS, you will likely find a number of configuration issues post upgrade. One such issue is that your Bind9 / Samba integrated AD solution no longer works. A quick look over the logs using grep (e.g. grep -i "named" /var/log/messages) will produce an output similar to this

Aug 26 07:36:07 AServer named[3594]: —————————————————-
Aug 26 07:36:07 AServer named[3594]: BIND 9 is maintained by Internet Systems Consortium,
Aug 26 07:36:07 AServer named[3594]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
Aug 26 07:36:07 AServer named[3594]: corporation.  Support and training for BIND 9 are
Aug 26 07:36:07 AServer named[3594]: available at
Aug 26 07:36:07 AServer named[3594]: —————————————————-
Aug 26 07:36:07 AServer named[3594]: adjusted limit on open files from 4096 to 1048576
Aug 26 07:36:07 AServer named[3594]: found 2 CPUs, using 2 worker threads
Aug 26 07:36:07 AServer named[3594]: using 2 UDP listeners per interface
Aug 26 07:36:07 AServer named[3594]: using up to 4096 sockets
Aug 26 07:36:07 AServer named[3594]: loading configuration from ‘/etc/bind/named.conf’
Aug 26 07:36:07 AServer named[3594]: /etc/bind/named.conf:10: open: /var/lib/samba/private/named.conf: permission denied
Aug 26 07:36:07 AServer named[3594]: loading configuration: permission denied
Aug 26 07:36:07 AServer named[3594]: exiting (due to fatal error)

The issue above is clearly a permissions issue (Right!??). The first thing you will discover is that bind no longer runs as root (it’s now running as the bind user), so that’s the first thing to fix. The lazy man’s approach here is to loosen permissions (e.g. chmod o+r) until your bind daemon can read the file; no points for guessing what was done here.

Items I found needed fixing were:-
– fix the permissions for named.conf in private
– fix permissions in /etc/bind

In addition to this, 16.04 ships an enforcing AppArmor profile for named, so you will need to add / check rules for this. I did find a forum post which details the answer below. Original Post

vi /etc/apparmor.d/local/usr.sbin.named

# Site-specific additions and overrides for usr.sbin.named.
# For more details, please see /etc/apparmor.d/local/README.
/var/lib/samba/private/named.conf r,
/var/lib/samba/private/dns.keytab kwr,
/usr/lib/samba/** m,
/var/lib/samba/private/dns/** krw,
/var/tmp/** krw,
/dev/urandom rw,

From memory I restarted AppArmor at this point

sudo systemctl restart apparmor

Once you have those pesky permissions sorted you’re going to be under the belief that all is well, and you would be wrong. However, if your messages output looks similar to below you’re on the right track.

Aug 26 07:45:07 AServer named[3841]: —————————————————-
Aug 26 07:45:07 AServer named[3841]: BIND 9 is maintained by Internet Systems Consortium,
Aug 26 07:45:07 AServer named[3841]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
Aug 26 07:45:07 AServer named[3841]: corporation.  Support and training for BIND 9 are
Aug 26 07:45:07 AServer named[3841]: available at
Aug 26 07:45:07 AServer named[3841]: —————————————————-
Aug 26 07:45:07 AServer named[3841]: adjusted limit on open files from 4096 to 1048576
Aug 26 07:45:07 AServer named[3841]: found 2 CPUs, using 2 worker threads
Aug 26 07:45:07 AServer named[3841]: using 2 UDP listeners per interface
Aug 26 07:45:07 AServer named[3841]: using up to 4096 sockets
Aug 26 07:45:07 AServer named[3841]: loading configuration from ‘/etc/bind/named.conf’
Aug 26 07:45:07 AServer named[3841]: reading built-in trusted keys from file ‘/etc/bind/bind.keys’
Aug 26 07:45:07 AServer named[3841]: initializing GeoIP Country (IPv4) (type 1) DB
Aug 26 07:45:07 AServer named[3841]: GEO-106FREE 20160408 Bu
Aug 26 07:45:07 AServer named[3841]: initializing GeoIP Country (IPv6) (type 12) DB
Aug 26 07:45:07 AServer named[3841]: GEO-106FREE 20160408 Bu
Aug 26 07:45:07 AServer named[3841]: GeoIP City (IPv4) (type 2) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP City (IPv4) (type 6) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP City (IPv6) (type 30) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP City (IPv6) (type 31) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP Region (type 3) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP Region (type 7) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP ISP (type 4) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP Org (type 5) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP AS (type 9) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP Domain (type 11) DB not available
Aug 26 07:45:07 AServer named[3841]: GeoIP NetSpeed (type 10) DB not available
Aug 26 07:45:07 AServer named[3841]: using default UDP/IPv4 port range: [32768, 60999]
Aug 26 07:45:07 AServer named[3841]: using default UDP/IPv6 port range: [32768, 60999]
Aug 26 07:45:07 AServer named[3841]: listening on IPv6 interfaces, port 53
Aug 26 07:45:07 AServer named[3841]: listening on IPv4 interface lo,
Aug 26 07:45:07 AServer named[3841]: listening on IPv4 interface eth0,
Aug 26 07:45:07 AServer named[3841]: generating session key for dynamic DNS
Aug 26 07:45:07 AServer named[3841]: sizing zone task pool based on 5 zones
Aug 26 07:45:07 AServer named[3841]: Loading ‘AD DNS Zone’ using driver dlopen
Aug 26 07:45:07 AServer named[3841]: dlz_dlopen: /usr/lib/x86_64-linux-gnu/samba/bind9/ incorrect driver API version 2, requires 3
Aug 26 07:45:07 AServer named[3841]: dlz_dlopen of ‘AD DNS Zone’ failed
Aug 26 07:45:07 AServer named[3841]: SDLZ driver failed to load.
Aug 26 07:45:07 AServer named[3841]: DLZ driver failed to load.
Aug 26 07:45:07 AServer named[3841]: loading configuration: failure
Aug 26 07:45:07 AServer named[3841]: exiting (due to fatal error)

In the above log you can see the issue…. (Psst, I have highlighted it for you). You will need to update the conf file with the new library to account for the new version of bind you are now running. Below is the edit you need to make (uncomment the database line matching your BIND version), then restart bind.

vi /var/lib/samba/private/named.conf

# This DNS configuration is for BIND 9.8.0 or later with dlz_dlopen support.
# This file should be included in your main BIND configuration file
# For example with
# include "/var/lib/samba/private/named.conf";
# This configures dynamically loadable zones (DLZ) from AD schema
# Uncomment only single database line, depending on your BIND version
dlz "AD DNS Zone" {
# For BIND 9.8.0
# database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9.so";

# For BIND 9.9.0
# database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_9.so";

# For BIND 9.10.0
database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_10.so";
};

* Please note in the above config there is no line break between bind9/ and the dlz library name

From here your bind server should start and you’re back in business!

Ubuntu 16.04 LTS packages required to install VMware-vSphere-Perl-SDK-6.0.0-3561779.x86_64

The below command is more a reminder for me, but I’m sure some will find it useful.

sudo apt-get install libarchive-zip-perl libcrypt-ssleay-perl libclass-methodmaker-perl libdata-dump-perl libsoap-lite-perl perl-doc libssl-dev libuuid-perl liburi-perl libxml-libxml-perl lib32ncurses5 lib32z1 libcrypt-openssl-rsa-perl libsocket6-perl libnet-inet6glue-perl

ATen IP 8000 KVM PCI Card

So recently I bought an Aten IP 8000 KVM PCI Card from eBay. It was second hand and a ¼ of the price of other ones at the time (around 90 dollars AUD). When it arrived the eBay listing was correct: it truly was just the card and the KVM cable. It was missing the feature cable, CD, 5 volt adaptor, reset jumper shunt, and manual.

Aten, if you’re reading this, don’t stop making these cards. They are awesome, but you could make them in PCIe, and the web console remote viewer could be better.

My initial thoughts went to what that missing feature cable looked like and why the card stopped responding when on the network. A quick google around showed me what the card should look like by way of a how-to install video here. Luckily I had a multi-volt adaptor I had previously bought, so I was okay on that front.

The manuals, firmware, and software can be downloaded from Aten’s IP 8000 website.

If you have lost / didn’t receive the serial number for the remote software you can ask for the key by logging a ticket to support here. Essentially you need to create a new account, then register your card and finally log a ticket against that card. They are pretty quick to respond – I received a new key in around 8 hours (4 of which would have been outside business hours due to the time difference).

The first problem I faced was rebuilding that feature cable, since it seemed nobody sold these. I had two options: the first was to buy a jumper wire pack from Jaycar, the other was to build one myself.

Jumper wire pack
Source: Jaycar Catalogue

I checked in with Jaycar, but the local shop had sold the last pack a couple of days earlier and it would be a week or so until they had more stock, so I opted for the build-your-own option.

To do this I dug through my PC parts box for an old analogue CD-ROM audio cable, then pulled a case from the shelf that I had meant to throw out in the last rubbish run but forgot to (lucky me). Taking the power and reset switch jumper housings from the old case, I removed the housing from one end of the CD-ROM cable and fitted the two new housings (power / reset) in its place, making sure to keep the black cables paired with the corresponding red and white ones. The end result is shown below


Following the manual (page 9), I then plugged the cable in, making sure I lined the wires up correctly with their function (reset / power); nothing is worse than trying to power the machine on through the KVM only to realise you need to hit reset instead of power because you have wired it backwards.

I decided red would be for power and white would be for reset.


Then it was time to quickly reset the BIOS to clear all previous settings; curiously, this process actually gets done outside of the PCI slot. The basic process is as follows

  1. Short the jumper shunt on J2 (you can see it in the picture above; it is the two rightmost pins, with "password default" written next to them)
  2. Plug the 5 volt adaptor in and wait 5 seconds
  3. Unplug the adaptor
  4. Plug the PCI card back into the motherboard

The process is documented in the manual on Page 85.

The end result now


After putting the server back in the rack, I found that I wasn't able to connect to it; I kept getting "SSL interrupted or timed out" and "Secure Connection Failed" errors from both IE and Firefox.

From Firefox

From the WinClient – When Clicking Admin Utility

So the problem you have here is that Windows update KB2661254 has been installed on your system (for Windows 7 / 2008 and below); if you're on Windows 8 and above, there is no hope.

The first clue to the actual issue can be seen in the release notes.

If your card has firmware older than V1.1.103 (which mine did; it was on V1.0.087), you're going to need to lower the minimum RSA key size allowed to 512 bits so you can upgrade your firmware.

The underlying issue is that the certificate service on Windows won't allow you to connect to an SSL website whose certificate key size is lower than 1024 bits.
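To see concretely what such a sub-1024-bit key looks like, here is a quick sketch that generates and inspects a 512-bit RSA key locally with openssl (openssl is assumed to be installed; this does not touch the card):

```shell
#!/bin/sh
# Generate a deliberately weak 512-bit RSA key, like the one on the
# card's old firmware, then print its reported key size.
openssl genrsa -out weak.key 512 2>/dev/null
openssl rsa -in weak.key -noout -text | head -n 1
```

The first line of the output reports the key as 512 bit, which is exactly the size Windows starts rejecting once KB2661254 is installed.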

I decided I would use a test VM I had on my laptop to fix this problem, as I have personal concerns with adjusting things like this on my main machine.

You will need to run the following command as Administrator (depending on your OS, you may need to right-click and choose Run as administrator)

certutil -setreg chain\minRSAPubKeyBitLength 512


A note from Microsoft on this procedure:

Note: All certutil commands shown in this article require local Administrator privileges because they are changing the registry. You can ignore the message that reads "The CertSvc service may have to be restarted for changes to take effect." That is not required for these commands because they do not affect the certificate service (CertSvc).


Once you have completed this step, like magic your Internet Explorer browser will be happy to show you the "this website isn't safe" screen, but this time you can continue on.

Click on through and you will see the admin console.

Next, log in.

Click the Maintenance button in the top left hand corner.

From here you will be able to add your firmware file and click Upload. The card will upload your firmware, confirm it is error free, and suggest you log out so it can actually upgrade the firmware (don't do this until it tells you to).

Once you have confirmed that you can see the admin console login screen from Firefox, you will need to revert the minimum key size setting.

Run the following command (running as administrator) to do this

certutil -delreg chain\MinRsaPubKeyBitLength

You will now find that your admin utility will magically start working.

And Done!

Other Interesting notes

  • Prior to the upgrade when my Firefox browser attempted to browse to the web console the card would stop responding, this seems to have fixed itself with the upgrade.
  • The WinClient is fantastic for completing BIOS upgrades and OS Installs.

Linux ADDS and WinStore problems

So you have been to Richard's blog at and you now have a running Linux ADDS, but your Windows Store no longer works and throws one of the two following errors:

  • Windows Store Error – Unable to download apps – “Try that again” Error Code 0x8004804e
  • HRESULT Exception 0x80070520

The first one you will see on Windows 8.1 more often than not. On Windows 10 you won't be able to add your Microsoft account when clicking Start > Settings > Accounts; it will bomb out when you try to log in. You will also find that on Windows 8, 8.1, and 10 you can't log OneDrive in.

After much searching and digging in logs, plus going over the WinStore log and not finding an answer, I stumbled across a post in the Microsoft forums where people were having the same problems on a Windows ADDS (Windows Forum Post). This thread was a huge help, as it directed me to the actual problem: the Credential Manager permissions for the users. The Windows Store uses the Credential Manager to store its credentials.

So what's happening here is that your friendly Windows workstation is attempting to store your Windows Store credentials in AD, and your friendly Linux ADDS has no idea what to do about that.

The following site details a rather manual way to fix this problem (under the heading of NT4-style domain controllers).

However, the best way to ensure this works everywhere as you would expect (on your workstations) is to add it to a new Group Policy (I guess you could add it to the Default Domain Policy if you want).

So let’s get this fixed

The registry setting you will be pushing out is
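Assuming the fix is the standard DPAPI ProtectionPolicy workaround for NT4-style domain controllers described by the site linked above (treat the key and value below as an assumption and verify them against that page before deploying), the registry item would be:

```
Windows Registry Editor Version 5.00

; Assumed value – verify against the linked NT4-style DC instructions.
; Tells DPAPI to protect master keys locally instead of via the DC.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Protect\Providers\df9d8cd0-1501-11d1-8c7a-00c04fc297eb]
"ProtectionPolicy"=dword:00000001
```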


***If you're familiar with this process you can finish reading now; for those of you needing further assistance, please read on.

1. Open Group Policy Management
2. Create a new policy
3. Click Computer Configuration > Preferences > Windows Settings > Registry
4. Now create a new registry item by right clicking in the panel


5. Fill in the details



6. Then save (Apply then Ok)

Apply this policy to the OU where you’re keeping your Workstations.

You will now want to do a gpupdate /force on your workstation and you’re done.