Installing Containerized OpenStack using Kolla

Dependencies

#!/bin/bash

#I use RDO repos for the host

# curl http://trunk.rdoproject.org/centos7/current/delorean.repo > /etc/yum.repos.d/delorean.repo
# curl http://trunk.rdoproject.org/centos7/dlrn-deps.repo > /etc/yum.repos.d/dlrn.repo

echo "Installing dependencies..."
yum install -y epel-release
yum install -y python-pip docker docker-python ntp ansible python-devel libffi-devel openssl-devel gcc python-openstackclient python-neutronclient git

pip install -U pip
pip install -U docker-py


systemctl enable ntpd.service
systemctl start ntpd.service

# Create the drop-in unit directory for docker.service
mkdir -p /etc/systemd/system/docker.service.d

# Create the drop-in unit file
tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
MountFlags=shared
EOF


#Change the storage driver from the slow devicemapper to overlay

sed -ie 's/DOCKER_STORAGE_OPTIONS=/DOCKER_STORAGE_OPTIONS=--storage-driver=overlay/' /etc/sysconfig/docker-storage
sed -ie 's/--selinux-enabled/--selinux-enabled=false/' /etc/sysconfig/docker

sed -ie 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Run these commands to reload the daemon
systemctl daemon-reload
systemctl enable docker
systemctl restart docker


If the machine name is not on a DNS server we add it to the /etc/hosts file (IP address first, then the hostname):

192.168.100.201   kolla-host
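
This can be scripted idempotently; `add_host_entry` is a hypothetical helper (the file argument exists so it can be pointed at a test file instead of /etc/hosts):

```shell
# Append "IP hostname" to a hosts file only if the name isn't there yet.
add_host_entry() {
    local ip=$1 name=$2 file=${3:-/etc/hosts}
    grep -qw "$name" "$file" || echo "$ip   $name" >> "$file"
}

# add_host_entry 192.168.100.201 kolla-host
```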

Install Kolla

It has to be installed from source:

# git clone https://git.openstack.org/openstack/kolla
# pip install kolla/
# cd kolla
# cp -r etc/kolla /etc/

Configure Kolla

You can modify Kolla settings in /etc/kolla/globals.yml

docker_namespace: "kolla"
network_interface: "eth0"
neutron_external_interface: "eth1"
kolla_internal_vip_address: "192.168.100.254"

Note: if we change the docker_namespace, an error is thrown.
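
If you script your setup, these values can be set without opening an editor. A minimal sketch (`set_global` is a hypothetical helper; it assumes each key appears at most once per line in globals.yml, possibly commented out, which matches the file Kolla ships):

```shell
# Set (or uncomment and set) a key in Kolla's globals.yml.
set_global() {
    local key=$1 value=$2 file=${3:-/etc/kolla/globals.yml}
    sed -i "s|^#\{0,1\}${key}:.*|${key}: \"${value}\"|" "$file"
}

# set_global network_interface eth0
# set_global kolla_internal_vip_address 192.168.100.254
```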

Start Registry

Since we're running Kolla from git master, we have to run our own registry:

# docker run -d -p 4000:5000 --restart=always --name registry registry

Enable remote registry in the config file:

docker_registry: "localhost:4000"

Build Images

Since we're running from master, it's recommended to build the images:

# kolla-build

I got out-of-memory errors building some images on a VM with 4 GB of RAM.
I had to build neutron-lbaas-agent manually:

# kolla-build neutron-lbaas-agent

Some images can't be built from binary packages (I still don't understand why); I skip these since I don't need those components:

# kolla-build -t source watcher-applier watcher-engine senlin-api kuryr senlin-engine senlin-base watcher-base watcher-api

Deploy Kolla

Generate passwords for /etc/kolla/passwords.yml

# kolla-genpwd

Prechecks

# kolla-ansible prechecks

Verify images

# kolla-ansible pull

Run deployment

# kolla-ansible deploy

On a small VM I had to run this command three times before it succeeded. This does not happen on a full-sized system.
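
Rather than re-typing the command, the retries can be wrapped in a small helper (`retry` is a hypothetical name; the three-attempt limit mirrors what I needed):

```shell
# Retry a command up to three times, stopping at the first success.
retry() {
    local n
    for n in 1 2 3; do
        "$@" && return 0
        echo "attempt $n failed: $*" >&2
    done
    return 1
}

# retry kolla-ansible deploy
```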


Post Deploy

# kolla-ansible post-deploy
# . /etc/kolla/admin-openrc.sh
# kolla/tools/init-runonce

RANDOM NOTES:

On a 4 GB VM I got a "Cannot allocate memory" error when running kolla-build, and had to create the images one by one by calling kolla-build for each one.
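
That workaround can be scripted: build the images one at a time and collect failures instead of letting a single out-of-memory error abort the whole run. `build_each` is a hypothetical helper and the image list is illustrative:

```shell
# Run a builder command once per image, collecting failures instead
# of aborting the whole batch.
build_each() {
    local builder=$1; shift
    local img failed=""
    for img in "$@"; do
        "$builder" "$img" || failed="$failed $img"
    done
    [ -z "$failed" ] || echo "failed:$failed"
}

# build_each kolla-build nova neutron glance keystone cinder heat
```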

2016/08/15 13:24 · Ivan Chavero

Reinstall OpenStack on a running system Part 1: Cleanup

I have three home servers in my closet. OK, "server" is just a glorified way to refer to three cheap headless desktop machines I assembled for my home needs and testing. From now on we'll call them cloud, cloud2 and cloud3 (I know, pretty clever names!!)
One of the things I've spent years testing is OpenStack. After upgrading and checking out new features, I broke it so badly that I thought the best thing would be to uninstall everything and then install it again from scratch.
Unfortunately I can't do that: I have a lot of services on the three boxes. cloud (the controller) runs a DNS server, two Ceph OSDs and my permanent IRC connection; it's my point of entry from the outside world to my home setup and it holds Docker images and some other stuff, so I can't wipe the system and start from scratch. The other two servers are Ceph nodes and compute nodes for OpenStack, so technically I could install the controller services on one of them, but I would still have to clean cloud.
Since I still had to clean the current controller, I decided to uninstall OpenStack and then reinstall it on the same server.

And yes, this is the beginning of a horror story…

Cleaning Up

First I googled possible ways of cleaning up an OpenStack installation and was happy to find enakai00's GitHub repo, which has some cool scripts that deploy and configure RDO using Packstack. I was even happier when I found a cleanup.sh script that has been continuously updated since Grizzly.
There is some stuff in the script that I didn't need to do: switching off iptables, deleting Perl, SCSI stuff and loopback devices, and removing all my virtual machines. So I made some modifications to the script:

#!/bin/sh
 
function cleanup_all {
    services=$(systemctl list-unit-files | grep -E "(redis|mariadb|openstack|neutron|rabbitmq-server).*\s+enabled" | cut -d" " -f1)
    for s in $services; do
        systemctl stop $s
        systemctl disable $s;
    done
 
    # Just remove running VM's
    for x in $(virsh list | grep instance- | awk '{print $2}'); do
        virsh destroy $x
        virsh undefine $x
    done
 
    yum remove -y "rabbitmq-server*" \
        "redis*" "*openstack*" "*neutron*" "*nova*" "*keystone*" \
        "*glance*" "*cinder*" "*heat*" "*ceilometer*" openvswitch \
        "*mariadb*" "*mongo*" "*memcache*" \
        "rdo-release-*"
 
    for x in nova glance cinder keystone horizon neutron heat ceilometer;do
        rm -rf /var/lib/$x /var/log/$x /etc/$x
    done
 
    rm -rf /root/.my.cnf /var/lib/mysql \
        /etc/openvswitch /var/log/openvswitch 
 
    if vgs cinder-volumes; then
        vgremove -f cinder-volumes
    fi
 
    for x in $(losetup -a | grep -v docker | sed -e 's/:.*//g'); do
        losetup -d $x
    done
}
 
# main
 
echo "This will completely uninstall all openstack-related components."
echo -n "Are you really sure? (yes/no) "
read answer
if [[ $answer == "yes" ]]; then
    # Show everything
    cleanup_all 
    echo "Finished. Reboot the server to refresh processes."
else
    echo "Cancelled."
fi

I'm preparing the next part of the series: Reinstall OpenStack on a running system Part 2: Dockerizing OpenStack.

2016/03/24 01:38 · Ivan Chavero

Running Systemd inside a Fedora Docker container

The Docker community does not recommend running a full-fledged OS inside containers, but there is no technical reason not to. I've been running VPSes using different container technologies (Linux-VServer, pure LXC) for years, so my next logical step is to manage my container VPSes using Docker. There are also use cases in which running the container with all the system services is convenient.

The only thing that has to be done to run a container as a VPS is to run the init process. Since I'm using Fedora, that means the infamous systemd, which is actually a very good and modern init system; the problem with it is that it's being inserted as a dependency in places where it's not really needed (cron stuff, for example).
So I said: “well, I'll just run systemd when starting the container!” To my surprise it didn't work, since systemd needs to access some files that the container does not let it see; this gets solved by using a privileged container. Still, even a privileged container is not enough: you also have to mount /sys/fs/cgroup as a volume inside the container.

Having done this we just run the container with a command like this:

docker run --name=test-systemd  --privileged -tdi -v /sys/fs/cgroup:/sys/fs/cgroup:ro fedora-22-systemd-x86_64  /lib/systemd/systemd


This is the Dockerfile I used to create the fedora-22-systemd-x86_64 image:

FROM imcsk8/fedora-22-server-x86_64
MAINTAINER "Iván Chavero" <ichavero@chavero.com.mx>
# Based on the Dockerfile created by: "Dan Walsh" <dwalsh@redhat.com>
ENV container docker
RUN dnf -y update; dnf clean all
RUN dnf -y install systemd; dnf clean all;\
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/lib/systemd/systemd"]

With this we can create VPSes or applications that behave as they would on a full VM or bare-metal system.
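
A quick way to verify that the container really booted under systemd is to check what PID 1 inside it is. A minimal sketch (the `pid1_comm` helper is hypothetical; `test-systemd` is the container name from the run command above):

```shell
# Extract the command name of PID 1 from a "pid comm" listing.
pid1_comm() {
    awk '$1 == 1 { print $2 }'
}

# Inside a systemd container this should print "systemd":
# docker exec test-systemd ps -o pid=,comm= | pid1_comm
```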

References:
Running systemd within a Docker Container
Fedora 22 Systemd Dockerfile

2015/10/15 16:58 · Ivan Chavero

Create RPM Ostree Repo

# cd /srv/rpm-ostree
# mkdir repo
# ostree --repo=repo init --mode=archive-z2

Create treefiles

Base

fedora-moximo.json

{
    "ref": "fedora-moximo/f22/armv7hl/base/core",

    "gpg_key": "<SET KEY ID FOR GPG>",

    "repos": ["fedora", "fedora-updates"],

    "selinux": true,

    "bootstrap_packages": ["filesystem", "glibc", "nss-altfiles", "shadow-utils",
                           "generic-release"],

    "packages": ["kernel", "ostree", "lvm2", 
                 "btrfs-progs", "e2fsprogs", "xfsprogs",
                 "gnupg2", "selinux-policy-targeted",
                 "openssh-server", "openssh-clients",
                 "NetworkManager", "vim-minimal", "nano", "sudo"]
}

fedora-moximo-docker.json

{
    "ref": "fedora-atomic/rawhide/x86_64/server/docker",

    "include": "fedora-rawhide-base.json",

    "packages": ["docker-io"],
    
    "units": ["docker.service", "docker.socket"]
}
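
Since treefiles are plain JSON, it's cheap to sanity-check them before kicking off a long compose. A small sketch (the `check_treefile` helper is hypothetical and assumes `python3` is available):

```shell
# Fail fast on JSON syntax errors before rpm-ostree spends time composing.
check_treefile() {
    python3 -m json.tool "$1" > /dev/null && echo "$1: OK"
}

# check_treefile fedora-moximo.json
```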

Create the repo

# rpm-ostree compose tree --repo=/srv/rpm-ostree/repo --proxy=http://127.0.0.1:8123 sometreefile.json

http://patrick.uiterwijk.org/2014/01/21/rpm-ostree/
https://github.com/cgwalters/rpm-ostree/blob/master/doc/compose-server.md

2015/08/23 17:16 · Ivan Chavero

Upgrading OpenStack from a broken Juno to Kilo

I had Juno installed on my local test server; I did a lot of tests with it and ended up breaking it and forgetting about it. Then I added two other servers to my local NOC (actually just a closet with three headless desktop computers :P). I could have uninstalled my Juno setup, but I wanted to experiment with fixing it (I'd rather get this experience at home than in a data center with a client breathing over my shoulder).

I'll use Packstack since it's the tool I used for the initial installation:

# dnf install -y https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-1.noarch.rpm
# dnf upgrade -y openstack-packstack

I will add two compute nodes to the setup: 192.168.1.10 and 192.168.1.11.
I had to grab the passwords from my old Packstack answer file:

CONFIG_MARIADB_PW=fdfsfasdfadsfsdtrtrt
CONFIG_KEYSTONE_DB_PW=fdfsfasdfadsfsdtrtrt
CONFIG_KEYSTONE_ADMIN_PW=fdfsfasdfadsfsdtrtrt
CONFIG_KEYSTONE_DEMO_PW=fdfsfasdfadsfsdtrtrt
CONFIG_GLANCE_DB_PW=fdfsfasdfadsfsdtrtrt
CONFIG_GLANCE_KS_PW=fdfsfasdfadsfsdtrtrt
CONFIG_CINDER_DB_PW=fdfsfasdfadsfsdtrtrt
CONFIG_CINDER_KS_PW=fdfsfasdfadsfsdtrtrt
CONFIG_NOVA_DB_PW=fdfsfasdfadsfsdtrtrt
CONFIG_NOVA_KS_PW=fdfsfasdfadsfsdtrtrt
CONFIG_NEUTRON_KS_PW=fdfsfasdfadsfsdtrtrt
CONFIG_NEUTRON_DB_PW=fdfsfasdfadsfsdtrtrt
CONFIG_NEUTRON_METADATA_PW=fdfsfasdfadsfsdtrtrt
CONFIG_SWIFT_KS_PW=fdfsfasdfadsfsdtrtrt
CONFIG_NAGIOS_PW=fdfsfasdfadsfsdtrtrt
CONFIG_KEYSTONE_ADMIN_TOKEN=fdfsfasdfadsfsdtrtrt

These are the passwords I had set in my original answer file.
I generated a new answer file instead of reusing the old one because the directives change between versions.

# packstack --gen-answer-file=upgrade.txt

Delete the generated passwords from the new answer file and add the old ones.
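
Copying a dozen passwords by hand is error-prone, so the merge can be scripted. A sketch assuming the plain KEY=value format shown above (`copy_old_passwords` and the old file name are placeholders):

```shell
# Overwrite every *_PW / *_TOKEN directive in the new answer file with
# the value from the old one. Naive: assumes values contain no '|' or
# newlines.
copy_old_passwords() {
    local old=$1 new=$2 key value
    grep -E '^CONFIG_[A-Z_]+_(PW|TOKEN)=' "$old" | while IFS='=' read -r key value; do
        sed -i "s|^${key}=.*|${key}=${value}|" "$new"
    done
}

# copy_old_passwords old-answers.txt upgrade.txt
```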

# packstack -d --answer-file=upgrade.txt






2015/07/08 12:54 · Ivan Chavero
