Deploying Ceph on my SOHO servers

I spent 15 years working as a sysadmin, so I like to manage my own infrastructure.
I have some servers hosted in a datacenter covering my internet services, but to test OpenStack installers I like to use local VMs, so I bought a headless desktop machine with a decent processor and a bunch of memory and set it up in a closet. Then I had to buy another, and another.
So now, with three machines and a decent number of terabytes of disk on each one, it occurred to me that I should cluster this disk space so I can forget about keeping track of the space I'm using and be able to scale using cheap SoCs like a Raspberry Pi with a big disk attached.
I was about to use Swift, but a friend of mine told me I should use Ceph. At first I was skeptical because I had tested it 5 years ago with not very pleasant results, but after some beers and a big list of features courtesy of my friend, he convinced me and I decided to give it a try…

Configure the nodes

Installation

Prepare Ceph Repo

Save this content into /etc/yum.repos.d/ceph.repo on each node

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-hammer/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

Admin Node

[root@cloud /]# dnf install -y ceph ceph-deploy ceph-radosgw ceph-fuse

Other Nodes

[root@cloud2 /]# dnf install -y ceph ceph-fuse ntp ntpdate ntp-doc
[root@cloud3 /]# dnf install -y ceph ceph-fuse ntp ntpdate ntp-doc
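
Installing ntp by itself doesn't start anything; you'll most likely also want to enable and start the daemon on every node so the clocks stay in sync (Ceph monitors complain loudly about clock skew). A minimal sketch, assuming Fedora's ntpd unit name:

[root@cloud2 /]# systemctl enable ntpd && systemctl start ntpd
[root@cloud3 /]# systemctl enable ntpd && systemctl start ntpd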

Create Ceph user on all Nodes

[root@cloud /]# groupadd ceph && useradd -m -c 'Ceph User' -d /home/ceph -s /bin/bash -g ceph cloud-storage
[root@cloud ~]# passwd cloud-storage # set temporary password
Changing password for user cloud-storage.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@cloud2 /]# groupadd ceph && useradd -m -c 'Ceph User' -d /home/ceph -s /bin/bash -g ceph cloud-storage
[root@cloud2 ~]# passwd cloud-storage # set temporary password
Changing password for user cloud-storage.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@cloud3 /]# groupadd ceph && useradd -m -c 'Ceph User' -d /home/ceph -s /bin/bash -g ceph cloud-storage
[root@cloud3 ~]# passwd cloud-storage # set temporary password
Changing password for user cloud-storage.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

Set sudo privileges on all Nodes

[root@cloud /]# echo "cloud-storage ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cloud-storage
cloud-storage ALL = (root) NOPASSWD:ALL
[root@cloud /]# chmod 0440 /etc/sudoers.d/cloud-storage
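
If your sudoers configuration has Defaults requiretty (common on RPM-based distros), ceph-deploy's remote sudo calls will fail. In that case you can also exempt the user from it; a sketch, only needed if you actually hit the tty error:

[root@cloud /]# echo 'Defaults:cloud-storage !requiretty' | sudo tee -a /etc/sudoers.d/cloud-storage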

Create SSH Key for passwordless login

Log in as the cloud-storage user and create the key

[root@cloud /]# su - cloud-storage
cloud-storage@cloud .ssh$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RYhCS3iot3eu4b/Tus3wRqQ3TZ6slGbgFceTIZryuE5 cloud-storage@cloud
The key's randomart image is:
+---[RSA 2048]----+
|  .o+.  ..o.     |
|  oo . o o  .    |
|   o. o * o..    |
|  .  . *.=oo  .  |
|      + S+  .. . |
|     E+...  .. ..|
|     +.+. ..  . o|
|    +.o.  ++.  o.|
|   o ..  +*+  .o*|
+----[SHA256]-----+

Copy the public key to the nodes

cloud-storage@cloud .ssh$ cat id_rsa.pub | ssh cloud2.soho 'mkdir .ssh; chmod 700 .ssh; cat - > .ssh/authorized_keys'
The authenticity of host 'cloud2.soho (192.168.1.10)' can't be established.
ECDSA key fingerprint is SHA256:PyRn3Jdkdh46dktuS6YgdkK7OKFpzpZdDJiTY46gsbF.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cloud2.soho,192.168.1.10' (ECDSA) to the list of known hosts.
cloud-storage@cloud2.soho's password: 
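
If ssh-copy-id is available (it usually is), it does the same thing in one step and takes care of the permissions; a minimal sketch for the remaining node:

cloud-storage@cloud .ssh$ ssh-copy-id cloud3.soho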

Open the Firewall ports on the Nodes

[root@cloud2 ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
success
[root@cloud2 ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
success
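
Note that --permanent only changes the saved configuration; to apply it to the running firewall you also have to reload it (or repeat the commands without --permanent):

[root@cloud2 ~]# firewall-cmd --reload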

Set SELinux to permissive

[root@cloud2 ~]# setenforce 0
[root@cloud2 ~]# sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config # persistent config

Create the Cluster

You have to be logged in as the cloud-storage user

Create the cluster. I want my admin node to also be a regular node:

cloud-storage@cloud ~$ ceph-deploy new cloud.soho

Change the default number of replicas so Ceph can achieve an active + clean state with just two OSDs:

cloud-storage@cloud ~$ echo "osd pool default size = 2" >> ceph.conf

Ensure we will work on our LAN

cloud-storage@cloud ~$ echo "public network = 192.168.1.0/255.255.255.0" >> ceph.conf
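
After appending those two lines, the ceph.conf that ceph-deploy new generated should look roughly like this; the fsid and monitor address are the ones from my cluster and the exact set of auth lines depends on the ceph-deploy version, so treat it only as a reference:

[global]
fsid = d7e6610b-07e6-49fa-8121-d05d72e85a0b
mon_initial_members = cloud
mon_host = 192.168.1.68
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 192.168.1.0/255.255.255.0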

Install Ceph

I'm using Fedora 22 and the radosgw package changed its name to ceph-radosgw, so I had to create this little patch:

--- install.py  2015-06-14 17:41:18.646794848 -0600
+++ /usr/lib/python2.7/site-packages/ceph_deploy/hosts/fedora/install.py        2015-06-14 17:22:35.056856530 -0600
@@ -82,6 +82,6 @@
             '-q',
             'install',
             'ceph',
-            'radosgw',
+            'ceph-radosgw',
         ],
     )

Use the --no-adjust-repos option in case you run into repository problems like I did:

cloud-storage@cloud ~$ sudo ceph-deploy install --no-adjust-repos  cloud.soho cloud2.soho cloud3.soho
cloud-storage@cloud ~$ ceph-deploy mon create-initial

Repeat for all the nodes and block devices you're planning to use

cloud-storage@cloud ~$ ceph-deploy disk zap cloud2.soho:sdb
cloud-storage@cloud ~$ ceph-deploy disk zap cloud2.soho:sdc
cloud-storage@cloud ~$ ceph-deploy osd prepare cloud2.soho:sdb:/dev/sdb
cloud-storage@cloud ~$ ceph-deploy osd prepare cloud2.soho:sdc:/dev/sdc
cloud-storage@cloud ~$ ceph-deploy osd activate cloud2.soho:sdb1
cloud-storage@cloud ~$ ceph-deploy osd activate cloud2.soho:sdc1
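
Since the same dance is needed for every node and disk, a small loop saves some typing; a sketch assuming cloud3.soho also has sdb and sdc free for Ceph, with the journal colocated on the same disk:

for disk in sdb sdc; do
    ceph-deploy disk zap cloud3.soho:$disk
    ceph-deploy osd prepare cloud3.soho:$disk
    ceph-deploy osd activate cloud3.soho:${disk}1
done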

Set up directories as OSDs on the first node:

cloud-storage@cloud ~$ sudo mkdir -p /space/ceph/osd0
cloud-storage@cloud ~$ ceph-deploy osd prepare cloud.soho:/space/ceph/osd0
cloud-storage@cloud ~$ ceph-deploy osd activate  cloud.soho:/space/ceph/osd0

The create option is a shortcut for prepare and activate…

ceph-deploy osd create osdserver1:sdb:/dev/ssd1

Create a metadata server

cloud-storage@cloud ~$ ceph-deploy mds create cloud3.soho

Create the filesystem.
This was particularly painful because the quick start documentation does not detail this step; I had to dig through the rest of the documentation and the mailing lists to realize I had to do this.

ceph mds newfs 0 0 --yes-i-really-mean-it
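
For reference, on recent releases the documented way is to create dedicated data and metadata pools and use ceph fs new instead of the old newfs command; a sketch, with pool names and PG counts picked arbitrarily:

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data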


Ceph Object Gateway

Ceph comes with an object gateway compatible with S3 and Swift; they call it the RADOS Gateway (RGW).

To enable this feature you have to create a new instance of RGW.
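
With a recent ceph-deploy this can be done from the admin node; a minimal sketch, assuming the gateway runs on the admin host:

cloud-storage@cloud ~$ ceph-deploy rgw create cloud.soho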



Check the cluster

Check the Ceph status. You should see HEALTH_OK; if you don't, check that:

  • The OSDs are active: ceph osd stat, ceph osd tree
  • The monitor server is up: ceph mon stat
  • The metadata server is up and active: ceph mds stat

cloud-storage@cloud ~$ ceph health
HEALTH_OK
cloud-storage@cloud ~$ ceph -w
    cluster d7e6610b-07e6-49fa-8121-d05d72e85a0b
     health HEALTH_OK
     monmap e1: 1 mons at {cloud=192.168.1.68:6789/0}
            election epoch 1, quorum 0 cloud
     mdsmap e5: 1/1/1 up {0=cloud3.soho=up:active}
     osdmap e38434: 5 osds: 5 up, 5 in
      pgmap v58991: 64 pgs, 1 pools, 314 MB data, 100 objects
            271 GB used, 7931 GB / 8205 GB avail
                  64 active+clean

2015-06-21 13:32:04.919306 mon.0 [INF] pgmap v58991: 64 pgs: 64 active+clean; 314 MB data, 271 GB used, 7931 GB / 8205 GB avail

I was confused when I ran ceph -w because I didn't have the metadata server active (without it you can't mount the cluster) and it didn't issue any warning; it just didn't show it. This might seem obvious to the seasoned Ceph user, but for a newbie it was a hard problem to find.

Mount your new cluster :)

# mount -t ceph 192.168.1.68:/ /mnt
# mount | grep ceph
192.168.1.68:/ on /mnt type ceph (rw,relatime,nodcache,nofsc,acl)
#  df -h | grep mnt
192.168.1.68:/                   8.1T   274G  7.8T   4% /mnt
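
If the mount fails with a permission error it's usually cephx: the kernel client needs the admin name and secret. A sketch of one way around it, assuming the default admin keyring location:

# grep key /etc/ceph/ceph.client.admin.keyring | awk '{print $3}' > /etc/ceph/admin.secret
# mount -t ceph 192.168.1.68:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret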

That's it. At first this appears to be very easy, but if you are like me and like to poke under the hood while learning, you might hit some brick walls. I had problems with the metadata server, the keyring and the version of Fedora I was using, so I hope these notes help others avoid the same problems I had.

Troubleshooting

There are times when the PGs have problems. Most of the time they are caused by lack of connectivity between nodes, but in case you get stale, inactive and/or unclean PGs you can run this little bash script I created to find them.

#!/bin/bash
# List placement groups stuck in a problematic state
ceph pg dump_stuck stale
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean
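
Once you know which PGs are stuck, the usual next steps are to query the problematic one and, for inconsistencies, ask Ceph to repair it; a sketch with a hypothetical PG id:

ceph pg 0.1f query
ceph pg repair 0.1f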

Benchmarking

One of the things I like about Ceph is that it's very well documented; the guide at https://wiki.ceph.com/Guides/How_To/Benchmark_Ceph_Cluster_Performance is a very good starting point, with good explanations of the techniques and tools they provide for benchmarking.
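
As a quick taste of what that guide covers, rados bench can run a write benchmark against a pool and then read the same objects back; a sketch, assuming a pool named rbd (the --no-cleanup flag keeps the objects around for the read test):

cloud-storage@cloud ~$ rados bench -p rbd 10 write --no-cleanup
cloud-storage@cloud ~$ rados bench -p rbd 10 seq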

2015/06/14 13:15 · Ivan Chavero

[NOTES] Installing Project Atomic

Project Atomic uses extlinux as the bootloader, but the current installer sets the kernel and initrd paths the wrong way; I had to check the ostree filesystem in order to find them. It's pretty weird, I'm still learning this whole ostree stuff.

2014/11/17 19:40 · Ivan Chavero

Upgrading OpenStack using Packstack

I wanted to test Juno in a more real-world environment. Since I'm not that reckless, I decided to experiment with my own little home server, so I only put my own data at risk.
This headless desktop computer lives in a closet and runs Fedora 20 with OpenStack Havana. I didn't want to reinstall from scratch, so I thought that upgrading to Juno would be a nice experiment (kids, do try this at home and not in a datacenter with your clients' important data).
First of all, upgrade the system:

# yum upgrade -y

Once it finishes, install the latest Juno RDO repo:

# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm

This will add the rdo-release.repo file to your /etc/yum.repos.d directory.
At this point I could have just run yum upgrade again and the latest OpenStack packages would have been installed, but there are some configurations that the Juno packstack performs that the Havana version didn't do (or need), so I'd rather do it this way. Besides, I wanted to test how packstack would behave in this use case.

Since I had the Havana packstack version installed, I upgraded it to the Juno version:

# yum install -y openstack-packstack

This upgrades packstack as well as its dependencies: openstack-packstack-puppet and openstack-puppet-modules.
Once it finished, I ran packstack using my old answer file:

# packstack  -d --answer-file=packstack-answers-20130821-004155.txt

I wasn't expecting this to work because my answer file is more than a year old, but it did, until it got to the MySQL manifest:

ERROR : Error appeared during Puppet run: 192.168.1.68_mysql.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-galera-server' returned 1: Error: mariadb-galera-server conflicts with 1:mariadb-server-5.5.39-1.fc20.x86_64
You will find full trace in log /var/tmp/packstack/20141029-174821-1lLOIx/manifests/192.168.1.68_mysql.pp.log

This is because mariadb-server was previously installed by the Havana packstack; I just uninstalled it and ran packstack again:

[root@cloud ~(keystone_admin)]# systemctl stop mariadb
[root@cloud ~(keystone_admin)]# systemctl disable mariadb
rm '/etc/systemd/system/multi-user.target.wants/mariadb.service'
rm '/etc/systemd/system/multi-user.target.wants/mysqld.service'
# yum erase -y mariadb-server
packstack  -d --answer-file=packstack-answers-20130821-004155.txt
Welcome to Installer setup utility

Installing:
Clean Up                                             [ DONE ]
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Preparing servers                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MySQL manifest entries                        [ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding post install manifest entries                 [ DONE ]
Installing Dependencies                              [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.1.68_prescript.pp
Applying 10.3.235.114_prescript.pp
10.3.235.114_prescript.pp:                           [ DONE ]
192.168.1.68_prescript.pp:                           [ DONE ]
Applying 10.3.235.114_amqp.pp
Applying 192.168.1.68_mysql.pp
10.3.235.114_amqp.pp:                                [ DONE ]
192.168.1.68_mysql.pp:                               [ DONE ]
Applying 192.168.1.68_keystone.pp
192.168.1.68_keystone.pp:                            [ DONE ]
Applying 192.168.1.68_ring_swift.pp
192.168.1.68_ring_swift.pp:                          [ DONE ]
Applying 192.168.1.68_swift.pp
192.168.1.68_swift.pp:                               [ DONE ]
Applying 10.3.235.114_mongodb.pp
10.3.235.114_mongodb.pp:                             [ DONE ]
Applying 192.168.1.68_ceilometer.pp
192.168.1.68_ceilometer.pp:                          [ DONE ]
Applying 192.168.1.68_postscript.pp
Applying 10.3.235.114_postscript.pp
10.3.235.114_postscript.pp:                          [ DONE ]         
192.168.1.68_postscript.pp:                          [ DONE ]         
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******


Additional information:
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_CONTROLLER_HOST next time. This parameter deprecates following parameters: ['CONFIG_CEILOMETER_HOST', 'CONFIG_CINDER_HOST', 'CONFIG_GLANCE_HOST', 'CONFIG_HORIZON_HOST', 'CONFIG_HEAT_HOST', 'CONFIG_KEYSTONE_HOST', 'CONFIG_NAGIOS_HOST', 'CONFIG_NEUTRON_SERVER_HOST', 'CONFIG_NEUTRON_LBAAS_HOSTS', 'CONFIG_NOVA_API_HOST', 'CONFIG_NOVA_CERT_HOST', 'CONFIG_NOVA_VNCPROXY_HOST', 'CONFIG_NOVA_SCHED_HOST', 'CONFIG_OSCLIENT_HOST', 'CONFIG_SWIFT_PROXY_HOSTS'].
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_COMPUTE_HOSTS next time. This parameter deprecates following parameters: ['CONFIG_NOVA_COMPUTE_HOSTS'].
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_NETWORK_HOSTS next time. This parameter deprecates following parameters: ['CONFIG_NEUTRON_L3_HOSTS', 'CONFIG_NEUTRON_DHCP_HOSTS', 'CONFIG_NEUTRON_METADATA_HOSTS', 'CONFIG_NOVA_NETWORK_HOSTS'].
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_SWIFT_STORAGES next time. This parameter deprecates following parameters: ['CONFIG_SWIFT_STORAGE_HOSTS'].
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * The installation log file is available at: /var/tmp/packstack/20141029-181742-ESZKio/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20141029-181742-ESZKio/manifests

Apparently everything went well. I checked the system status using top and…

Tasks: 277 total,   6 running, 268 sleeping,   3 stopped,   0 zombie
%Cpu0  : 83.3 us,  0.0 sy,  0.0 ni, 16.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 85.7 us, 14.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  : 85.7 us, 14.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  15904096 total,  6865632 used,  9038464 free,   192632 buffers
KiB Swap:  4095996 total,    69764 used,  4026232 free,  4173952 cached

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                 
31259 mysql     20   0 1731240  83920  10124 S 105.7  0.5   8:10.90 mysqld                                                                                                  
 1981 keystone  20   0  401476  43724   5816 S  39.6  0.3   3:35.85 keystone-manage                                                                                         
 7968 keystone  20   0  401096  43324   5816 R  39.6  0.3   0:56.46 keystone-manage                                                                                         
 3832 keystone  20   0  401476  43720   5816 S  26.4  0.3   2:56.38 keystone-manage                                                                                         
 4737 keystone  20   0  401476  43720   5816 S  26.4  0.3   2:16.55 keystone-manage                                                                                         
 5751 keystone  20   0  401496  43832   5816 R  26.4  0.3   1:41.61 keystone-manage                                                                                         
 7702 keystone  20   0  401352  43584   5816 R  26.4  0.3   1:15.26 keystone-manage                                                                                         
 8015 keystone  20   0  401100  43324   5816 S  26.4  0.3   0:37.85 keystone-manage                                                                                         
 8083 keystone  20   0  400964  43056   5816 S  26.4  0.3   0:19.59 keystone-manage                                                                                         
 8137 keystone  20   0  400720  42796   5816 R  26.4  0.3   0:03.38 keystone-manage                                                                                         
 1108 keystone  20   0  401488  43724   5816 S  13.2  0.3   4:13.95 keystone-manage                    

The keystone-manage token_flush command was taking a long time to finish, and since it's executed every minute it was causing this behaviour. The problem was that I had 2827 expired tokens in the token table of the keystone database; I deleted them manually and everything went OK.
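
For reference, the manual cleanup boils down to removing the expired rows from keystone's token table; a sketch, assuming the default database name:

# mysql keystone -e "DELETE FROM token WHERE expires <= NOW();"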

This next section is not needed if you're not experimenting with a modified answer file, but this trick can be useful…
It turns out that I had CONFIG_HORIZON_INSTALL=n in the answer file and Horizon wasn't installed, so I changed it to CONFIG_HORIZON_INSTALL=y and this time just ran packstack with the --dry-run option:

 packstack  -d --dry-run --answer-file=packstack-answers-20130821-004155.txt
Welcome to Installer setup utility

Installing:
Clean Up                                             [ DONE ]
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Preparing servers                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MySQL manifest entries                        [ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding post install manifest entries                 [ DONE ]
Installing Dependencies                              [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******


Additional information:
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_CONTROLLER_HOST next time. This parameter deprecates following parameters: ['CONFIG_CEILOMETER_HOST', 'CONFIG_CINDER_HOST', 'CONFIG_GLANCE_HOST', 'CONFIG_HORIZON_HOST', 'CONFIG_HEAT_HOST', 'CONFIG_KEYSTONE_HOST', 'CONFIG_NAGIOS_HOST', 'CONFIG_NEUTRON_SERVER_HOST', 'CONFIG_NEUTRON_LBAAS_HOSTS', 'CONFIG_NOVA_API_HOST', 'CONFIG_NOVA_CERT_HOST', 'CONFIG_NOVA_VNCPROXY_HOST', 'CONFIG_NOVA_SCHED_HOST', 'CONFIG_OSCLIENT_HOST', 'CONFIG_SWIFT_PROXY_HOSTS'].
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_COMPUTE_HOSTS next time. This parameter deprecates following parameters: ['CONFIG_NOVA_COMPUTE_HOSTS'].
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_NETWORK_HOSTS next time. This parameter deprecates following parameters: ['CONFIG_NEUTRON_L3_HOSTS', 'CONFIG_NEUTRON_DHCP_HOSTS', 'CONFIG_NEUTRON_METADATA_HOSTS', 'CONFIG_NOVA_NETWORK_HOSTS'].
 * Deprecated parameter has been used in answer file. Please use parameter CONFIG_SWIFT_STORAGES next time. This parameter deprecates following parameters: ['CONFIG_SWIFT_STORAGE_HOSTS'].
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * To access the OpenStack Dashboard browse to http://192.168.1.68/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * The installation log file is available at: /var/tmp/packstack/20141029-201826-heukRa/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20141029-201826-heukRa/manifests

and then ran the Horizon Puppet manifest manually:

puppet apply --modulepath=/usr/share/openstack-puppet/modules /var/tmp/packstack/20141029-201826-heukRa/manifests/192.168.1.68_horizon.pp

In the end it wasn't as painful as I thought, and now I have my Juno setup to play with. Next, I'm gonna configure Nova to use Docker as the hypervisor, but that's for another post ;)

2014/10/29 18:12 · Ivan Chavero

OpenStack Juno on ARM

These are just installation notes; the real post is coming later…

For a project I'm working on I need to install OpenStack on a Cubietruck; it's not that painful :P First I installed Fedora 21 on the Cubietruck, installed the Juno RDO repo and packstack, and then I just ran packstack and waited for…

# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
# yum install -y openstack-packstack
# packstack -d --allinone
...

I ran packstack again and got this error:

ERROR : Error appeared during Puppet run: 192.168.1.67_keystone.pp
Error: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ user-create --name neutron --enabled True --email neutron@localhost --pass fdfafafdasfds --tenant_id 5ac05f30014a4c50a917b4181fef35c4' returned 1: Conflict occurred attempting to store role - Duplicate Entry (HTTP 409)

I tried running the Puppet manifest manually and got these other errors:

Error: /Stage[main]/Neutron::Keystone::Auth/Keystone_user[neutron]: Could not evaluate: Execution of '/usr/bin/keystone --os-auth-url http://127.0.0.1:35357/v2.0/ token-get' returned 1: The request you have made requires authentication. (HTTP 401)
Error: /Stage[main]/Nova::Keystone::Auth/Keystone_user[nova]: Could not evaluate: Execution of '/usr/bin/keystone --os-auth-url http://127.0.0.1:35357/v2.0/ token-get' returned 1: The request you have made requires authentication. (HTTP 401)
Error: /Stage[main]/Glance::Keystone::Auth/Keystone_user[glance]: Could not evaluate: Execution of '/usr/bin/keystone --os-auth-url http://127.0.0.1:35357/v2.0/ token-get' returned 1: The request you have made requires authentication. (HTTP 401)

Then I ran the command manually and found the real problem:

# OS_SERVICE_ENDPOINT="192.168.1.67"  OS_SERVICE_TOKEN=aaaaaaf /usr/bin/keystone --os-auth-url http://127.0.0.1:35357/v2.0/ token-get
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
'NoneType' object has no attribute 'has_service_catalog'
2014/10/18 15:24 · Ivan Chavero

Playing with Docker on ARM

It's been a while since I started using operating system/container/lightweight virtualization, or whatever name you find suitable for it. In the end it is a more efficient way of executing isolated systems without the weight of a hypervisor that recreates a whole computer.

I've used User Mode Linux, which allows you to execute multiple kernels inside the computer; Linux-VServer, which creates execution contexts to isolate and manage the resources of the “virtual machines”; and Linux Containers (LXC), which use cgroups to isolate and restrict resources for certain processes or a whole operating system.
LXC and Linux-VServer are somewhat equivalent technologies with very different implementations. The advantage of LXC is that it is supported by a big community of developers and its underlying infrastructure (cgroups) is part of the official Linux kernel, so you can say that LXC is supported in the upstream kernel.
In the LXC arena there's a very interesting approach: Docker, a system that allows us to create containers and establish relationships between them in a very easy way. It also allows you to upload your images to a central repository or maintain your own.
In this post I'm gonna set up a little proof-of-concept WordPress service using Docker, so let's get our hands dirty…

The “Docker way” is to have small containers that perform only one task, but you can use them as you please (e.g. launching a full OS in a container by just calling /sbin/init or /usr/lib/systemd/systemd).
So let's try a basic example that's been around the net for a while[2]: a WordPress site.
We all know that WordPress needs a database, so the logical assumption would be to create two containers, but we really need three: one for WordPress, one for MySQL and one for the data store.

First we have to create the data store. Basically we create a container with an associated volume, /var/lib/mysql; thanks to aufs, all the containers that get associated with this volume will “mount” it in their respective /var/lib/mysql location.

$ docker run -v /var/lib/mysql --name=mysql_datastore -d busybox echo "Initiaizing MySQL DataStore"

#checkout the volume directory

$ sudo ls -la /var/lib/docker/vfs/dir/e5937a892ebf9a6a35bc6cdca89d9637a7964f72a908d48746453abcd242940e
[sudo] password for ichavero: 
total 8
drwxr-xr-x 2 root root 4096 Sep 11 06:49 .
drwx------ 3 root root 4096 Sep 11 06:49 ..



We can see the directory associated with the volume that I've created. This container does not have to be running; it just has to have been created and associated with the volume we created with it.
Then we create the mysql container:

$ docker run --name=mysql_server -e MYSQL_ROOT_PASSWORD=testpass -d mysql



Whoops! I'm using an ARM operating system and most Docker images are x86_64, so I had to create an ARM Fedora image. Note that this should also work for other ARM images ;)
First of all I created a tarball with a minimal system:

yum -y --releasever=20 --nogpg --installroot=/home/ichavero/lxcStuff/filesystems/fedora-arm \
          --disablerepo='*' --enablerepo=fedora install \
          systemd passwd yum fedora-release vim-minimal openssh-server procps-ng
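
To turn that installroot into the tarball, something along these lines works; the paths are taken from the yum command above and the import below:

tar -C /home/ichavero/lxcStuff/filesystems/fedora-arm -cf fedora-arm.tar .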



Once the tarball was created I loaded it into Docker using the docker import command:

$ cat fedora-arm.tar | docker import - fedora-arm
$ docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
fedora-arm             latest              34fed62416f0        3 minutes ago       490.8 MB
mysql                   latest              a950533b3019        3 weeks ago         235.6 MB
fedora                  latest              88b42ffd1f7c        8 weeks ago         373.7 MB
busybox                 latest              a9eb17255234        3 months ago        2.433 MB


We have to create a container in order to be able to create an image that we can upload to a repository. Run the container with docker run, execute bash and make the appropriate changes; in this case I'll just modify the /etc/issue file since this container is going to be a generic image.

$ docker run --name=fedora-20-arm -i -t fedora-arm   /usr/bin/bash
bash-4.2# vi /etc/issue
bash-4.2# cat /etc/issue
Fedora release 20 (Heisenbug) - by SotolitoLabs (imcsk8)
Kernel \r on an \m (\l)
bash-4.2# exit
exit
$ docker diff fedora-20-arm
A /.bash_history
C /etc
C /etc/issue



To save the state of the container in an image you issue docker commit:

$ docker commit fedora-20-arm imcsk8/fedora-20-arm
c120847615dc560d089ec84b5d194b9824af83c81231a2f2db8e7242116be210

ichavero@kickflip:~$ docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
imcsk8/fedora-20-arm   latest              c120847615dc        6 seconds ago       490.8 MB
fedora-arm             latest              34fed62416f0        55 minutes ago      490.8 MB
mysql                   latest              a950533b3019        3 weeks ago         235.6 MB
fedora                  latest              88b42ffd1f7c        8 weeks ago         373.7 MB
busybox                 latest              a9eb17255234        3 months ago        2.433 MB



I wanted my friends to enjoy this beautiful image, so I uploaded it to the Docker registry:

ichavero@kickflip:~$ docker push imcsk8/fedora-20-arm
The push refers to a repository [imcsk8/fedora-20-arm] (len: 1)
Sending image list
Pushing repository imcsk8/fedora-20-arm (1 tags)
34fed62416f0: Image successfully pushed 
c120847615dc: Image successfully pushed 
Pushing tag for rev [c120847615dc] on {https://cdn-registry-1.docker.io/v1/repositories/imcsk8/fedora-20-arm/tags/latest}


I created two more containers: one for MySQL and another for WordPress; this time I had to install the proper software in each one.

$ docker run --name=fedora-20-arm-mysql -i -t imcsk8/fedora-20-arm   /usr/bin/bash
bash-4.2# exit
$ docker run --name=fedora-20-arm-wordpress -i -t imcsk8/fedora-20-arm   /usr/bin/bash
bash-4.2# exit


This is just one way of creating them and it might not be the best one.
To prepare the MySQL container I took the entrypoint.sh from the official mysql container, to make things more Docker style.
First I had to copy the file, but given the version of Docker I had, I used a little trick that I learned when I used Linux-VServer: find the container filesystem and copy the files there (better than a KVM VM, right?).
Get the container ID:

$ docker inspect -f '{{.Id}}'  fedora-20-arm-mariadb
b78d09e751bbeb955cf07c7f0a834d76ce2334a555f13add7621506b14a76cdc


Then just copy the file:

$ sudo cp entrypoint.sh /var/lib/docker/aufs/mnt/b78d09e751bbeb955cf07c7f0a834d76ce2334a555f13add7621506b14a76cdc/.


Check the system:

$ docker start -a -i fedora-20-arm-mariadb
bash-4.2# ls -la /entrypoint.sh 
-rwxr-xr-x 1 root root 1289 Sep 23 19:56 /entrypoint.sh
bash-4.2# exit


Commit our changes to an image:

$ docker commit fedora-20-arm-mariadb imcsk8/fedora-20-arm-mariadb


References

2014/10/06 20:44 · Ivan Chavero
