Kubernetes Helm Chart

The IM service and web interface can be installed on top of Kubernetes using Helm.

How to install the IM chart:

First add the GRyCAP repo:

$ helm repo add grycap https://grycap.github.io/helm-charts/

Then install the IM chart (with Helm v3):

$ helm install --namespace=im --create-namespace im grycap/IM
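
Once the chart is deployed, you can check that the IM pods are running in the im namespace created above (a quick sanity check, assuming kubectl is configured against the same cluster):

$ kubectl get pods -n im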

All the information about this chart is available at the IM chart README.

IM Service Installation

Prerequisites

IM needs at least Python 2.7 (Python 3.6 or higher recommended) to run, as well as the following libraries:

  • The RADL parser. (Since IM version 1.5.3, it requires RADL version 1.1.0 or later).

  • The TOSCA parser. A TOSCA YAML Spec 1.0 Parser.

  • paramiko, an SSH2 protocol library for Python (version 1.14 or later).

  • PyYAML, a YAML parser.

  • suds, a full-featured SOAP library.

  • netaddr, a Python library for representing and manipulating network addresses.

  • Requests, a Python library for accessing REST APIs.

Also, IM uses Ansible (2.4 or later) to configure the infrastructure nodes.

These components are usually available from the distribution repositories.
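
A quick way to verify the prerequisites on the host is to check the Ansible version and try importing the libraries (a sketch; yaml, paramiko, netaddr and requests are the standard import names of the packages listed above):

$ ansible --version
$ python3 -c "import yaml, paramiko, netaddr, requests; print('IM prerequisites found')"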

Finally, check the following values in the Ansible configuration file ansible.cfg (usually found in /etc/ansible):

[defaults]
transport  = smart
host_key_checking = False
nocolor = 1
become_user      = root
become_method    = sudo

[paramiko_connection]

record_host_keys=False

[ssh_connection]

# Only in systems with OpenSSH support to ControlPersist
ssh_args = -o ControlMaster=auto -o ControlPersist=900s
# In systems with older versions of OpenSSH (RHEL 6, CentOS 6, SLES 10 or SLES 11)
#ssh_args =
pipelining = True
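
To confirm which of these values Ansible is actually applying, you can dump the settings that differ from the defaults (available since Ansible 2.4):

$ ansible-config dump --only-changed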

Optional Packages

  • The Bottle framework is used for the REST API. It is typically available as the ‘python-bottle’ package.

  • The CherryPy web framework is needed for the REST API. It is typically available as the ‘python-cherrypy’ or ‘python-cherrypy3’ package. In newer versions (9.0 and later) the needed functionality has been moved to the cheroot library, which can be installed using pip.

  • apache-libcloud 3.0 or later is used in the LibCloud, OpenStack, EGI and GCE connectors.

  • boto 2.29.0 or later is used as the interface to Amazon EC2. It is available as the python-boto package in Debian based distributions. It can also be downloaded from the boto GitHub repository: download the file and copy the boto subdirectory into the IM install path.

  • pyOpenSSL is needed to secure the REST API with SSL certificates (see :confval:`REST_SSL`). pyOpenSSL can be installed using pip.

  • The Python interface to MySQL is needed to use a MySQL server as the IM data backend. It is typically available as the ‘python-mysqldb’ or ‘MySQL-python’ package. In case of using Python 3, use the PyMySQL package, available as ‘python3-pymysql’ on Debian systems or as the PyMySQL package in pip.

  • The Python interface to MongoDB is needed to use a MongoDB server as the IM data backend. It is typically available as the ‘python-pymongo’ package in most distributions or as the pymongo package in pip.

  • The Azure Python SDK is needed by the Azure connector. It is available as the ‘azure’ package in the pip repository.

  • The VMware vSphere API Python Bindings are needed by the vSphere connector. They are available as the ‘pyvmomi’ package in the pip repository.
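
Most of these optional packages can be installed with pip; install only the ones required by the features and connectors you plan to use, e.g. (package names as published on PyPI):

$ pip install cheroot pyOpenSSL apache-libcloud boto PyMySQL pymongo pyvmomi azure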

Installation

From Pip

First you need to install the pip tool and some packages needed to compile some of the IM requirements. To install them on Debian and Ubuntu based distributions, do:

$ apt update
$ apt install gcc python3-dev libffi-dev libssl-dev python3-pip sshpass python3-requests

In Red Hat based distributions (RHEL, CentOS, Amazon Linux, Oracle Linux, Fedora, etc.), do:

$ yum install epel-release
$ yum install which gcc python3-devel libffi-devel openssl-devel python3-pip sshpass default-libmysqlclient-dev

Then you only have to call the install command of the pip tool with the IM package:

$ pip install IM

You can also install a specific branch of the GitHub repository:

$ pip install git+https://github.com/grycap/im.git@master
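
If you prefer to keep IM and its dependencies isolated from the system Python packages, it can also be installed inside a virtual environment (a sketch; /opt/im is an arbitrary path):

$ python3 -m venv /opt/im
$ /opt/im/bin/pip install IM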

Pip will also install any missing prerequisites, so Ansible 2.4 or later will be installed on the system. Some of the optional packages are also installed; please check whether the IM features you need require any of the packages listed in the “Optional Packages” section.

You must also remember to modify the ansible.cfg file settings as specified in the “Prerequisites” section.

Configuration

If you want the IM Service to be started at boot time, do the following:

  1. Update the value of the variable IMDAEMON in the /etc/init.d/im file to the path where the IM im_service.py file is installed (e.g. /usr/local/im/im_service.py), or set the name of the script file (im_service.py) if the file is in the PATH (pip puts the im_service.py file in the PATH by default):

    $ sudo sed -i 's|IMDAEMON=.*|IMDAEMON=/usr/local/IM-0.1/im_service.py|' /etc/init.d/im
    
  2. Register the service.

To do the last step on Debian based distributions, execute:

$ sudo sysv-rc-conf im on

If the package ‘sysv-rc-conf’ is not available in your distribution, execute:

$ sudo update-rc.d im start 99 2 3 4 5 . stop 05 0 1 6 .

For Red Hat based distributions:

$ sudo chkconfig im on

Alternatively, it can be done manually:

$ ln -s /etc/init.d/im /etc/rc2.d/S99im
$ ln -s /etc/init.d/im /etc/rc3.d/S99im
$ ln -s /etc/init.d/im /etc/rc5.d/S99im
$ ln -s /etc/init.d/im /etc/rc1.d/K05im
$ ln -s /etc/init.d/im /etc/rc6.d/K05im

IM reads the configuration from $IM_PATH/etc/im.cfg or, if it is not available, from /etc/im/im.cfg. There is a template of im.cfg in the etc directory of the tarball. IM reads the values of the im section. The options are explained next.
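
As an illustration, a minimal im.cfg could look like the following (only an option referenced elsewhere in this document is shown as an example; check the template for the full list and default values):

[im]
# Illustrative value: whether to secure the REST API with SSL certificates (see REST_SSL)
REST_SSL = False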

Basic Options

Default Virtual Machine Options

Contextualization

XML-RPC API

REST API

OpenID Connect Options

Network Options

HA Mode Options

OpenNebula connector Options

The configuration values under the OpenNebula section:

Logging Configuration

IM uses the Python logging library (see the documentation). You have two options to configure it: use the configuration variables in the IM configuration file or use the file /etc/im/logging.conf.

The configuration variables are the following:

If you need to specify more advanced details of the logging configuration, you have to use the file /etc/im/logging.conf. For example, to set a syslogd server as the destination of the log messages:

[handler_fileHandler]
class=logging.handlers.SysLogHandler
level=INFO
formatter=simpleFormatter
args=(('<syslog_ip>', 514),)

[formatter_simpleFormatter]
format=%(asctime)s - %(hostname)s - %(name)s - %(levelname)s - %(message)s
datefmt=
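
Note that /etc/im/logging.conf is a standard Python logging fileConfig file, so the handler and formatter above must also be declared in the [loggers], [handlers] and [formatters] sections of the same file. A minimal sketch of that scaffolding (keep the logger names already present in the file shipped with IM):

[loggers]
keys=root

[handlers]
keys=fileHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=INFO
handlers=fileHandler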

Vault Configuration

From version 1.10.7 the IM service supports reading authorization data from a Vault server. These values are used by the REST API, enabling the use of the Bearer authentication header and getting all the credential values from the configured Vault server.

The Vault server must be configured with the JWT authentication method enabled, setting your OIDC issuer (e.g. using the EGI Check-in issuer) and setting im as the default role:

vault write auth/jwt/config \
   oidc_discovery_url="https://aai.egi.eu/oidc/" \
   default_role="im"

A KV (v1) secrets store must be enabled at the desired path. In this example the path credentials is used:

vault secrets enable -version=1 -path=credentials kv

Also a policy must be created to enable the users to manage only their own credentials:

vault policy write manage-imcreds - <<EOF
path "credentials/{{identity.entity.id}}" {
capabilities = [ "create", "read", "update", "delete", "list" ]
}
EOF

And finally, create the im role to assign the policy to the JWT users:

vault write auth/jwt/role/im - <<EOF
{
"role_type": "jwt",
"policies": ["manage-imcreds"],
"token_explicit_max_ttl": 60,
"user_claim": "sub",
"bound_claims": {
   "sub": "*"
},
"bound_claims_type": "glob"
}
EOF

This set of commands is only an example of how to configure the Vault server to be accessed by the IM. Read the Vault documentation for more details.

The authentication data must be stored using one item per line of the Authorization File, setting the id of the item as the key and the whole auth line (in JSON format) as the value. For example, an auth line like this:

id = one; type = OpenNebula; host = oneserver:2633; username = user; password = pass

must be stored in the Vault KV secret, setting one as the key and this content as the value:

{"id": "one", "type": "OpenNebula", "host": "oneserver:2633", "username": "user", "password": "pass"}

In all the auth lines where an access token is needed, it must not be set; the IM will replace it with the access token used to authenticate with the IM itself.

Virtual Machine Tags

These are the names of the tags that IM will add to the VMs with the username, infrastructure ID, URL of the IM service, and IM name. Comment them out or leave them empty not to set them.

IM in high availability mode

From version 1.5.0 the IM service can be launched in high availability (HA) mode using a set of IM instances behind an HAProxy load balancer. Currently only the REST API can be used in HA mode. This is an experimental feature and it is currently not intended to be used in a production installation.

This is an example of the HAProxy configuration file:

global
    tune.bufsize 131072
defaults
    timeout connect 600s
    timeout client 600s
    timeout server 600s

frontend http-frontend
    mode http
    bind *:8800
    default_backend imbackend

backend imbackend
    mode http
    balance roundrobin
    option httpchk GET /version
    stick-table type string len 32 size 30k expire 60m
    stick store-response hdr(InfID)
    acl inf_id path -m beg /infrastructures/
    stick on path,field(3,/) if inf_id
    server im-8801 10.0.0.1:8801 check
    server im-8802 10.0.0.1:8802 check
    ...

See more details of HAProxy configuration at HAProxy Documentation.

Also, the INF_CACHE_TIME variable of the IM config file must be set to a time in seconds lower than or equal to the time set in the stick-table expire value (in the example, 60m). So for this example INF_CACHE_TIME must be set to 3600 or less.
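
For instance, matching the 60m expiration above, the im section of im.cfg could include (a sketch):

[im]
# Must not exceed the HAProxy stick-table expire time (60m in the example above)
INF_CACHE_TIME = 3600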

Purge IM DB

The IM service does not remove deleted infrastructures from the DB for provenance purposes. In case you want to remove old deleted infrastructures from the DB to reduce its size, you can use the delete_old_infs script. It will delete from the DB all the infrastructures created before a specified date:

python delete_old_infs.py <date>