Resource management and orchestration engine for distributed systems

Requirements

Supported development platforms

Linux or macOS

Additional software

VirtualBox: 5.x

Vagrant: 1.7.x

Note: Make sure that the Vagrant VirtualBox Guest plugin is installed: vagrant plugin install vagrant-vbguest

Note: If you are using VirtualBox 5.0, it's worth uncommenting the paravirtprovider setting in vagrant-settings.yaml for speed improvements:

paravirtprovider: kvm

For details see the Customizing vagrant-settings.yaml section.

Setup development env

Setup environment:

cd solar
vagrant up

Log in to the VM; the code is available in the /vagrant directory:

vagrant ssh
solar --help

Get SSH details for the running slave nodes (credentials vagrant/vagrant):

vagrant ssh-config

You can make/restore snapshots of the boxes (this is much faster than reprovisioning them) with the snapshotter.py script:

./snapshotter.py take -n my-snapshot
./snapshotter.py show
./snapshotter.py restore -n my-snapshot

snapshotter.py requires the Python module click to run.

  1. On Debian-based systems you can install it via sudo aptitude install python-click-cli,
  2. On Fedora 22 you can install it via sudo dnf install python-click,
  3. If you use virtualenv or a similar tool, you can install it with pip install click,
  4. If you don't have virtualenv and your operating system does not provide a package for it, then sudo pip install click.
  5. If you don't have pip, install it first and then run the command from step 4.

Solar usage

For now, all commands should be executed on the solar-dev machine from the /vagrant directory.

The basic flow is:

  1. Create some resources (see solar-resources/examples/openstack/openstack.py), connect them to each other, and place them on nodes.
  2. Run solar changes stage (this stages the changes)
  3. Run solar changes process (this prepares the orchestrator graph, returning a change UUID)
  4. Run solar orch run-once <change-uuid> (or solar orch run-once last to run the most recently created graph)
  5. Observe the progress of the orchestration with watch 'solar orch report <change-uuid>' (or watch 'solar orch report last').

A very simple cluster setup:

cd /vagrant

solar resource create nodes templates/nodes '{"count": 2}'
solar resource create mariadb_service resources/mariadb_service '{"image": "mariadb", "root_password": "mariadb", "port": 3306}'
solar resource create keystone_db resources/mariadb_db/ '{"db_name": "keystone_db", "login_user": "root"}'
solar resource create keystone_db_user resources/mariadb_user/ user_name=keystone user_password=keystone  # another valid format

solar connect node1 mariadb_service
solar connect node1 keystone_db
solar connect mariadb_service keystone_db '{"root_password": "login_password", "port": "login_port", "ip": "db_host"}'
# solar connect mariadb_service keystone_db_user 'root_password->login_password port->login_port'  # another valid format
solar connect keystone_db keystone_db_user

solar changes stage
solar changes process
# <uid>
solar orch run-once <uid> # or solar orch run-once last
watch 'solar orch report <uid>' # or solar orch report last

You can fiddle with the above configuration like this:

solar resource update keystone_db_user '{"user_password": "new_keystone_password"}'
solar resource update keystone_db_user user_password=new_keystone_password   # another valid format

solar changes stage
solar changes process
# <uid>
solar orch run-once <uid>

To get data for the resource bar (raw and pretty-JSON):

solar resource show --tag 'resources/bar'
solar resource show --json --tag 'resources/bar' | jq .
solar resource show --name 'resource_name'
solar resource show --name 'resource_name' --json | jq .

To clear all resources/connections:

solar resource clear_all

Show the connections/graph:

solar connections show
solar connections graph

You can also limit the graph to show only specific resources:

solar connections graph --start-with mariadb_service --end-with keystone_db

You can make sure that all input values are correct and mapped without duplicating your values with this command:

solar resource validate

Disconnect

solar disconnect mariadb_service node1

Tag a resource:

solar resource tag node1 test-tags
# Remove tags
solar resource tag node1 test-tag --delete

Low level API

Usage:

Creating resources:

from solar.core.resource import composer as cr
node1 = cr.create('node1', 'resources/ro_node/', 'rs/', {'ip':'10.0.0.3', 'ssh_key' : '/vagrant/tmp/keys/ssh_private', 'ssh_user':'vagrant'})[0]

node2 = cr.create('node2', 'resources/ro_node/', 'rs/', {'ip':'10.0.0.4', 'ssh_key' : '/vagrant/tmp/keys/ssh_private', 'ssh_user':'vagrant'})[0]

keystone_db_data = cr.create('mariadb_keystone_data', 'resources/data_container/', 'rs/', {'image' : 'mariadb', 'export_volumes' : ['/var/lib/mysql'], 'ip': '', 'ssh_user': '', 'ssh_key': ''}, connections={'ip' : 'node2.ip', 'ssh_key':'node2.ssh_key', 'ssh_user':'node2.ssh_user'})[0]

nova_db_data = cr.create('mariadb_nova_data', 'resources/data_container/', 'rs/', {'image' : 'mariadb', 'export_volumes' : ['/var/lib/mysql'], 'ip': '', 'ssh_user': '', 'ssh_key': ''}, connections={'ip' : 'node1.ip', 'ssh_key':'node1.ssh_key', 'ssh_user':'node1.ssh_user'})[0]

To make a connection after a resource is created, use signal.connect.
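
For example, a minimal sketch (assuming the connect helper lives in solar.core.signals and that, when no explicit mapping is given, inputs with matching names are connected automatically):

from solar.core import signals

# Connect node2's matching inputs (ip, ssh_key, ssh_user) to keystone_db_data.
# Pass a mapping dict as a third argument if the input names differ.
signals.connect(node2, keystone_db_data)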

To test notifications:

keystone_db_data.args    # displays node2 IP

node2.update({'ip': '10.0.0.5'})

keystone_db_data.args   # updated IP

If you close the Python shell you can load the resources like this:

from solar.core import resource
node1 = resource.load('rs/node1')

node2 = resource.load('rs/node2')

keystone_db_data = resource.load('rs/mariadb_keystone_data')

nova_db_data = resource.load('rs/mariadb_nova_data')

Connections are loaded automatically.

You can also load all resources at once:

from solar.core import resource
all_resources = resource.load_all('rs')

Dry run

The Solar CLI can show a dry run of the actions to be performed. To see what will happen when you run a Puppet action, for example, try this:

solar resource action keystone_puppet run -d

This should print out something like this:

EXECUTED:
73c6cb1cf7f6cdd38d04dd2d0a0729f8: (0, 'SSH RUN', ('sudo cat /tmp/puppet-modules/Puppetfile',), {})
3dd4d7773ce74187d5108ace0717ef29: (1, 'SSH SUDO', ('mv "1038cb062449340bdc4832138dca18cba75caaf8" "/tmp/puppet-modules/Puppetfile"',), {})
ae5ad2455fe2b02ba46b4b7727eff01a: (2, 'SSH RUN', ('sudo librarian-puppet install',), {})
208764fa257ed3159d1788f73c755f44: (3, 'SSH SUDO', ('puppet apply -vd /tmp/action.pp',), {})

By default every mocked command returns an empty string. If you want it to return something else (to check how the dry run would behave in a different situation), provide a mapping in JSON format, something along the lines of:

solar resource action keystone_puppet run -d -m "{\"73c\": \"mod 'openstack-keystone'\n\"}"

The above means that the return string of the first command (with hash 73c6c...) will be as specified in the mapping. Note that you don't have to specify the whole hash in the mapping, just a unique prefix of it. You also don't have to specify the whole return string: the dry run executor can read a file and return its contents instead; just use the > operator when specifying the hash:

solar resource action keystone_puppet run -d -m "{\"73c>\": \"./Puppetlabs-file\"}"

Resource compiling

You can compile all meta.yaml definitions into Python code with classes that derive from Resource. To do this, run:

solar resource compile_all

This generates the file resources_compiled.py in the main directory (do not commit this file to the repo). Then you can import classes from that file, create their instances, and assign values just as if they were normal properties. If your editor supports Python static checking, you will have autocompletion there too. An example of how to create a node with this:

import resources_compiled

node1 = resources_compiled.RoNodeResource('node1', None, {})
node1.ip = '10.0.0.3'
node1.ssh_key = '/vagrant/.vagrant/machines/solar-dev1/virtualbox/private_key'
node1.ssh_user = 'vagrant'

Higher-level API

There's also a higher-level API that allows you to write resource instances in a more functional way and, in particular, avoid for loops. Here's an example:

from solar import template

nodes = template.nodes_from('templates/riak_nodes')

riak_services = nodes.on_each(
    'resources/riak_node',
    {
        'riak_self_name': 'riak{num}',
        'riak_hostname': 'riak_server{num}.solar',
        'riak_name': 'riak{num}@riak_server{num}.solar',
    }
)

riak_master_service = riak_services.take(0)
riak_slave_services = riak_services.tail()

riak_master_service.connect_list(
    riak_slave_services,
    {
        'riak_name': 'join_to',
    }
)

For the full Riak example, please look at solar-resources/examples/riak/riaks-template.py.

Full documentation of individual functions is found in the solar/template.py file.

Customizing vagrant-settings.yaml

Solar ships with sane defaults in vagrant-settings.yaml_defaults. If you need to adjust them for your needs, e.g. changing resource allocation for the VirtualBox machines, just copy the file to vagrant-settings.yaml and make your modifications.
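
For example, from the repository root (the available keys are the ones already listed in vagrant-settings.yaml_defaults):

cp vagrant-settings.yaml_defaults vagrant-settings.yaml
# then edit vagrant-settings.yaml, e.g. to change the resource allocation of the VirtualBox machines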

Image based provisioning with Solar

  • In the vagrant-settings.yaml_defaults (or vagrant-settings.yaml) file, uncomment the preprovisioned: false line.
  • Run vagrant up; it will take some time because it builds the bootstrap and IBP images.
  • Now you can run provisioning: /vagrant/solar-resources/examples/provisioning/provision.sh (see the sketch below).
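
A possible end-to-end sequence (a sketch only; it assumes you have copied and edited vagrant-settings.yaml as described in the previous section):

# in vagrant-settings.yaml, uncomment: preprovisioned: false
vagrant up
vagrant ssh
/vagrant/solar-resources/examples/provisioning/provision.sh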