Merge branch 'master' into resources-migrate
commit d8ed61cfa4

docs/commit-log.md (new file, 58 lines)
# Commit log analysis

See here https://files.slack.com/files-pri/T03ACD12T-F04V4QC6E/2015-05-21_16.14.50.jpg for details.

We have some data storage (to decide: one global storage or a separate storage per resource? One global commit log or a separate one per resource?) -- call it DB for simplicity.

The user modifies some data of some resources K1, K2, H. This data is not stored immediately in the DB; instead it is stored in a separate place and queued for execution (we call this the 'Staged Log').

The modified data for a resource is represented as a diff of its inputs. So if the user adds a new resource and assigns an IP to it, the change is represented something like:

```
ip:
  from: None
  to: 10.20.0.2
```
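
A diff like this can be computed generically from the resource's old and new input dicts. A minimal sketch (the function name and shape are illustrative, not the project's actual API):

```python
def input_diff(old, new):
    """Return {input_name: {'from': old_value, 'to': new_value}} for changed inputs."""
    diff = {}
    for key in set(old) | set(new):
        before = old.get(key)   # None when the resource or input did not exist before
        after = new.get(key)
        if before != after:
            diff[key] = {'from': before, 'to': after}
    return diff

# A freshly added resource with one assigned IP:
print(input_diff({}, {'ip': '10.20.0.2'}))
# {'ip': {'from': None, 'to': '10.20.0.2'}}
```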

The user runs 'apply'. The orchestrator takes the modified resources and applies the appropriate actions in an appropriate order that it computes.

We think the 'appropriate action' can be inferred from each resource's diff. For example, if a resource is new and has an added IP, the action `run` can be inferred, because the previous state was `None` and the current state is something new. If, on the other hand, the previous state was some value `A` and the new state is some value `B`, the orchestrator decides the action to run is `update`. And if the previous state is some `A` and the new state is `None`, the action will be `remove`.
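
These three inference rules can be sketched in a few lines (illustrative only; the real orchestrator presumably works on richer state):

```python
def infer_action(diff):
    """Map a resource's input diff to an orchestrator action (sketch)."""
    froms = [d['from'] for d in diff.values()]
    tos = [d['to'] for d in diff.values()]
    if all(f is None for f in froms):
        return 'run'      # nothing existed before: a new resource
    if all(t is None for t in tos):
        return 'remove'   # everything went to None: resource deleted
    return 'update'       # some value A changed to some value B

assert infer_action({'ip': {'from': None, 'to': '10.20.0.2'}}) == 'run'
assert infer_action({'ip': {'from': 'A', 'to': 'B'}}) == 'update'
assert infer_action({'ip': {'from': 'A', 'to': None}}) == 'remove'
```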

The 'appropriate order' taken by the orchestrator can initially be just the data flow graph. We see a possibility of optimizing the number of actions the orchestrator takes, so that moving the Keystone service to another node can be simplified from 4 actions (update HAProxy without the removed Keystone, install Keystone on the new node, update HAProxy with the new Keystone, remove Keystone from the old node) to 3 actions (add Keystone to the new node, update HAProxy removing the old Keystone and adding the new one, remove Keystone from the old node).

After a resource action finishes, the new state is saved to the commit log and the data is updated in the DB.

We want to support rollbacks via the commit log. A rollback is done by replaying the commit log backwards.
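
Replaying backwards amounts to swapping each diff's `from` and `to` sides; a minimal sketch (names are illustrative):

```python
def reverse_diff(diff):
    """Swap 'from' and 'to' so that applying the result undoes the original commit."""
    return {key: {'from': d['to'], 'to': d['from']} for key, d in diff.items()}

commit = {'ip': {'from': None, 'to': '10.20.0.2'}}   # the original 'run' commit
print(reverse_diff(commit))
# {'ip': {'from': '10.20.0.2', 'to': None}}  -- which the rules above infer as 'remove'
```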

With separate commit logs per resource, we think a rollback could work like this: some resource `K` is rolled back by one commit, and the diff applied is the reverse of the diff of the commit being rolled back. We can then update other resources with this new data by analyzing the connections. In other words: we change the data in one resource according to the commit being rolled back, then trigger changes in the other connected resources, and then run orchestrator actions as described above.

With a single commit log for all resources: is it sufficient to just roll back a commit, or do we need to trigger changes in connected resources too? In a global commit log we have the commits ordered as they were run by the orchestrator.

From an analysis of resource removal, we think we need to save connection data in each commit -- otherwise, when we roll back a resource removal, we wouldn't know how to restore its connections to other resources.

Committing after every action finished on a resource produces many commits per single user 'apply'. To let the user revert the whole action, and not just individual commits, we have an idea of 'tagging' a group of commits with an action id.
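
The tagging idea can be sketched with an in-memory staged log and commit log (all names here are hypothetical, not the project's API):

```python
import itertools

_action_ids = itertools.count(1)  # one fresh id per user 'apply'

def commit_all(staged, commit_log):
    """Flush the staged log as commits that all share one action id."""
    action_id = next(_action_ids)
    for resource_name, diff in staged:
        commit_log.append({'action_id': action_id,
                           'resource': resource_name,
                           'diff': diff})
    return action_id

def commits_to_revert(commit_log, action_id):
    """All commits of one 'apply', newest first, ready to be rolled back."""
    return [c for c in reversed(commit_log) if c['action_id'] == action_id]

log = []
first = commit_all([('keystone1', {'ip': {'from': None, 'to': '10.0.0.5'}})], log)
commit_all([('haproxy', {'ports': {'from': [], 'to': [5000]}})], log)
print([c['resource'] for c in commits_to_revert(log, first)])
# ['keystone1']
```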

example.py
@@ -1,5 +1,6 @@
 import shutil
 import os
+import time

 from solar.core import resource
 from solar.core import signals
@@ -13,17 +14,16 @@ os.mkdir('rs')

 node1 = resource.create('node1', 'resources/ro_node/', 'rs/', {'ip':'10.0.0.3', 'ssh_key' : '/vagrant/tmp/keys/ssh_private', 'ssh_user':'vagrant'})
 node2 = resource.create('node2', 'resources/ro_node/', 'rs/', {'ip':'10.0.0.4', 'ssh_key' : '/vagrant/tmp/keys/ssh_private', 'ssh_user':'vagrant'})
-node3 = resource.create('node3', 'resources/ro_node/', 'rs/', {'ip':'10.0.0.5', 'ssh_key' : '/vagrant/tmp/keys/ssh_private', 'ssh_user':'vagrant'})

 mariadb_service1 = resource.create('mariadb_service1', 'resources/mariadb_service', 'rs/', {'image':'mariadb', 'root_password' : 'mariadb', 'port' : '3306', 'ip': '', 'ssh_user': '', 'ssh_key': ''})
 keystone_db = resource.create('keystone_db', 'resources/mariadb_db/', 'rs/', {'db_name':'keystone_db', 'login_password':'', 'login_user':'root', 'login_port': '', 'ip':'', 'ssh_user':'', 'ssh_key':''})
 keystone_db_user = resource.create('keystone_db_user', 'resources/mariadb_user/', 'rs/', {'new_user_name' : 'keystone', 'new_user_password' : 'keystone', 'db_name':'', 'login_password':'', 'login_user':'root', 'login_port': '', 'ip':'', 'ssh_user':'', 'ssh_key':''})

 keystone_config1 = resource.create('keystone_config1', 'resources/keystone_config/', 'rs/', {'config_dir' : '/etc/solar/keystone', 'ip':'', 'ssh_user':'', 'ssh_key':'', 'admin_token':'admin', 'db_password':'', 'db_name':'', 'db_user':'', 'db_host':''})
-keystone_service1 = resource.create('keystone_service1', 'resources/keystone_service/', 'rs/', {'port':'5000', 'admin_port':'35357', 'ip':'', 'ssh_key':'', 'ssh_user':'', 'config_dir':'', 'config_dir':''})
+keystone_service1 = resource.create('keystone_service1', 'resources/keystone_service/', 'rs/', {'port':'5001', 'admin_port':'35357', 'ip':'', 'ssh_key':'', 'ssh_user':'', 'config_dir':'', 'config_dir':''})

 keystone_config2 = resource.create('keystone_config2', 'resources/keystone_config/', 'rs/', {'config_dir' : '/etc/solar/keystone', 'ip':'', 'ssh_user':'', 'ssh_key':'', 'admin_token':'admin', 'db_password':'', 'db_name':'', 'db_user':'', 'db_host':''})
-keystone_service2 = resource.create('keystone_service2', 'resources/keystone_service/', 'rs/', {'port':'5000', 'admin_port':'35357', 'ip':'', 'ssh_key':'', 'ssh_user':'', 'config_dir':'', 'config_dir':''})
+keystone_service2 = resource.create('keystone_service2', 'resources/keystone_service/', 'rs/', {'port':'5002', 'admin_port':'35357', 'ip':'', 'ssh_key':'', 'ssh_user':'', 'config_dir':'', 'config_dir':''})


 haproxy_keystone_config = resource.create('haproxy_keystone1_config', 'resources/haproxy_config/', 'rs/', {'name':'keystone_config', 'listen_port':'5000', 'servers':[], 'ports':[]})
@@ -62,11 +62,12 @@ signals.connect(node2, keystone_service2)
 signals.connect(keystone_config2, keystone_service2, {'config_dir': 'config_dir'})

 signals.connect(keystone_service1, haproxy_keystone_config, {'ip':'servers', 'port':'ports'})
+signals.connect(keystone_service2, haproxy_keystone_config, {'ip':'servers', 'port':'ports'})

-signals.connect(node1, haproxy_config)
+signals.connect(node2, haproxy_config)
 signals.connect(haproxy_keystone_config, haproxy_config, {'listen_port': 'listen_ports', 'name':'configs_names', 'ports' : 'configs_ports', 'servers':'configs'})

-signals.connect(node1, haproxy_service)
+signals.connect(node2, haproxy_service)
 signals.connect(haproxy_config, haproxy_service, {'listen_ports':'ports', 'config_dir':'host_binds'})


@@ -74,19 +75,21 @@ signals.connect(haproxy_config, haproxy_service, {'listen_ports':'ports', 'confi
 from solar.core import actions

 actions.resource_action(mariadb_service1, 'run')
+time.sleep(10)
 actions.resource_action(keystone_db, 'run')
 actions.resource_action(keystone_db_user, 'run')
 actions.resource_action(keystone_config1, 'run')
 actions.resource_action(keystone_service1, 'run')
+actions.resource_action(keystone_service2, 'run')
 actions.resource_action(haproxy_config, 'run')
 actions.resource_action(haproxy_service, 'run')


 #remove
-actions.resource_action(haproxy_service, 'remove')
-actions.resource_action(haproxy_config, 'remove')
-actions.resource_action(keystone_service1, 'remove')
-actions.resource_action(keystone_config1, 'remove')
-actions.resource_action(keystone_db_user, 'remove')
-actions.resource_action(keystone_db, 'remove')
-actions.resource_action(mariadb_service1, 'remove')
+#actions.resource_action(haproxy_service, 'remove')
+#actions.resource_action(haproxy_config, 'remove')
+#actions.resource_action(keystone_service1, 'remove')
+#actions.resource_action(keystone_config1, 'remove')
+#actions.resource_action(keystone_db_user, 'remove')
+#actions.resource_action(keystone_db, 'remove')
+#actions.resource_action(mariadb_service1, 'remove')

@@ -2,3 +2,4 @@ click==4.0
 jinja2==2.7.3
 networkx==1.9.1
 PyYAML==3.11
+jsonschema==2.4.0

@@ -3,5 +3,11 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    type: str!
+    value:
   image:
-  export_volumes:
+    type: str!
+    value:
+  export_volumes:
+    type: str!
+    value:

@@ -3,13 +3,23 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: str!
+    value:
   image:
+    schema: str!
+    value:
   ports:
+    schema: [int]
+    value: []
   host_binds:
+    schema: [int]
+    value: []
   volume_binds:
+    schema: [int]
+    value: []
   ssh_user:
+    schema: str!
+    value: []
   ssh_key:
-input-types:
-  ports:
-  host_binds: list
-  volume_binds: list
+    schema: str!
+    value: []

@@ -2,4 +2,6 @@ id: file
 handler: shell
 version: 1.0.0
 input:
-  path: /tmp/test_file
+  path:
+    schema: str!
+    value: /tmp/test_file

@@ -3,15 +3,26 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: int!
+    value:
-  config_dir: {src: /etc/solar/haproxy, dst: /etc/haproxy}
+  config_dir:
+    schema: {src: str!, dst: str!}
+    value: {src: /etc/solar/haproxy, dst: /etc/haproxy}
   listen_ports:
+    schema: [int]
+    value: []
   configs:
+    schema: [[str]]
+    value: []
   configs_names:
+    schema: [str]
+    value: []
   configs_ports:
+    schema: [[int]]
+    value: []
   ssh_user:
+    schema: str!
+    value:
   ssh_key:
-input-types:
-  listen_ports: list
-  configs: list
-  configs_names: list
-  configs_ports: list
+    schema: str!
+    value:

@@ -3,9 +3,14 @@ handler: none
 version: 1.0.0
 input:
   name:
+    schema: str!
+    value:
   listen_port:
+    schema: int!
+    value:
   ports:
+    schema: [int]
+    value:
   servers:
-input-types:
-  ports: list
-  servers: list
+    schema: [str]
+    value:

@@ -3,11 +3,29 @@ handler: ansible
 version: 1.0.0
 input:
   config_dir:
+    schema: str!
+    value:
   admin_token:
+    schema: str!
+    value:
   db_user:
+    schema: str!
+    value:
   db_password:
+    schema: str!
+    value:
   db_host:
+    schema: str!
+    value:
   db_name:
+    schema: str!
+    value:
   ip:
+    schema: str!
+    value:
   ssh_key:
+    schema: str!
+    value:
   ssh_user:
+    schema: str!
+    value:

@@ -2,10 +2,24 @@ id: keystone
 handler: ansible
 version: 1.0.0
 input:
-  image: kollaglue/centos-rdo-keystone
+  image:
+    schema: str!
+    value: kollaglue/centos-rdo-keystone
   config_dir:
+    schema: str!
+    value:
   port:
+    schema: int!
+    value:
   admin_port:
+    schema: int!
+    value:
   ip:
+    schema: str!
+    value:
   ssh_key:
+    schema: str!
+    value:
   ssh_user:
+    schema: str!
+    value:

@@ -3,12 +3,32 @@ handler: ansible
 version: 1.0.0
 input:
   keystone_host:
+    schema: str!
+    value:
   keystone_port:
+    schema: int!
+    value:
   login_user:
+    schema: str!
+    value:
   login_token:
+    schema: str!
+    value:
   user_name:
+    schema: str!
+    value:
   user_password:
+    schema: str!
+    value:
   tenant_name:
+    schema: str!
+    value:
   ip:
+    schema: str!
+    value:
   ssh_key:
+    schema: str!
+    value:
   ssh_user:
+    schema: str!
+    value:

@@ -6,9 +6,23 @@ actions:
   remove: remove.yml
 input:
   db_name:
+    schema: str!
+    value:
   login_password:
+    schema: str!
+    value:
   login_port:
+    schema: int!
+    value:
   login_user:
-  ip:
-  ssh_key:
-  ssh_user:
+    schema: str!
+    value:
+  ip:
+    schema: str!
+    value:
+  ssh_key:
+    schema: str!
+    value:
+  ssh_user:
+    schema: str!
+    value:

@@ -3,8 +3,20 @@ handler: ansible
 version: 1.0.0
 input:
   image:
-  root_password:
-  port:
-  ip:
-  ssh_key:
-  ssh_user:
+    schema: str!
+    value:
+  root_password:
+    schema: str!
+    value:
+  port:
+    schema: str!
+    value:
+  ip:
+    schema: int!
+    value:
+  ssh_key:
+    schema: str!
+    value:
+  ssh_user:
+    schema: str!
+    value:

@@ -6,11 +6,29 @@ actions:
   remove: remove.yml
 input:
   new_user_password:
+    schema: str!
+    value:
   new_user_name:
+    schema: str!
+    value:
   db_name:
+    schema: str!
+    value:
   login_password:
+    schema: str!
+    value:
   login_port:
+    schema: int!
+    value:
   login_user:
-  ip:
-  ssh_key:
-  ssh_user:
+    schema: str!
+    value:
+  ip:
+    schema: str!
+    value:
+  ssh_key:
+    schema: str!
+    value:
+  ssh_user:
+    schema: str!
+    value:

@@ -3,5 +3,11 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: str!
+    value:
-  port: 8774
+  port:
+    schema: int!
+    value: 8774
   image: # TODO
+    schema: str!
+    value:
|
@ -4,5 +4,11 @@ version: 1.0.0
|
|||||||
actions:
|
actions:
|
||||||
input:
|
input:
|
||||||
ip:
|
ip:
|
||||||
|
schema: str!
|
||||||
|
value:
|
||||||
ssh_key:
|
ssh_key:
|
||||||
|
schema: str!
|
||||||
|
value:
|
||||||
ssh_user:
|
ssh_user:
|
||||||
|
schema: str!
|
||||||
|
value:
|
||||||
|

@@ -17,8 +17,9 @@ fi

 pip install -r requirements.txt --download-cache=/tmp/$JOB_NAME

-pushd x
+pushd solar/solar

-PYTHONPATH=$WORKSPACE CONFIG_FILE=$CONFIG_FILE python test/test_signals.py
+PYTHONPATH=$WORKSPACE/solar CONFIG_FILE=$CONFIG_FILE python test/test_signals.py
+PYTHONPATH=$WORKSPACE/solar CONFIG_FILE=$CONFIG_FILE python test/test_validation.py

 popd

@@ -13,6 +13,7 @@ from solar.core import db
 from solar.core import observer
 from solar.core import signals
 from solar.core import utils
+from solar.core import validation


 class Resource(object):

@@ -21,14 +22,12 @@ class Resource(object):
         self.base_dir = base_dir
         self.metadata = metadata
         self.actions = metadata['actions'].keys() if metadata['actions'] else None
-        self.requires = metadata['input'].keys()
-        self._validate_args(args, metadata['input'])
         self.args = {}
         for arg_name, arg_value in args.items():
-            type_ = metadata.get('input-types', {}).get(arg_name) or 'simple'
+            metadata_arg = self.metadata['input'][arg_name]
+            type_ = validation.schema_input_type(metadata_arg.get('schema', 'str'))
+
             self.args[arg_name] = observer.create(type_, self, arg_name, arg_value)
-        self.metadata['input'] = args
-        self.input_types = metadata.get('input-types', {})
         self.changed = []
         self.tags = tags or []

@@ -95,22 +94,13 @@ class Resource(object):
         else:
             raise Exception('Uuups, action is not available')

-    def _validate_args(self, args, inputs):
-        for req in self.requires:
-            if req not in args:
-                # If metadata input is filled with a value, use it as default
-                # and don't report an error
-                if inputs.get(req):
-                    args[req] = inputs[req]
-                else:
-                    raise Exception('Requirement `{0}` is missing in args'.format(req))
-
     # TODO: versioning
     def save(self):
         metadata = copy.deepcopy(self.metadata)

         metadata['tags'] = self.tags
-        metadata['input'] = self.args_dict()
+        for k, v in self.args_dict().items():
+            metadata['input'][k]['value'] = v

         meta_file = os.path.join(self.base_dir, 'meta.yaml')
         with open(meta_file, 'w') as f:
|
@ -83,8 +83,8 @@ def guess_mapping(emitter, receiver):
|
|||||||
:return:
|
:return:
|
||||||
"""
|
"""
|
||||||
guessed = {}
|
guessed = {}
|
||||||
for key in emitter.requires:
|
for key in emitter.args:
|
||||||
if key in receiver.requires:
|
if key in receiver.args:
|
||||||
guessed[key] = key
|
guessed[key] = key
|
||||||
|
|
||||||
return guessed
|
return guessed
|
||||||
|

solar/solar/core/validation.py (new file, 115 lines)

from jsonschema import validate, ValidationError, SchemaError


def schema_input_type(schema):
    """Input type from schema

    :param schema:
    :return: simple/list
    """
    if isinstance(schema, list):
        return 'list'

    return 'simple'


def _construct_jsonschema(schema, definition_base=''):
    """Construct jsonschema from our metadata input schema.

    :param schema:
    :return:
    """
    if schema == 'str':
        return {'type': 'string'}, {}

    if schema == 'str!':
        return {'type': 'string', 'minLength': 1}, {}

    if schema == 'int' or schema == 'int!':
        return {'type': 'number'}, {}

    if isinstance(schema, list):
        items, definitions = _construct_jsonschema(schema[0], definition_base=definition_base)

        return {
            'type': 'array',
            'items': items,
        }, definitions

    if isinstance(schema, dict):
        properties = {}
        definitions = {}

        for k, v in schema.items():
            if isinstance(v, dict) or isinstance(v, list):
                key = '{}_{}'.format(definition_base, k)
                properties[k] = {'$ref': '#/definitions/{}'.format(key)}
                definitions[key], new_definitions = _construct_jsonschema(v, definition_base=key)
            else:
                properties[k], new_definitions = _construct_jsonschema(v, definition_base=definition_base)

            definitions.update(new_definitions)

        required = [k for k, v in schema.items() if
                    isinstance(v, basestring) and v.endswith('!')]

        ret = {
            'type': 'object',
            'properties': properties,
        }

        if required:
            ret['required'] = required

        return ret, definitions


def construct_jsonschema(schema):
    jsonschema, definitions = _construct_jsonschema(schema)

    jsonschema['definitions'] = definitions

    return jsonschema


def validate_input(value, jsonschema=None, schema=None):
    """Validate single input according to schema.

    :param value: Value to be validated
    :param jsonschema: Dict in jsonschema format
    :param schema: Our custom, simplified schema
    :return: list with errors
    """
    if jsonschema is None:
        jsonschema = construct_jsonschema(schema)
    try:
        validate(value, jsonschema)
    except ValidationError as e:
        return [e.message]
    except:
        print 'jsonschema', jsonschema
        print 'value', value
        raise


def validate_resource(r):
    """Check if resource inputs correspond to schema.

    :param r: Resource instance
    :return: dict, keys are input names, value is array with error.
    """
    ret = {}

    input_schemas = r.metadata['input']
    args = r.args_dict()

    for input_name, input_definition in input_schemas.items():
        errors = validate_input(
            args.get(input_name),
            jsonschema=input_definition.get('jsonschema'),
            schema=input_definition.get('schema')
        )
        if errors:
            ret[input_name] = errors

    return ret
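
For reference, the schema mini-language above maps `str`/`str!`/`int`/`int!` scalars and `[...]` lists onto jsonschema fragments, with a trailing `!` marking a required, non-empty value. A Python 3 re-sketch of just the scalar and list cases (the committed module is Python 2 and also handles nested dicts via `$ref` definitions):

```python
def construct_scalar_or_list(schema):
    """Map the simplified schema notation to a jsonschema fragment (sketch)."""
    if schema == 'str':
        return {'type': 'string'}
    if schema == 'str!':
        return {'type': 'string', 'minLength': 1}   # '!' forbids the empty string
    if schema in ('int', 'int!'):
        return {'type': 'number'}
    if isinstance(schema, list):                    # e.g. [int] -> array of numbers
        return {'type': 'array', 'items': construct_scalar_or_list(schema[0])}
    raise ValueError('unsupported schema: %r' % (schema,))

print(construct_scalar_or_list([['int']]))
# {'type': 'array', 'items': {'type': 'array', 'items': {'type': 'number'}}}
```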

@@ -4,9 +4,9 @@ import tempfile
 import unittest
 import yaml

-from x import db
-from x import resource as xr
-from x import signals as xs
+from solar.core import db
+from solar.core import resource as xr
+from solar.core import signals as xs


 class BaseResourceTest(unittest.TestCase):

@@ -2,7 +2,7 @@ import unittest

 import base

-from x import signals as xs
+from solar.core import signals as xs


 class TestBaseInput(base.BaseResourceTest):

@@ -12,7 +12,9 @@ id: sample
 handler: ansible
 version: 1.0.0
 input:
-  values: {}
+  values:
+    schema: {a: int, b: int}
+    value: {}
 """)

         sample1 = self.create_resource(

@@ -63,7 +65,11 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: string
+    value:
   port:
+    schema: int
+    value:
 """)
         sample_ip_meta_dir = self.make_resource_meta("""
 id: sample-ip

@@ -71,6 +77,8 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: string
+    value:
 """)
         sample_port_meta_dir = self.make_resource_meta("""
 id: sample-port

@@ -78,6 +86,8 @@ handler: ansible
 version: 1.0.0
 input:
   port:
+    schema: int
+    value:
 """)

         sample = self.create_resource(

@@ -109,6 +119,8 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: string
+    value:
 """)

         sample = self.create_resource(

@@ -149,6 +161,8 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: str
+    value:
 """)

         sample1 = self.create_resource(

@@ -171,6 +185,8 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: str
+    value:
 """)
         list_input_single_meta_dir = self.make_resource_meta("""
 id: list-input-single

@@ -178,8 +194,8 @@ handler: ansible
 version: 1.0.0
 input:
   ips:
-input-types:
-  ips: list
+    schema: [str]
+    value: []
 """)

         sample1 = self.create_resource(

@@ -248,7 +264,11 @@ handler: ansible
 version: 1.0.0
 input:
   ip:
+    schema: str
+    value:
   port:
+    schema: int
+    value:
 """)
         list_input_multi_meta_dir = self.make_resource_meta("""
 id: list-input-multi

@@ -256,10 +276,11 @@ handler: ansible
 version: 1.0.0
 input:
   ips:
+    schema: [str]
+    value:
   ports:
-input-types:
-  ips: list
-  ports: list
+    schema: [int]
+    value:
 """)

         sample1 = self.create_resource(

solar/solar/test/test_validation.py (new file, 174 lines; listing truncated here)

import unittest

from solar.test import base

from solar.core import validation as sv


class TestInputValidation(base.BaseResourceTest):
    def test_input_str_type(self):
        sample_meta_dir = self.make_resource_meta("""
id: sample
handler: ansible
version: 1.0.0
input:
  value:
    schema: str
    value:
  value-required:
    schema: str!
    value:
        """)

        r = self.create_resource(
            'r1', sample_meta_dir, {'value': 'x', 'value-required': 'y'}
        )
        errors = sv.validate_resource(r)
        self.assertEqual(errors, {})

        r = self.create_resource(
            'r2', sample_meta_dir, {'value': 1, 'value-required': 'y'}
        )
        errors = sv.validate_resource(r)
        self.assertListEqual(errors.keys(), ['value'])

        r = self.create_resource(
            'r3', sample_meta_dir, {'value': ''}
        )
        errors = sv.validate_resource(r)
        self.assertListEqual(errors.keys(), ['value-required'])

    def test_input_int_type(self):
        sample_meta_dir = self.make_resource_meta("""
id: sample
handler: ansible
version: 1.0.0
input:
  value:
    schema: int
    value:
  value-required:
    schema: int!
    value:
        """)

        r = self.create_resource(
            'r1', sample_meta_dir, {'value': 1, 'value-required': 2}
        )
        errors = sv.validate_resource(r)
        self.assertEqual(errors, {})

        r = self.create_resource(
            'r2', sample_meta_dir, {'value': 'x', 'value-required': 2}
        )
        errors = sv.validate_resource(r)
        self.assertListEqual(errors.keys(), ['value'])

        r = self.create_resource(
            'r3', sample_meta_dir, {'value': 1}
        )
        errors = sv.validate_resource(r)
        self.assertListEqual(errors.keys(), ['value-required'])
|
||||||
|
|
||||||
|
def test_input_dict_type(self):
|
||||||
|
sample_meta_dir = self.make_resource_meta("""
|
||||||
|
id: sample
|
||||||
|
handler: ansible
|
||||||
|
version: 1.0.0
|
||||||
|
input:
|
||||||
|
values:
|
||||||
|
schema: {a: int!, b: int}
|
||||||
|
value: {}
|
||||||
|
""")
|
||||||
|
|
||||||
|
r = self.create_resource(
|
||||||
|
'r', sample_meta_dir, {'values': {'a': 1, 'b': 2}}
|
||||||
|
)
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertEqual(errors, {})
|
||||||
|
|
||||||
|
r.update({'values': None})
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertListEqual(errors.keys(), ['values'])
|
||||||
|
|
||||||
|
r.update({'values': {'a': 1, 'c': 3}})
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertEqual(errors, {})
|
||||||
|
|
||||||
|
r = self.create_resource(
|
||||||
|
'r1', sample_meta_dir, {'values': {'b': 2}}
|
||||||
|
)
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertListEqual(errors.keys(), ['values'])
|
||||||
|
|
||||||
|
def test_complex_input(self):
|
||||||
|
sample_meta_dir = self.make_resource_meta("""
|
||||||
|
id: sample
|
||||||
|
handler: ansible
|
||||||
|
version: 1.0.0
|
||||||
|
input:
|
||||||
|
values:
|
||||||
|
schema: {l: [{a: int}]}
|
||||||
|
value: {l: [{a: 1}]}
|
||||||
|
""")
|
||||||
|
|
||||||
|
r = self.create_resource(
|
||||||
|
'r', sample_meta_dir, {
|
||||||
|
'values': {
|
||||||
|
'l': [{'a': 1}],
|
||||||
|
}
|
||||||
|
}
|
||||||
|
)
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertEqual(errors, {})
|
||||||
|
|
||||||
|
r.update({
|
||||||
|
'values': {
|
||||||
|
'l': [{'a': 'x'}],
|
||||||
|
}
|
||||||
|
})
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertListEqual(errors.keys(), ['values'])
|
||||||
|
|
||||||
|
r.update({'values': {'l': [{'a': 1, 'c': 3}]}})
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertEqual(errors, {})
|
||||||
|
|
||||||
|
def test_more_complex_input(self):
|
||||||
|
sample_meta_dir = self.make_resource_meta("""
|
||||||
|
id: sample
|
||||||
|
handler: ansible
|
||||||
|
version: 1.0.0
|
||||||
|
input:
|
||||||
|
values:
|
||||||
|
schema: {l: [{a: int}], d: {x: [int]}}
|
||||||
|
value: {l: [{a: 1}], d: {x: [1, 2]}}
|
||||||
|
""")
|
||||||
|
|
||||||
|
r = self.create_resource(
|
||||||
|
'r', sample_meta_dir, {
|
||||||
|
'values': {
|
||||||
|
'l': [{'a': 1}],
|
||||||
|
'd': {'x': [1, 2]}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
)
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertEqual(errors, {})
|
||||||
|
|
||||||
|
r.update({
|
||||||
|
'values': {
|
||||||
|
'l': [{'a': 1}],
|
||||||
|
'd': []
|
||||||
|
}
|
||||||
|
})
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertListEqual(errors.keys(), ['values'])
|
||||||
|
|
||||||
|
r.update({'values': {'a': 1, 'c': 3}})
|
||||||
|
errors = sv.validate_resource(r)
|
||||||
|
self.assertEqual(errors, {})
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
unittest.main()
|
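The schema notation these tests exercise, `str`/`int` scalars, a trailing `!` marking a required value, and nested lists and dicts, can be summarized with a small standalone sketch. This is not solar's actual `validation` module, only an illustration of the semantics the assertions above encode (extra keys such as `c` are tolerated, missing required keys are reported):

```python
def validate(schema, value, path=''):
    """Return a list of error paths; an empty list means the value is valid."""
    errors = []
    if isinstance(schema, str):
        required = schema.endswith('!')
        expected = {'str': str, 'int': int}[schema.rstrip('!')]
        if value in (None, ''):
            # empty values only fail when the schema is marked required
            if required:
                errors.append(path or '<root>')
        elif not isinstance(value, expected):
            errors.append(path or '<root>')
    elif isinstance(schema, list):
        # a schema like [str] or [{a: int}]: every element matches schema[0]
        if not isinstance(value, list):
            errors.append(path or '<root>')
        else:
            for i, item in enumerate(value):
                errors.extend(validate(schema[0], item, '%s[%d]' % (path, i)))
    elif isinstance(schema, dict):
        # only keys named in the schema are checked; unknown keys are ignored
        if not isinstance(value, dict):
            errors.append(path or '<root>')
        else:
            for key, subschema in schema.items():
                subpath = '%s.%s' % (path, key) if path else key
                errors.extend(validate(subschema, value.get(key), subpath))
    return errors
```

Under these assumed semantics, `validate({'a': 'int!', 'b': 'int'}, {'b': 2})` reports `['a']`, matching the `test_input_dict_type` case where only `b` is supplied.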
```diff
@@ -1,5 +1,7 @@
 # TODO

+- grammar connections fuzzy matching algorithm (for example: type 'login' joins to type 'login' irrespective of names of both inputs)
+- resource connections JS frontend (?)
 - store all resource configurations somewhere globally (this is required to
   correctly perform an update on one resource and bubble down to all others)
 - config templates
@@ -9,6 +11,7 @@
   when some image is unused to conserve space

 # DONE
+- CI
 - Deploy HAProxy, Keystone and MariaDB
 - ansible handler (loles)
 - tags are kept in resource meta file (pkaminski)
```