README cleanup and initial documentation dump.

This commit is contained in:
Ryan Petrello 2015-04-02 14:11:24 -04:00
parent a9277f23c7
commit 8eccc0146c
15 changed files with 1132 additions and 43 deletions

README.md

@ -1,6 +1,14 @@
# Akanda
[akanda.io](https://akanda.io)
A set of Layer 3 plus Services for OpenStack.
Akanda is the only open source network virtualization solution built by
OpenStack operators for real OpenStack clouds. Originally developed by
[DreamHost](http://dreamhost.com) for their OpenStack-based public cloud,
[DreamCompute](http://dreamhost.com/cloud/dreamcompute), Akanda eliminates the
need for complex SDN controllers, overlays, and multiple plugins by providing
a simple integrated networking stack (routing, firewall, and load balancing via
a virtual software router) for connecting and securing multi-tenant OpenStack
environments.
----
@ -11,32 +19,24 @@ A set of Layer 3 plus Services for OpenStack.
The code for the Akanda project lives in several separate repositories to ease
packaging and management:
* [Akanda Appliance](https://github.com/dreamhost/akanda-appliance)
Supporting software for the Akanda Software Router appliance, which is a
service VM running Linux and IPTables for providing L3+ services in a
virtualized network environment. This includes a REST API for managing the
appliance.
* [Akanda Rug](https://github.com/akanda/akanda-rug) - Orchestration
service for managing the creation, configuration, and health of Akanda
Software Routers in an OpenStack cloud.
* [Akanda Neutron](https://github.com/dreamhost/akanda-quantum) - User-Facing
REST service implemented as OpenStack Neutron API Extensions. Additionally,
subclasses of the several Neutron plugins and supporting code.
* [Akanda Appliance](https://github.com/akanda/akanda-appliance)
Supporting software for the Akanda Software Router appliance, which is
a Linux-based service VM that provides routing and L3+ services in
a virtualized network environment. This includes a REST API for managing
the appliance via the `Akanda Rug` orchestration service.
* [Akanda Neutron](https://github.com/akanda/akanda-neutron) -
Ancillary subclasses of several OpenStack Neutron plugins and supporting code.
* [Akanda Nova](https://github.com/dreamhost/akanda-nova) - Extensions to
OpenStack Nova supporting the creation and management of Akanda Software
Router appliances.
* [Akanda Ceilometer](https://github.com/dreamhost/akanda-ceilometer) -
Integration with OpenStack Ceilometer for metering of activity inside of
Akanda Software Routers.
* [Akanda Horizon](https://github.com/dreamhost/akanda-horizon) - OpenStack
Horizon extensions to enable the management of Akanda Software Routers.
* [Akanda Rug](https://github.com/dreamhost/akanda-rug) - Orchestration
service for managing the creation, configuration, and health of Akanda
Software Routers in an OpenStack cloud.
As such, this repository focuses on project overview and documentation.
As such, *this* repository focuses on project overview and documentation.
### The Name
@ -48,30 +48,15 @@ clearly and with a bevy of excellent synonyms by using the Sanskrit word
अखण्ड (akhaNDa) which has such lovely connotations as "non-stop," "undivided,"
"entire," "whole," and most importantly, "**not broken**."
## The Akanda REST APIs
## Get Involved
Akanda comes with two REST APIs:
Additional details and documentation on Akanda can be found at
[akanda.io](http://akanda.io) and [docs.akanda.io](http://docs.akanda.io).
1. The REST API that runs on the router instance itself, receiving simple
iptables-related administrative commands (e.g., "take this data and have iptables parse
it"). This REST API runs only so long as a router instance is up and running.
This is not the user-facing, 24/7 REST API.
2. Then there is the user-facing, 24/7, load-balanced REST API. This is what
users interact with to programmatically manage their router instances (e.g.,
set NAT, port-forwarding, and basic firewall rules). This API is exposed as
extensions to OpenStack Neutron's API.
## Additional Documentation
Akanda is in use at [DreamHost](http://dreamhost.com) for our OpenStack-based
public cloud, [DreamCompute](http://dreamhost.com/cloud/dreamcompute). As we
work on bringing Akanda to the community, we will be working on additional
documentation, user guides, etc.
Mailing lists and a project website are on the way!
Most Akanda interaction is done via the #akanda channel on
[FreeNode](http://freenode.net) IRC.
## License and Copyright
Akanda is licensed under the Apache-2.0 license and is Copyright 2014,
DreamHost, LLC.
Akanda is licensed under the Apache-2.0 license and is Copyright 2015,
Akanda, Inc.

docs/.gitignore vendored Normal file

@ -0,0 +1 @@
build

docs/Makefile Normal file

@ -0,0 +1,192 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  applehelp  to make an Apple Help Book"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
	@echo "  coverage   to run coverage check of the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/akanda.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/akanda.qhc"

applehelp:
	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
	@echo
	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
	@echo "N.B. You won't be able to view it unless you put it in" \
	      "~/Library/Documentation/Help or install it in your application" \
	      "bundle."

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/akanda"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/akanda"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

coverage:
	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
	@echo "Testing of coverage in the sources finished, look at the " \
	      "results in $(BUILDDIR)/coverage/python.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

docs/source/_static/neutron-akanda.png Normal file (binary image, 233 KiB, not shown)

docs/source/_static/neutron-canonical.png Normal file (binary image, 233 KiB, not shown)

docs/source/appliance.rst Normal file

@ -0,0 +1,236 @@
.. _appliance:
Virtual Software Router (the Akanda Appliance)
==============================================
Akanda uses Linux-based software router images (stored in OpenStack Glance)
to provide layer 3 routing and advanced networking services. While it's
possible to build your own custom image, Akanda provides stable image releases
for download at `akanda.io <http://akanda.io>`_.
.. _appliance_rest:
REST API
--------
The Akanda Appliance REST API is used by the :ref:`rug` service to manage
health and configuration of services on the router.
Router Health
+++++++++++++
``HTTP GET /v1/status/``
~~~~~~~~~~~~~~~~~~~~~~~~
Used to confirm that a router is responsive and has external network connectivity.
::
Example HTTP 200 Response
Content-Type: application/json
    {
        "v4": true,
        "v6": false
    }
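By way of illustration, the kind of health check the rug performs against this
endpoint might look like the following Python sketch (the management URL, port,
and timeout below are placeholders for illustration, not the rug's actual
client code)::

    import requests

    # Placeholder management address for a router VM; real values come
    # from the rug's configuration and Neutron port data.
    ROUTER_URL = 'http://[fdca:3ba5:a17a:acda::1]:5000'

    def router_is_alive(base_url=ROUTER_URL, timeout=5):
        """Return True if the router's status API answers with HTTP 200."""
        try:
            resp = requests.get(base_url + '/v1/status/', timeout=timeout)
        except requests.RequestException:
            return False
        return resp.status_code == 200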
Router Configuration
++++++++++++++++++++
``HTTP GET /v1/firewall/rules/``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used to retrieve an overview of configured firewall rules for the router (from
``iptables -L`` and ``ip6tables -L``).
::
Example HTTP 200 Response
Content-Type: text/plain
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 8
...
``HTTP GET /v1/system/interface/<ifname>/``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used to retrieve JSON data about a specific interface on the router.
::
Example HTTP 200 Response
Content-Type: application/json
    {
        "interface": {
            "addresses": [
                "8.8.8.8",
                "2001:4860:4860::8888"
            ],
            "description": "",
            "groups": [],
            "ifname": "ge0",
            "lladdr": "fa:16:3f:de:21:e9",
            "media": null,
            "mtu": 1500,
            "state": "up"
        }
    }
``HTTP GET /v1/system/interfaces``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used to retrieve JSON data about `every` interface on the router.
::
Example HTTP 200 Response
Content-Type: application/json
    {
        "interfaces": [{
            "addresses": [
                "8.8.8.8",
                "2001:4860:4860::8888"
            ],
            "description": "",
            "groups": [],
            "ifname": "ge0",
            "lladdr": "fa:16:3f:de:21:e9",
            "media": null,
            "mtu": 1500,
            "state": "up"
        }, {
            ...
        }]
    }
``HTTP PUT /v1/system/config/``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used (generally, by :program:`akanda-rug-service`) to push a new configuration
to the router and restart services as necessary:
::
Example HTTP PUT Body
Content-Type: application/json
    {
        "configuration": {
            "networks": [
                {
                    "address_allocations": [],
                    "interface": {
                        "addresses": [
                            "8.8.8.8",
                            "2001:4860:4860::8888"
                        ],
                        "description": "",
                        "groups": [],
                        "ifname": "ge1",
                        "lladdr": null,
                        "media": null,
                        "mtu": 1500,
                        "state": "up"
                    },
                    "name": "",
                    "network_id": "f0f8c937-9fb7-4a58-b83f-57e9515e36cb",
                    "network_type": "external",
                    "v4_conf_service": "static",
                    "v6_conf_service": "static"
                },
                {
                    "address_allocations": [],
                    "interface": {
                        "addresses": [
                            "..."
                        ],
                        "description": "",
                        "groups": [],
                        "ifname": "ge0",
                        "lladdr": "fa:16:f8:90:32:e3",
                        "media": null,
                        "mtu": 1500,
                        "state": "up"
                    },
                    "name": "",
                    "network_id": "15016de1-494b-4c65-97fb-475b40acf7e1",
                    "network_type": "management",
                    "v4_conf_service": "static",
                    "v6_conf_service": "static"
                },
                {
                    "address_allocations": [
                        {
                            "device_id": "7c400585-1743-42ca-a2a3-6b30dd34f83b",
                            "hostname": "10-10-10-1.local",
                            "ip_addresses": {
                                "10.10.10.1": true,
                                "2607:f298:6050:f0ff::1": false
                            },
                            "mac_address": "fa:16:4d:c3:95:81"
                        }
                    ],
                    "interface": {
                        "addresses": [
                            "10.10.10.1/24",
                            "2607:f298:6050:f0ff::1/64"
                        ],
                        "description": "",
                        "groups": [],
                        "ifname": "ge2",
                        "lladdr": null,
                        "media": null,
                        "mtu": 1500,
                        "state": "up"
                    },
                    "name": "",
                    "network_id": "31a242a0-95aa-49cd-b2db-cc00f33dfe88",
                    "network_type": "internal",
                    "v4_conf_service": "static",
                    "v6_conf_service": "static"
                }
            ],
            "static_routes": []
        }
    }
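From the client's perspective, such a push is a single HTTP ``PUT``. A minimal
sketch follows (again, illustrative rather than the rug's actual client
code)::

    import json
    import requests

    def push_config(base_url, configuration, timeout=90):
        """PUT a new configuration dict to the appliance REST API."""
        resp = requests.put(
            base_url + '/v1/system/config/',
            data=json.dumps({'configuration': configuration}),
            headers={'Content-Type': 'application/json'},
            timeout=timeout,
        )
        resp.raise_for_status()  # surface HTTP errors to the caller
        return resp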
Survey of Software and Services
-------------------------------
The Akanda Appliance uses a variety of software and services to manage routing
and advanced services, such as:
* ``iproute2`` tools (e.g., ``ip neigh``, ``ip addr``, ``ip route``, etc...)
* ``dnsmasq``
* ``bird6``
* ``iptables`` and ``ip6tables``
In addition, the Akanda Appliance includes two Python-based services:
* The REST API (which :program:`akanda-rug-service` communicates with to
orchestrate router updates), deployed behind `gunicorn
<http://gunicorn.org>`_.
* A Python-based metadata proxy.
Proxying Instance Metadata
--------------------------
When OpenStack VMs boot with ``cloud-init``, they look for metadata on a
well-known address, ``169.254.169.254``. To facilitate this process, Akanda
sets up a special NAT rule (one for each local network)::
-A PREROUTING -i eth2 -d 169.254.169.254 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.10.10.1:9602
...and a companion rule that drops any traffic addressed to the router's
management address unless it arrives on the management interface (the network
where OpenStack Nova is running, and will answer metadata requests)::
-A INPUT -i !eth0 -d <management-v6-address-of-router> -j DROP
A Python-based metadata proxy runs locally on the router (in this example,
listening on ``http://10.10.10.1:9602``) and proxies these metadata requests
over the management network so that instances on local tenant networks will
have access to server metadata.
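A drastically simplified model of such a proxy is sketched below; the Nova
metadata address is a placeholder, and details a real proxy must handle
(instance identification headers, error handling) are omitted::

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Assumed address at which the Nova metadata service is reachable
    # across the management network (placeholder for this sketch).
    NOVA_METADATA = 'http://[fdca:3ba5:a17a:acda::1]:9697'

    class MetadataProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Relay the instance's request over the management network
            # and copy the response back to the instance.
            upstream = urlopen(NOVA_METADATA + self.path)
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        # Listens on the address targeted by the DNAT rule above.
        HTTPServer(('10.10.10.1', 9602), MetadataProxy).serve_forever()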

docs/source/architecture.rst Normal file

@ -0,0 +1,82 @@
High-Level Architecture
=======================
Akanda is a network orchestration platform that delivers network services
(L3-L7) via VMs that provide routing, load balancing, firewall and more.
Akanda also interacts with any L2 overlay - including open source solutions
based on OVS and Linux bridge (VLAN, VXLAN, GRE) and most proprietary solutions
- to deliver a centralized management layer for all OpenStack networking decisions.
In a canonical OpenStack deployment, Neutron server emits L3 and DHCP
messages which are handled by a variety of Neutron agents (the L3 agent, DHCP
agent, agents for advanced services such as load balancing, firewall, and VPN
as a service):
.. image:: _static/neutron-canonical.png
When we add Akanda into the mix, we're able to replace these agents with
a virtualized software router that manages layer 3 routing and other advanced
networking services, significantly lowering the barrier of entry for operators
(in terms of deployment, monitoring and management):
.. image:: _static/neutron-akanda.png
Akanda takes the place of many of the agents that OpenStack Neutron
communicates with (L3, DHCP, LBaaS, FWaaS) and acts as a single control point
for all networking services. By removing the complexity of extra agents, Akanda
can centrally manage DHCP and L3, orchestrate load balancing and VPN Services,
and overall reduce the number of components required to build, manage and
monitor complete virtual networks within your cloud.
Akanda Building Blocks
----------------------
From an architectural perspective, Akanda is composed of a few subprojects:
* | :ref:`akanda-rug <rug>`
A service for managing the creation, configuration, and health of Akanda
Software Routers in an OpenStack cloud. The :py:mod:`akanda-rug` acts in part as
a replacement for Neutron's various L3-L7 agents by listening for
Neutron AMQP events and coalescing them into software
router API calls (which configure and manage embedded services on the
router VM). Additionally, :py:mod:`akanda-rug` contains a health monitoring
component which monitors health and guarantees uptime for existing
software routers.
* | :ref:`akanda-appliance <appliance>`
The software and services (including tools for building custom router
images themselves) that run on the virtualized Linux router. Includes
drivers for L3-L7 services and a RESTful API that :py:mod:`akanda-rug` uses to
orchestrate changes to router configuration.
* | `akanda-neutron <http://github.com/akanda/akanda-neutron>`_
Addon API extensions and plugins for OpenStack Neutron which enable
functionality and integration with the Akanda project, notably Akanda
router appliance interaction.
Software VM Lifecycle
---------------------
As Neutron emits events in reaction to network operations (e.g., a user creates
a new network/subnet, a user attaches a virtual machine to a network,
a floating IP address is associated, etc...), :py:mod:`akanda-rug` receives these
events, parses, and dispatches them to a pool of workers which manage the
lifecycle of every virtualized router.
This management of individual routers is handled via a state machine per
router; as events come in, the state machine for the appropriate router
transitions, modifying its virtualized router in a variety of ways, such as:
* Booting a virtual machine for the router via the Nova API (if one doesn't
exist).
* Checking for aliveness of the router via the :ref:`REST API
<appliance_rest>` on the router VM.
* Pushing configuration updates via the :ref:`REST API
<appliance_rest>` to configure routing
and manage services (such as ``iptables``, ``dnsmasq``, ``bird6``,
etc...).
* Deleting virtual machines via the Nova API (e.g., when a router is
deleted from Neutron).
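For instance, the first of these actions (booting a router VM) reduces to
a single Nova API call. A minimal sketch using python-novaclient, with
placeholder credentials and IDs::

    from novaclient import client as nova_client

    # Placeholder credentials; the rug reads these from its config file.
    nova = nova_client.Client('2', 'akanda', 'secret', 'service',
                              'http://127.0.0.1:35357/v2.0')

    def boot_router_vm(name, image_id, flavor_id, port_ids):
        """Boot a router appliance VM attached to the given Neutron ports."""
        return nova.servers.create(
            name,
            image_id,
            flavor_id,
            nics=[{'port-id': p} for p in port_ids],
        )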

docs/source/conf.py Normal file

@ -0,0 +1,291 @@
# -*- coding: utf-8 -*-
#
# akanda documentation build configuration file, created by
# sphinx-quickstart on Thu Apr 2 14:55:06 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import shlex
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.intersphinx',
'sphinx.ext.graphviz'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'akanda'
copyright = u'2015, Akanda, Inc'
author = u'Akanda, Inc'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'akandadoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'akanda.tex', u'akanda Documentation',
u'Akanda, Inc', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'akanda', u'akanda Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'akanda', u'akanda Documentation',
author, 'akanda', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}

docs/source/contribute.rst Normal file

@ -0,0 +1,20 @@
Contributing
============
Installing Akanda Locally for Development
-----------------------------------------
Akanda's own `continuous integration <http://ci.akanda.io>`_ is open source
`(github.com/akanda/akanda-ci) <https://github.com/akanda/akanda-ci>`_, and
includes `Ansible <http://ansibleworks.com>`_ playbooks which can be used to
spin up the Akanda platform with a `devstack
<http://docs.openstack.org/developer/devstack/>`_ installation::
$ pip install ansible
$ ansible-playbook -vv playbooks/ansible-devstack.yml -e "branch=stable/juno"
Submitting Code Upstream
------------------------
All of Akanda's code is 100% open-source and is hosted `on GitHub
<http://github.com/akanda/>`_. Pull requests are welcome!

docs/source/index.rst Normal file

@ -0,0 +1,41 @@
.. akanda documentation master file, created by
sphinx-quickstart on Thu Apr 2 14:55:06 2015.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Akanda
======
Akanda is the only 100% open source network virtualization platform built by
OpenStack operators for real OpenStack clouds. Originally developed by
`DreamHost <https://dreamhost.com>`_ for their OpenStack-based public cloud,
`DreamCompute <https://dreamhost.com/cloud/compute>`_, Akanda eliminates the
need for complex SDN controllers, overlays, and multiple plugins by providing
a simple integrated networking stack (routing, firewall, and load balancing via
a :ref:`virtual software router <appliance>`) for connecting and securing
multi-tenant OpenStack environments.
Narrative Documentation
-----------------------
.. toctree::
:maxdepth: 2
architecture.rst
rug.rst
appliance.rst
contribute.rst
operation.rst
Reference
---------
.. toctree::
:maxdepth: 2
reference.rst
Licensing
---------
Akanda is licensed under the Apache-2.0 license and is copyright `Akanda, Inc
<http://akanda.io>`_.

docs/source/operation.rst Normal file

@ -0,0 +1,88 @@
.. _operator_tools:
Operation and Deployment
========================
Installation
------------
You can install from GitHub directly with ``pip``::
$ pip install -e git://github.com/akanda/akanda-rug.git@stable/juno#egg=akanda-rug
After installing :py:mod:`akanda.rug`, it can be invoked as::
$ akanda-rug-service --config-file /etc/akanda-rug/rug.ini
The :py:mod:`akanda.rug` service is intended to run on a management network (a
separate network for use by your cloud operators). This segregation prevents
system administration and the monitoring of system access from being disrupted
by traffic generated by guests.
Operator Tools
--------------
rug-ctl
+++++++
:program:`rug-ctl` is a tool which can be used to send manual instructions to
a running :py:mod:`akanda.rug` via AMQP::
$ rug-ctl browse
A curses console interface for browsing the state
of every Neutron router and issuing `rebuild` commands
$ rug-ctl poll
Sends a POLL instruction to every router to check health
$ rug-ctl router rebuild <router-id>
Sends a REBUILD instruction to a specific router
$ rug-ctl router update <router-id>
Sends an UPDATE instruction to a specific router
$ rug-ctl router debug <router-id>
Places a specific router in `debug mode`.
This causes the rug to ignore messages for the specified
router (so that, for example, operators can investigate
troublesome routers).
$ rug-ctl router manage <router-id>
Removes a specific router from `debug mode` and places
it back under akanda-rug management.
$ rug-ctl tenant debug <tenant-id>
Places a specific tenant in `debug mode`.
This causes the rug to ignore messages for the specified
tenant.
$ rug-ctl tenant manage <tenant-id>
Removes every router for a specific tenant from `debug mode`
and places the tenant back under akanda-rug management.
$ rug-ctl ssh <router-id>
Establishes an ssh connection with a specified router VM.
$ rug-ctl workers debug
Causes the rug to print debugging diagnostics about the
current state of its worker processes and the state machines
under their management.
akanda-debug-router
+++++++++++++++++++
:program:`akanda-debug-router` is a diagnostic tool which can be used to
analyze the state machine flow of any router and step through its operation
using Python's debugger. This is particularly useful for development purposes
and understanding the nature of the :py:mod:`akanda.rug` state machine, but it's
also useful for debugging problematic routers as an operator; a common pattern
for determining why a router VM won't boot is to place the router in `debug
mode`::
$ rug-ctl router debug <router-id>
...and then step through the handling of a manual ``UPDATE`` event to see where
it fails::
$ akanda-debug-router --router-id <router-id>

docs/source/reference.rst Normal file

@ -0,0 +1,5 @@
Configuration Options
=====================
:py:mod:`akanda-rug` uses :py:mod:`oslo.config` for configuration, so its
configuration file format should be very familiar to OpenStack deployers::
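    # A minimal, illustrative example only: the option names below are
    # placeholders for this sketch, not an authoritative reference for
    # akanda-rug's actual options.
    [DEFAULT]
    # AMQP connection used to listen for Neutron notification events
    amqp_url = amqp://guest:guest@localhost:5672/

    # OpenStack credentials used when calling the Nova and Neutron APIs
    admin_user = akanda
    admin_password = secret
    admin_tenant_name = service
    auth_url = http://127.0.0.1:35357/v2.0

    # Glance image and Nova flavor used when booting router VMs
    router_image_uuid = <glance-image-uuid>
    router_instance_flavor = 1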

docs/source/rug.rst Normal file

@ -0,0 +1,107 @@
.. _rug:
Router Orchestration and Management
===================================
RUG - Router Update Generator
-----------------------------
:program:`akanda-rug-service` is a multiprocessed, multithreaded Python process
composed of three primary subsystems, each of which is spawned as a subprocess
of the main :py:mod:`akanda.rug` process:
L3 and DHCP Event Consumption
-----------------------------
:py:mod:`akanda.rug.notifications` uses `kombu <https://pypi.python.org/pypi/kombu>`_
and a Python :py:mod:`multiprocessing.Queue` to listen for specific Neutron service
events (e.g., ``router.interface.create``, ``subnet.create.end``,
``port.create.end``, ``port.delete.end``) and normalize them into one of
several event types:
* ``CREATE`` - a router creation was requested
* ``UPDATE`` - services on a router need to be reconfigured
* ``DELETE`` - a router was deleted
* ``POLL`` - used by the :ref:`health monitor<health>` for checking aliveness
of a router VM
* ``REBUILD`` - a router VM should be destroyed and recreated
As events are normalized and shuttled onto the :py:mod:`multiprocessing.Queue`,
:py:mod:`akanda.rug.scheduler` shards (by Tenant ID, by default) and
distributes them amongst a pool of worker processes it manages.
This system also consumes and distributes special :py:mod:`akanda.rug.command` events
which are published by the :program:`rug-ctl` :ref:`operator tools<operator_tools>`.
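The flow described above can be sketched roughly as follows (the event table,
field names, and worker count are illustrative; the real logic lives in
:py:mod:`akanda.rug.notifications` and :py:mod:`akanda.rug.scheduler`)::

    import multiprocessing

    # Illustrative subset of the Neutron event types the rug listens for.
    EVENT_MAP = {
        'router.create.end': 'CREATE',
        'router.delete.end': 'DELETE',
        'subnet.create.end': 'UPDATE',
        'port.create.end': 'UPDATE',
        'port.delete.end': 'UPDATE',
    }

    def normalize(notification):
        """Map a raw Neutron notification to (tenant_id, event), or None."""
        event = EVENT_MAP.get(notification.get('event_type'))
        if event is None:
            return None
        # The tenant ID field name here is illustrative.
        return notification['tenant_id'], event

    def dispatch(tenant_id, event, queues):
        """Shard by tenant ID so one worker owns all of a tenant's routers."""
        # A simple, stable hash; Python's built-in hash() is randomized
        # per-process and would not shard consistently across restarts.
        index = sum(ord(c) for c in tenant_id) % len(queues)
        queues[index].put((tenant_id, event))

    queues = [multiprocessing.Queue() for _ in range(4)]  # worker count illustrative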
State Machine Workers and Router Lifecycle
------------------------------------------
Each multithreaded worker process manages a pool of state machines (one
per virtual router), each of which represents the lifecycle of an individual
router. As the scheduler distributes events for a specific router, logic in
the worker (dependent on the router's current state) determines which action to
take next:
.. graphviz:: worker_diagram.dot
For example, let's say a user created a new Neutron network, subnet, and router.
In this scenario, a ``router-interface-create`` event would be handled by the
appropriate worker (selected by tenant ID), and a transition through the state
machine might look something like this:
.. graphviz:: sample_boot.dot
State Machine Flow
++++++++++++++++++
The supported states in the state machine are:
:CalcAction: The entry point of the state machine. Depending on the
current status of the router VM (e.g., ``ACTIVE``, ``BUILD``, ``SHUTDOWN``)
and the current event, determine the first step in the state machine to
transition to.
:Alive: Check aliveness of the router VM by attempting to communicate with
it via its REST HTTP API.
:CreateVM: Call ``nova boot`` to boot a new router VM. This will attempt
to boot a router VM up to a (configurable) number of times before
placing the router into ``ERROR`` state.
:CheckBoot: Check aliveness (up to a configurable number of seconds) of the
router until the VM is responsive and ready for initial configuration.
:ConfigureVM: Configure the router VM and its services. This is generally
the final step in the process of booting and configuring a router. This
step communicates with the Neutron API to generate a comprehensive network
configuration for the router (which is pushed to the router via its REST
API). On success, the state machine yields control back to the worker
thread and that thread handles the next event in its queue (likely for
a different router VM and its state machine).
:ReplugVM: Attempt to hot-plug/unplug a network from the router via ``nova
interface-attach`` or ``nova interface-detach``.
:StopVM: Terminate a running router VM. This is generally performed when
a Neutron router is deleted or via explicit operator tools.
:ClearError: After a (configurable) number of ``nova boot`` failures, Neutron
routers are automatically transitioned into a cooldown ``ERROR`` state
(so that :py:mod:`akanda.rug` will not continue to boot them forever; this
avoids further exacerbating failures on struggling hypervisors). This state
transition is utilized to add routers back into management after issues
are resolved and signals to :py:mod:`akanda.rug` that it should attempt
to manage them again.
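As a toy model, the happy-path boot flow through these states can be written
as a simple transition table (grossly simplified; the real state machine
classes carry much more context)::

    # (current state, outcome) -> next state; None yields control back
    # to the worker thread.
    TRANSITIONS = {
        ('CalcAction', 'create'): 'CreateVM',
        ('CreateVM', 'booted'): 'CheckBoot',
        ('CheckBoot', 'waiting'): 'CheckBoot',   # poll until responsive
        ('CheckBoot', 'alive'): 'ConfigureVM',
        ('ConfigureVM', 'configured'): None,
    }

    def step(state, outcome):
        return TRANSITIONS[(state, outcome)]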
.. _health:
Health Monitoring
-----------------
``akanda.rug.health`` is a subprocess which (at a configurable interval)
periodically delivers ``POLL`` events to every known virtual router. This
event transitions the state machine into the ``Alive`` state, which (depending
on the availability of the router), may simply exit the state machine (because
the router's status API replies with an ``HTTP 200``) or transition to the
``CreateVM`` state (because the router is unresponsive and must be recreated).
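Schematically, the polling loop amounts to something like the following (the
interval and the event-delivery callable are placeholders; the real interval
is configurable)::

    import time

    def health_monitor(router_ids, send_event, interval=60):
        """Deliver a POLL event for every known router, forever."""
        while True:
            for router_id in router_ids:
                send_event(router_id, 'POLL')
            time.sleep(interval)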

docs/source/sample_boot.dot Normal file

@ -0,0 +1,14 @@
digraph sample_boot {
    rankdir=LR;
    node [shape = doublecircle];
    CalcAction;
    node [shape = circle];
    CalcAction -> Alive;
    Alive -> CreateVM;
    CreateVM -> CheckBoot;
    CheckBoot -> CheckBoot;
    CheckBoot -> ConfigureVM;
}

docs/source/worker_diagram.dot Normal file

@ -0,0 +1,27 @@
digraph worker_diagram {
    node [shape = square];
    AMQP;
    "Event Processing + Scheduler";
    Nova;
    Neutron;
    node [shape = circle];
    AMQP -> "Event Processing + Scheduler";
    subgraph clusterrug {
        "Event Processing + Scheduler" -> "Worker 1";
        "Event Processing + Scheduler" -> "Worker ...";
        "Event Processing + Scheduler" -> "Worker N";
        "Worker 1" -> "Thread 1";
        "Worker 1" -> "Thread ...";
        "Worker 1" -> "Thread N";
    }
    "Thread 1" -> "Router VM 1";
    "Thread 1" -> "Router VM ..." [ label = "Appliance REST API" ];
    "Thread 1" -> "Router VM N";
    "Thread 1" -> "Nova" [ label = "Nova API" ];
    "Thread 1" -> "Neutron" [ label = "Neutron API" ];
}