Retire workload-ref-archs repo

As discussed in the TC meeting [1], the TC is retiring the
workload-ref-archs repo.

[1] https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-17-15.00.log.html#l-98

Change-Id: I935d26a6b6b8fc137491674b7f81667127a58137
Ghanshyam Mann 2021-06-17 19:16:28 -05:00 committed by Ghanshyam
parent 4379915dc5
commit 4f68968bed
62 changed files with 8 additions and 6223 deletions

.gitignore

@@ -1,58 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
cover/
.coverage*
!.coveragerc
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?
# Files created by releasenotes build
releasenotes/build


@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>


@@ -1,3 +0,0 @@
- project:
templates:
- build-openstack-docs-pti


@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/workload-ref-archs


@@ -1,74 +0,0 @@
workload-ref-archs Style Commandments
=====================================
- Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Read the `OpenStack Documentation Contributor Guide`_,
especially the `Writing style`_, `RST conventions`_
and `Diagram guidelines`_
.. _OpenStack Documentation Contributor Guide: https://docs.openstack.org/contributor-guide
.. _Writing style: https://docs.openstack.org/contributor-guide/writing-style.html
.. _RST conventions: https://docs.openstack.org/contributor-guide/rst-conv.html
.. _Diagram guidelines: https://docs.openstack.org/contributor-guide/diagram-guidelines.html
Proposing a new Workload Reference Architecture
-----------------------------------------------
- Anyone can propose a new Workload Reference Architecture
- Anyone can propose to enhance or modify an existing Workload Reference
Architecture
- All proposals will be reviewed by the core team members of the Enterprise WG
Submission Process
------------------
#. Follow the instructions at `First timers <https://docs.openstack.org/contributor-guide/quickstart/first-timers.html>`_
to configure a local environment.
#. Clone the repository::
git clone https://github.com/openstack/workload-ref-archs.git
#. Create a branch for the new workload::
git checkout -b <workload-name>
#. Create the following directory structures under `doc/source <doc/source>`_
for the new workload::
<workload-name>
<workload-name>/<workload-name>.rst
<workload-name>/figures
<workload-name>/sample/heat
<workload-name>/sample/murano
.. list-table::
:widths: 15 25
* - <workload-name>.rst
- Provides a full description of the workload.
Please follow the structure in `workload-template.rst <workload-template.rst>`_
* - figures
- Include all images in this folder
* - sample/heat
- Include sample code for heat (if any)
* - sample/murano
- Include sample code for murano (if any)
#. Commit the changes::
git commit -a -m "new workload <workload-name>"
#. Submit the changes for review, use the "new-workload" topic for new
workload::
git review -t new-workload
#. The core reviewers of the Workload Reference Architectures team will review
the submission.

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc


@@ -1,22 +1,10 @@
==============================================================
OpenStack Enterprise WG Workload Reference Architectures.
==============================================================
This project is no longer maintained.
The OpenStack Enterprise Work Group is developing a series of
'tenant-level' reference architectures to help users to learn and
understand how to deploy different types of workloads on an OpenStack
Cloud. These reference architectures describe the list of OpenStack
services that can be used to support the workload and provides sample code
to bootstrap the workload environment on an OpenStack Cloud.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
This repo will be used to maintain the documents of these reference
architectures and sample code.
* Free software: Apache license
* Bugs: http://bugs.launchpad.net/workload-ref-archs
.. * Documentation: http://docs.openstack.org/developer/workload-ref-archs
.. * Source: http://git.openstack.org/cgit/openstack/workload-ref-archs
The `Hacking.rst <HACKING.rst>`_ file contains details on how to contribute
workload reference architectures.
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.


@@ -1,436 +0,0 @@
OpenStack Workload Reference Architecture: Big Data
===================================================
Introduction
------------
Big Data analytics has established itself as an important process to support
new or enhanced business models. Big Data is a term for data sets that are so
large or complex that traditional data processing applications are inadequate
to deal with them. Big Data analytics refers to the use of predictive
analytics, user behavior analytics, or certain other advanced data analytics
methods that extract value from data.
Since Big Data analytics can include and analyze all types of data sources,
the results are valuable for most departments in an enterprise. Each might
perform analytics with different business objectives. Considering the short
innovation cycle of most digital business models, Enterprise IT is often
under pressure to fulfill a multitude of demands quickly. A flexible, fast,
efficient and easy-to-manage Big Data deployment is critical.
Cloud is one approach to tackle the dynamic situation caused by high volumes
of analytics requests with rapid deployment time requirements. In an
OpenStack-based cloud environment, a Big Data cluster can be provisioned in
an automated manner. The value of Big Data on the cloud has made it
one of the top use cases for OpenStack. According to the April 2016
`OpenStack User Survey`_, 27 percent of users have deployed or are testing
Big Data analytics solutions.
Apache Hadoop on OpenStack offers a Big Data infrastructure that scales out
both compute and storage resources, and provides the secure and automated
capabilities for the analytics process. The `Apache Hadoop project`_ is the
de facto standard open source framework for Big Data analytics, used in the
vast majority of deployments. Multiple Hadoop clusters are often deployed to
respond to an enterprise's needs.
This reference architecture is intended for enterprise architects who are
looking to deploy Big Data Hadoop clusters on an OpenStack cloud. It describes
a generic Hadoop architecture and uses open source technologies:
* `OpenStack cloud software`_
* `Ubuntu Linux`_ operating system
* `Apache Ambari`_ open source software to provision, manage and monitor
Hadoop clusters.
.. _OpenStack User Survey: https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
.. _Apache Hadoop project: http://hadoop.apache.org/
.. _OpenStack cloud software: http://www.openstack.org/software/
.. _Ubuntu Linux: https://www.ubuntu.com/
.. _Apache Ambari: http://ambari.apache.org/
This reference architecture describes and includes installation files for a
basic Hadoop cluster. Additional services can be applied for a more complex
configuration and will be covered in future works.
Figure 1: High-level overview of Hadoop architecture
.. figure:: figures/figure01.png
:alt: Figure 1: High-level overview of Hadoop architecture
OpenStack for Hadoop Clusters
-----------------------------
This Hadoop architecture is derived from actual use cases and experience.
Building a Hadoop-based Big Data environment can be a complex task. It is
highly recommended to use common approaches to reduce this complexity, such as
identifying the data processing models. These processing models demand high
availability of resources, networking, bandwidth, storage, as well as security
constraints in the enterprise context.
* **Batch processing model** Analytics based on historic data
In the batch processing model, the analytic tasks are executed or queried in
a scheduled or recurring manner. Typically the data is already available for
analysis in a static repository such as large files or databases. The batch
processing model is often used to analyze business data of a certain period.
One example is an ETL (extract, transform, load) process to extract business
data from various ERP systems for supply chain planning.
* **Stream processing model** Business real-time analytics
In the stream processing model, data is continuously streamed and directly
analyzed in real time. Actions can be triggered in case of occurrence of
special or defined events. An example of a stream processing workload is
fraud detection for credit card companies. A credit card transaction is
transmitted online to the credit card company and is evaluated in real time
based on certain parameters; for example, checking the card's validity and
the purchase amount against the limit. It is also possible to check the
location of purchases and compare this to other recent purchases.
For example, if purchases are made in the U.S. and Europe in a timespan of
only a few hours, this indicates a high likelihood of fraud and action can be
taken to decline the transaction.
* **Predictive processing model** Predict outcome based on recent and
historical data
This model is used to predict an outcome, behavior or other actions for the
future. Generally this analytic model consists of various predictive
algorithms. One example is predictive maintenance. Data from machines,
engines or other sensors is collected and analyzed so that predictive actions
can be made to recommend the next maintenance cycle before a failure might
occur.
Hadoop clusters use a master-slave architecture. The data is ingested into the
cluster and stored in blocks in the Hadoop distributed file system (HDFS). The
default block size is 64MB. The blocks of data are replicated to different
nodes in the clusters. Part of the core Hadoop project, `YARN`_ provides a
framework for job scheduling and cluster resource management. With YARN,
multiple data processing applications can be implemented in the Hadoop cluster.
.. _YARN: http://hortonworks.com/apache/yarn/
Typically a Hadoop cluster with YARN is composed of different types of cluster
nodes:
* **NameNode** The metadata about the data blocks are stored in the NameNode.
This provides lookup functionality and tracking for all data or files in the
Hadoop cluster. NameNode does not store the actual data. Generally the
NameNode requires high memory (RAM) allocation. The NameNode belongs to the
"master" part of Hadoop architecture.
* **DataNode** This is also referred to as the worker node and belongs to the
"slave" part of a Hadoop architecture. It is responsible for storing and
computing the data and responds to the NameNode for filesystem operations.
Generally a DataNode requires a large amount of storage space.
* **ResourceManager** This is the master that manages the resources in the
Hadoop cluster. It has a scheduler to allocate resources to the various
applications across the cluster.
* **NodeManager** This takes instruction from the ResourceManager and is
responsible for executing the applications. It monitors and reports the
resources (CPU, memory, disk) to the ResourceManager.
An OpenStack cloud is powered by many different services (also known as
projects). Utilizing the core services and the Hadoop Common package, a
Hadoop cluster can be deployed in a virtualized environment with minimal
effort. Optional services such as the OpenStack Orchestration service (Heat)
can be added to automate deployment. This reference architecture does not
cover OpenStack Big Data Service (Sahara). Sahara provides a simple means to
provision as well as scale previously provisioned Hadoop clusters.
Sahara will be covered in future reference architecture documents.
Figure 2 shows the core and optional services in relation to one another;
confirm that these services are available in your OpenStack cloud.
Figure 2. Logical representation of OpenStack services in support of Hadoop
clusters
.. figure:: figures/figure02.png
:alt: Figure 2. Logical representation of OpenStack services in support of Hadoop clusters
Brief descriptions of the core and optional services are as follows.
The `OpenStack Project Navigator`_ provides additional information.
.. _OpenStack Project Navigator: http://www.openstack.org/software/project-navigator/
.. list-table:: **Core services**
:widths: 20 50
* - Compute (Nova)
- Manages the life cycle of compute instances, including spawning,
scheduling, and decommissioning of virtual machines (VMs) on demand.
* - Image Service (Glance)
- Stores and retrieves VM disk images. Used by OpenStack Compute during
instance provisioning.
* - Block Storage (Cinder)
- Virtualizes the management of block storage devices and provides a
self-service API to request and use those resources regardless of the
physical storage location or device type. Supports popular storage
devices.
* - Networking (Neutron)
- Enables network connectivity as a service for other OpenStack services,
such as OpenStack Compute. Provides an API to define networks and their
attachments. Supports popular networking vendors and technologies. Also
provides LBaaS and Firewall-as-a-Service (FWaaS).
* - Identity Service (Keystone)
- Provides authentication and authorization for the other OpenStack
services.
* - Object Storage (Swift)
- Stores and retrieves arbitrary unstructured data objects via a RESTful
HTTP-based API. Highly fault-tolerant with data replication and
scale-out architecture.
.. list-table:: **Optional services**
:widths: 20 50
* - Orchestration (Heat)
- Orchestrates multiple composite cloud applications by using either the
native HOT template format or the AWS CloudFormation template format,
through both an OpenStack-native REST API and a
CloudFormation-compatible Query API.
* - Telemetry (Ceilometer)
- Monitors and meters the OpenStack cloud for billing, benchmarking,
scalability, and statistical purposes.
* - Dashboard (Horizon)
- Provides an extensible web-based self-service portal to interact with
underlying OpenStack services, such as launching an instance, assigning
IP addresses, or configuring access controls.
Figure 3 illustrates the basic functional interaction between these services.
For further details:
`OpenStack Conceptual Architecture Diagram <http://docs.openstack.org/admin-guide/common/get-started-conceptual-architecture.html>`_.
Figure 3. Functional interaction between OpenStack components
.. figure:: figures/figure03.png
:alt: Figure 3. Functional interaction between OpenStack components
Structuring a Hadoop Cluster with OpenStack
-------------------------------------------
OpenStack provides the necessary compute, network and data storage services
for building a cloud-based Hadoop cluster to meet the needs of the various
processing models.
Networking
**********
Multiple networks can be created for the Hadoop cluster connectivity. Neutron
routers are created to route the traffic between networks.
* **Edge Network** Provides connectivity to the client-facing and enterprise
IT network. End users are accessing the Hadoop cluster through this network.
* **Cluster Network** Provides inter-node communication for the Hadoop
cluster.
* **Management Network** Optionally provides a dedicated network for
accessing the Hadoop nodes' operating system for maintenance and monitoring
purposes.
* **Data Network** Provides a dedicated network for accessing the object
storage within an OpenStack Swift environment or to an external object
storage such as Amazon S3. This is optional if object storage is not used.
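The Heat templates described later create these networks and routers. As a
hedged sketch (the resource names and CIDR here are illustrative assumptions,
not values from the shipped BigData.yaml), the cluster network and its router
might be declared as::

    resources:
      cluster_net:
        type: OS::Neutron::Net
        properties:
          name: hadoop-cluster-net
      cluster_subnet:
        type: OS::Neutron::Subnet
        properties:
          network: { get_resource: cluster_net }
          cidr: 10.0.1.0/24
      cluster_router:
        type: OS::Neutron::Router
        properties:
          external_gateway_info:
            network: public   # assumed name of the external (edge) network
      cluster_router_if:
        type: OS::Neutron::RouterInterface
        properties:
          router: { get_resource: cluster_router }
          subnet: { get_resource: cluster_subnet }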
Neutron security groups are used to filter traffic. Hadoop uses different
ports and protocols depending on the services deployed and communications
requirements. Different security groups can be created for different types of
nodes, depending on the Hadoop services running on it. With OpenStack security
groups, multiple rules can be specified that allow/deny traffic from certain
protocols, ports, and IP addresses or ranges. One or more security groups can
be applied to each virtual machine (VM). In OpenStack, each tenant has a
default security group, which is applied to instances that have no other
security group defined. Unless changed, this security group denies all
incoming traffic.
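For example, a security group for the NameNode might open only the Ambari web
UI and the HDFS RPC port. The ports below are the common defaults and the
address ranges are assumptions; adjust both to the services actually
deployed::

    resources:
      namenode_secgroup:
        type: OS::Neutron::SecurityGroup
        properties:
          name: hadoop-namenode-sg
          rules:
            # Ambari web UI (Ambari's default port), from an assumed admin range
            - protocol: tcp
              port_range_min: 8080
              port_range_max: 8080
              remote_ip_prefix: 192.168.0.0/16
            # HDFS NameNode RPC (Hadoop default 8020), cluster network only
            - protocol: tcp
              port_range_min: 8020
              port_range_max: 8020
              remote_ip_prefix: 10.0.1.0/24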
Image Management
****************
There are multiple options to provide operating system configuration for the
Hadoop nodes. On-the-fly configuration allows greater flexibility but can
increase spawning time. The operating system images can also be pre-configured
to contain all of the Hadoop-related packages required for the different types
of nodes. Pre-configuration can reduce instance build time, but includes its
own set of problems, such as patching and image lifecycle management. In this
example, the Heat orchestration features are used to configure the Hadoop
nodes on-the-fly. Additional Hadoop and operating system packages are installed
on-the-fly depending on the node type (e.g. NameNode, DataNode). These packages
can be downloaded from Internet-based or local repositories. For a more secure
enterprise environment, a local package repository is recommended.
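A hedged sketch of this on-the-fly approach: a Nova server resource whose
user_data installs the required packages at boot. The image, flavor, network,
and package names are assumptions for illustration, not the values used by the
shipped template::

    resources:
      datanode:
        type: OS::Nova::Server
        properties:
          image: ubuntu-14.04-server-cloudimg   # assumed Glance image name
          flavor: m1.large                      # assumed flavor
          networks:
            - network: hadoop-cluster-net       # assumed cluster network
          user_data_format: RAW
          user_data: |
            #!/bin/bash
            # Packages installed on first boot; repository setup omitted
            apt-get update
            apt-get install -y ntp ambari-agent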
Data Management
***************
Similar to an external hard drive, Cinder volumes are persistent block-storage
virtual devices that may be mounted and dismounted from the VM. Cinder volumes
can be attached to only one instance at a time. A Cinder volume is attached to
each Hadoop DataNode to provide the HDFS.
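In Heat terms, this pairing can be sketched as one volume plus one attachment
per DataNode (the size and resource names are illustrative, not taken from
BigData.yaml)::

    resources:
      datanode_volume:
        type: OS::Cinder::Volume
        properties:
          size: 100   # GB; size to the expected HDFS data volume
      datanode_volume_attach:
        type: OS::Cinder::VolumeAttachment
        properties:
          instance_uuid: { get_resource: datanode }  # assumes a "datanode" server resource
          volume_id: { get_resource: datanode_volume }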
If the data to be processed by a Hadoop cluster needs to be accessed by other
applications, the OpenStack Swift object storage can be used to store it.
Swift offers a cost-effective way of storing unstructured data. Hadoop provides
a built-in interface to access Swift or AWS S3 object storage; either can be
configured to serve data over HTTP to the Hadoop cluster.
Orchestration
*************
Heat uses template files to automate the deployment of complex cloud
environments. Orchestration is more than just standing up virtual servers;
it can also be used to install software, apply patches, configure networking
and security, and more. Heat templates are provided with this reference
architecture that allow the user to quickly and automatically set up and
configure a Hadoop cluster for different data processing models
(types of analytics).
Figure 4: A Hadoop cluster on OpenStack
.. figure:: figures/figure04.png
:alt: Figure 4: A Hadoop cluster on OpenStack
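A minimal HOT skeleton illustrating the overall shape of such a template. This
is a sketch only, not the BigData.yaml shipped with this reference
architecture; the image, flavor, and parameter defaults are assumptions::

    heat_template_version: 2015-10-15

    description: Sketch of a scalable group of Hadoop DataNodes.

    parameters:
      node_count:
        type: number
        default: 3

    resources:
      datanodes:
        type: OS::Heat::ResourceGroup
        properties:
          count: { get_param: node_count }
          resource_def:
            type: OS::Nova::Server
            properties:
              image: ubuntu-14.04-server-cloudimg  # assumed image
              flavor: m1.large                     # assumed flavor

    outputs:
      datanode_ips:
        description: Addresses of the DataNode instances
        value: { get_attr: [datanodes, first_address] }

A template of this shape would be launched with, for example,
``openstack stack create -t hadoop.yaml hadoop-cluster`` (or
``heat stack-create`` on older clients).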
Demonstration and Sample Code
-----------------------------
This section describes the Heat template provided for this workload. The
template is used to configure all of the Hadoop nodes. It has been created
for reference and training and is not intended to be used unmodified in a
production environment.
An Ambari Hadoop environment is created on a standard Ubuntu 14.04 server
cloud image in QEMU copy on write (qcow2). The qcow2 cloud image is stored in
the Glance repository. The Apache Ambari open source project makes Hadoop
management simpler by providing an easy-to-use Hadoop management web UI backed
by its RESTful APIs. Basically, Ambari is the central management service
for open source Hadoop. In this architecture, an Ambari service is installed
on the Master Node (NameNode). The Heat template also installs additional
required services such as the name server, Network Time Protocol (NTP) server,
database, and the operating system configuration customization required for
Ambari. A floating IP can be allocated to the Master Node to provide user access
to the Ambari service. In addition, an Ambari agent service is deployed on
each node of the cluster. This provides communication and authentication
functionality between the cluster nodes.
The following nodes are installed by the Heat template:
* **Master Node (NameNode)** This node houses the cluster-wide management
services that provide the internal functionality to manage the Hadoop cluster
and its resources.
* **Data Nodes** Services used for managing and analyzing the data, stored in
HDFS, are located on these nodes. Analytics jobs access and compute the data
on the Data Nodes.
* **Edge Node** Services used to access the cluster environment or the data
outside the cluster are on this node. For security, direct user access to the
Hadoop cluster should be minimized. Users can access the cluster via the
command line interface (CLI) from the Edge Node. All data-import and
data-export processes can be channeled on one or more Edge Nodes.
* **Admin Node** Used for system-wide administration.
Multiple networks (edge, cluster, management, data) described in previous
sections are created by the Heat orchestration. A Neutron security group
is attached to each instance of the cluster node. The template also provisions
Cinder volumes and attaches one Cinder volume to each node. Swift is not
configured in this template and will be covered in future work.
The Heat template, BigData.yaml, can be downloaded from
http://www.openstack.org/software/sample-configs/#big-data.
Please review the README file for further details.
Scope and Assumptions
---------------------
The Heat template provided for this reference architecture assumes that the
Hadoop cluster workload is deployed in a single-region, single-zone OpenStack
environment. Deployment in a multi-zone/multi-region environment is outside
the scope of this document.
The Heat template is configured to address the minimum infrastructure
resources for deploying a Hadoop cluster. Architecting a Hadoop cluster is
highly dependent on the data volume and other performance indicators defined by
the business use cases, such as response times for analytic processes and how
and which services will be used.
The sample environment uses Java. As such, the deployment script run by the
Heat template must accept the Java license agreement during installation.
As mentioned, Sahara is not used in this implementation. Sahara is the
OpenStack Big Data Service that provisions a data-intensive application cluster
such as Hadoop or Spark. The Sahara project enables users to easily provision
and manage clusters with Hadoop and other data processing frameworks on
OpenStack. An update to this reference architecture to include Sahara is under
consideration.
Summary
-------
There are many possible choices or strategies for deploying a Hadoop cluster
and there are many possible variations in OpenStack deployment. This document
and the accompanying Heat templates serve as a general reference architecture
for a basic deployment and installation process via OpenStack orchestration.
They are intended to demonstrate how easily and quickly a Hadoop Cluster can be
deployed, using the core OpenStack services. Complementary services will be
included in future updates.
These additional resources are recommended for more depth on overall
OpenStack cloud architecture, the OpenStack services covered in this
reference architecture, and Hadoop and Ambari. The vibrant, global OpenStack
community and ecosystem can be an invaluable source of experience and
advice, especially users that have deployed Big Data solutions. Visit
openstack.org to get started, or follow these resources to begin designing
your OpenStack-based Big Data analytics system.
.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Resource
     - Overview
   * - `OpenStack Marketplace`_
     - One-stop resource to the skilled global ecosystem for distributions,
       drivers, training, services and more.
   * - `OpenStack Architecture Design Guide`_
     - Guidelines for designing an OpenStack cloud architecture for common
       use cases. With examples.
   * - `OpenStack Networking Guide`_
     - How to deploy and manage OpenStack Networking (Neutron).
   * - `OpenStack Virtual Machine Image Guide`_
     - This guide describes how to obtain, create, and modify virtual machine
       images that are compatible with OpenStack.
   * - `Complete OpenStack documentation`_
     - Index to all documentation, for every role and step in planning and
       operating an OpenStack cloud.
   * - `Community Application Catalog`_
     - Download this LAMP/WordPress sample application and other free
       OpenStack applications here.
   * - `Apache Hadoop project`_
     - The de facto standard open source framework for Big Data analytics,
       used in this reference architecture.
   * - `Apache Ambari project`_
     - This reference architecture and files deploy Big Data using Ambari, an
       open source package for installing, configuring and managing a Hadoop
       cluster.
   * - `Welcome to the community!`_
     - Join mailing lists and IRC chat channels, find jobs and events, access
       the source code and more.
   * - `User groups`_
     - Find a user group near you, attend meetups and hackathons—or organize
       one!
   * - `OpenStack events`_
     - Global schedule of events including the popular OpenStack Summits and
       regional OpenStack Days.
.. _OpenStack Marketplace: http://www.openstack.org/marketplace/
.. _OpenStack Architecture Design Guide: http://docs.openstack.org/arch-design/
.. _OpenStack Networking Guide: http://docs.openstack.org/mitaka/networking-guide/
.. _OpenStack Virtual Machine Image Guide: http://docs.openstack.org/image-guide/
.. _Complete OpenStack Documentation: http://docs.openstack.org/
.. _Community Application Catalog: http://apps.openstack.org/
.. _Apache Hadoop project: http://hadoop.apache.org/
.. _Apache Ambari project: http://ambari.apache.org/
.. _Welcome to the community!: http://www.openstack.org/community/
.. _User groups: https://groups.openstack.org/
.. _OpenStack events: http://www.openstack.org/community/events/

### Heat Template ###
heat_template_version: 2014-10-16
description: >
Generated template
parameters:
network_external_for_floating_ip:
default: 38a4e580-e368-4404-a2e0-cbef9343740e
description: Network to allocate floating IP from
type: string
network_router_0_external:
default: 38a4e580-e368-4404-a2e0-cbef9343740e
description: Router external network
type: string
network_router_1_external:
default: 38a4e580-e368-4404-a2e0-cbef9343740e
description: Router external network
type: string
network_router_2_external:
default: 38a4e580-e368-4404-a2e0-cbef9343740e
description: Router external network
type: string
image_ubuntu:
default: a808eacb-ab6f-4929-873d-be3ae8535f0d
description: An Ubuntu cloud image (glance image id) to use for all servers
type: string
flavor_edge:
default: l1.medium
description: Flavor to use for edge server
type: string
flavor_master:
default: l1.medium
description: Flavor to use for master server
type: string
flavor_data:
default: l1.medium
description: Flavor to use for worker server
type: string
flavor_repo:
default: l1.medium
description: Flavor to use for repository server
type: string
config_dns_nameserver:
default: 8.8.8.8
description: DNS Server for external Access (Temporary)
type: string
resources:
deploymentscript:
type: OS::Heat::SoftwareConfig
properties:
inputs:
- name: previous
default: 'NONE'
group: script
config:
str_replace:
params:
$variable1: "Test"
template: |
#!/bin/bash
case $(hostname) in
*edge*)
SYSTEMTYPE="edge";
;;
*master*)
SYSTEMTYPE="master";
;;
*data*)
SYSTEMTYPE="data";
;;
*repo*)
SYSTEMTYPE="repo";
;;
*)
SYSTEMTYPE="nothing";
;;
esac
FULLHOSTNAME=$(curl http://169.254.169.254/latest/meta-data/hostname)
SHORTHOSTNAME=$(echo $FULLHOSTNAME | awk -F'.' {'print $1'})
DOMAIN=$(echo $FULLHOSTNAME | awk -F'.' {'print $NF'})
MASTERNODE=master-node
function issue_start {
echo ${@}: started >> /etc/issue
}
function issue_end {
if [ "$1" -eq "0" ]; then
echo ${@:2}: success >> /etc/issue
else
echo ${@:2}: failed >> /etc/issue
fi
}
function set_local_hosts {
# Set hostname
ip -o a | grep "inet " | grep -v "^1: lo" | awk -F"/" {'print $1'} | awk {'print $4 " HOSTNAME-"$2".DOMAIN HOSTNAME-"$2'} | sed s/HOSTNAME/$HOSTNAME/g | sed s/DOMAIN/$DOMAIN/g > /mnt/shared/host-$HOSTNAME.txt
# Change eth to networkname
COUNT=0;
for i in ${@}; do
sed -i s/eth${COUNT}/$i/g /mnt/shared/host-$HOSTNAME.txt
COUNT=$(($COUNT + 1));
done
sed -i s/-Cluster-Network//g /mnt/shared/host-$HOSTNAME.txt
}
if [ "$SYSTEMTYPE" == "repo" ]; then
issue_start nfsserver
apt-get -y install nfs-server
mkdir /shared
chmod 777 /shared
echo "/shared *(rw)" >> /etc/exports
service nfs-kernel-server start
issue_end $? nfsserver
# Set SSH Key
ssh-keygen -b 4096 -t rsa -f /root/.ssh/id_rsa -N ''
cp -rp /root/.ssh/id_rsa.pub /shared
fi
cp -rp /etc/issue /etc/issue.orig
issue_start GroupCheck
echo "SYSTEMTYPE: $SYSTEMTYPE" >> /root/output.txt
echo "params: $variable1" >> /root/output.txt
issue_end $? GroupCheck
# Format Partition
issue_start Prepare /dev/vdb
mkfs.ext4 /dev/vdb
# /hadoop
mkdir /hadoop
echo "/dev/vdb /hadoop ext4 defaults 0 0" >> /etc/fstab
mount /hadoop
issue_end $? Prepare /dev/vdb
# Set multiple network adapters
issue_start dhclient
ip a | grep mtu | grep -v lo: | awk {'print "dhclient "$2'} | sed s/:$//g | bash
issue_end $? dhclient
issue_start set ulimits
cat << EOF >> /etc/security/limits.conf
* - nofile 32768
* - nproc 65536
EOF
issue_end $? set ulimits
issue_start deactivate transparent huge pages
cat << EOF > /etc/rc.local
#!/bin/bash
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
fi
EOF
/bin/bash /etc/rc.local
issue_end $? deactivate transparent huge pages
# Mount NFS Share
issue_start mount nfs share
apt-get -y install nfs-common
mkdir /mnt/shared
# Check if mount is available
while [ ! "$(showmount -e 10.20.7.5)" ]; do
issue_end 1 mount nfs share: not available at present
sleep 60
done
mount 10.20.7.5:/shared /mnt/shared
issue_end $? mount nfs share
# Set Admin SSH Key for easy access
issue_start set admin ssh key
cat /mnt/shared/id_rsa.pub >> /root/.ssh/authorized_keys
issue_end $? set admin ssh key
# Save Hostnames to /mnt/shared
issue_start gathering hostnames
case $SYSTEMTYPE in
edge)
set_local_hosts admin Cluster-Network edge
;;
master)
set_local_hosts admin Cluster-Network Object-Storage-Connect-Network Management
;;
data)
set_local_hosts admin Cluster-Network Object-Storage-Connect-Network Management
;;
repo)
set_local_hosts admin Cluster-Network edge
;;
*)
set_local_hosts normal
;;
esac
issue_end $? gathering hostnames
# Set local /etc/hosts
issue_start hosts_localhost
echo "127.0.0.1 $FULLHOSTNAME $SHORTHOSTNAME" >> /etc/hosts
issue_end $? hosts_localhost
# Configure Name Server
#issue_start nameserver
#echo "nameserver 8.8.8.8" > /etc/resolv.conf
#issue_end $? nameserver
# Configure Time-Server
issue_start Install ntp
apt-get -y install ntp
issue_end $? Install ntp
# Deactivate Swappiness
issue_start Deactivate swappiness
echo "vm.swappiness=1" >> /etc/sysctl.conf
sysctl -w vm.swappiness=1
issue_end $? Deactivate swappiness
# Activate Hortonworks Repository
issue_start Installation ambari-agent
wget -nv http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.4.0.1/ambari.list -O /etc/apt/sources.list.d/ambari.list
apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
apt-get update
apt-get -y install ambari-agent
sed -i s/hostname=localhost/hostname=${MASTERNODE}.$DOMAIN/g /etc/ambari-agent/conf/ambari-agent.ini
issue_end $? Installation ambari-agent
# Install Java 1.8
issue_start java
echo "\n" | add-apt-repository ppa:webupd8team/java
apt-get update
# Accept Licence
echo debconf shared/accepted-oracle-license-v1-1 select true | debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 seen true | debconf-set-selections
apt-get -y install oracle-java8-installer
issue_end $? java
# Set all /etc/hosts
issue_start hosts
cp -rp /etc/hosts /tmp/hosts-original
cat /tmp/hosts-original | grep -v "127.0.0.1 $FULLHOSTNAME" > /etc/hosts
cat /mnt/shared/host*.txt >> /etc/hosts
issue_end $? hosts
###################### Individual parts ######################
if [ "$SYSTEMTYPE" == "master" ]; then
issue_start ambari-server
apt-get -y install ambari-server expect
JAVA_HOME="/usr/lib/jvm/java-8-oracle/jre/"
SETUP_AMBARI=$(expect -c "
set timeout 60
spawn ambari-server setup -j $JAVA_HOME
expect \"Customize user account for ambari-server daemon\" {send \"n\r\"}
expect \"Enter advanced database configuration\" {send \"n\r\"}
expect eof
")
echo "${SETUP_AMBARI}"
touch /mnt/shared/ambari-server-installed.txt
service ambari-server start
issue_end $? ambari-server
fi
if [ "$SYSTEMTYPE" == "repo" ]; then
issue_start puppetmaster
apt-get -y install puppetmaster
issue_end $? puppetmaster
fi
issue_start Start Ambari Agent
# Start ambari Agent
# Checks if /mnt/shared/ambari-server-installed.txt exists
while [ ! "$(ls /mnt/shared/ambari-server-installed.txt)" ]; do
issue_end 1 Check if Ambaris Server is installed $(date)
sleep 60
done
service ambari-agent start
issue_end $? Start Ambari Agent
issue_end 0 Finished
volume_0:
properties:
metadata:
attached_mode: rw
readonly: 'False'
bootable: 'False'
size: 10
type: OS::Cinder::Volume
volume_1:
properties:
metadata:
attached_mode: rw
readonly: 'False'
bootable: 'False'
size: 10
type: OS::Cinder::Volume
volume_2:
properties:
metadata:
attached_mode: rw
readonly: 'False'
bootable: 'False'
size: 10
type: OS::Cinder::Volume
volume_3:
properties:
metadata:
attached_mode: rw
readonly: 'False'
bootable: 'False'
size: 10
type: OS::Cinder::Volume
volume_4:
properties:
metadata:
attached_mode: rw
readonly: 'False'
bootable: 'False'
size: 10
type: OS::Cinder::Volume
volume_5:
properties:
metadata:
attached_mode: rw
readonly: 'False'
bootable: 'False'
size: 10
type: OS::Cinder::Volume
floatingip_0:
properties:
floating_network_id:
get_param: network_external_for_floating_ip
type: OS::Neutron::FloatingIP
key_0:
properties:
name: demo1
public_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDayVuy2lZ11GuFVQmA402tZvDl7CopLCSPNZn/IqVvdA5A4XtocQnkZVUegQYJ8XMz9RMPAi/0LreUQbaS4/mSDtjAs0GupAbFeMumjzlwdmZEmgCO+iEwkawmXiARV/7A1qZT+5WP7hVJk9svQv2BAiHiXugGQPx4TlRCnMOJZf3T5LmIeNh1XgzWpcmj7NX97hs12iiIBu7HWALgyrp5qshZo0y1vxnedSIQgwnOQiFx0/fUAL7k1pioE7fe88rwQegMDibSeTvDgABLhJUOtC6Gv8kp02XuoOoAecrlqIRfBASQQf7aaNs9oIBiJ4U6Jt6ladHlB/fKpqMbPllf
type: OS::Nova::KeyPair
network_1:
properties:
admin_state_up: true
name: Cluster-Network
shared: false
type: OS::Neutron::Net
subnet_1:
properties:
allocation_pools:
- end: 10.20.1.100
start: 10.20.1.10
cidr: 10.20.1.0/24
dns_nameservers: [ {get_param: config_dns_nameserver} ]
enable_dhcp: true
host_routes: []
ip_version: 4
name: subCluster-Network
network_id:
get_resource: network_1
type: OS::Neutron::Subnet
network_2:
properties:
admin_state_up: true
name: Object-Storage-Connect-Network
shared: false
type: OS::Neutron::Net
subnet_2:
properties:
allocation_pools:
- end: 10.20.2.100
start: 10.20.2.10
cidr: 10.20.2.0/24
dns_nameservers: [ {get_param: config_dns_nameserver} ]
enable_dhcp: true
host_routes: []
ip_version: 4
name: subObject-Storage-Connect-Network
network_id:
get_resource: network_2
type: OS::Neutron::Subnet
network_3:
properties:
admin_state_up: true
name: Object-Storage-Cluster-Network
shared: false
type: OS::Neutron::Net
subnet_3:
properties:
allocation_pools:
- end: 10.20.3.100
start: 10.20.3.10
cidr: 10.20.3.0/24
dns_nameservers: [ {get_param: config_dns_nameserver} ]
enable_dhcp: true
host_routes: []
ip_version: 4
name: subObject-Storage-Cluster-Network
network_id:
get_resource: network_3
type: OS::Neutron::Subnet
network_4:
properties:
admin_state_up: true
name: Management
shared: false
type: OS::Neutron::Net
subnet_4:
properties:
allocation_pools:
- end: 10.20.4.100
start: 10.20.4.10
cidr: 10.20.4.0/24
dns_nameservers: [ {get_param: config_dns_nameserver} ]
enable_dhcp: true
host_routes: []
ip_version: 4
name: subManagement
network_id:
get_resource: network_4
type: OS::Neutron::Subnet
network_5:
properties:
admin_state_up: true
name: Storage-Access-Network
shared: false
type: OS::Neutron::Net
subnet_5:
properties:
allocation_pools:
- end: 10.20.5.100
start: 10.20.5.10
cidr: 10.20.5.0/24
dns_nameservers: [ {get_param: config_dns_nameserver} ]
enable_dhcp: true
host_routes: []
ip_version: 4
name: subStorage-Access-Network
network_id:
get_resource: network_5
type: OS::Neutron::Subnet
network_6:
properties:
admin_state_up: true
name: Edge
shared: false
type: OS::Neutron::Net
subnet_6:
properties:
allocation_pools:
- end: 10.20.6.100
start: 10.20.6.10
cidr: 10.20.6.0/24
dns_nameservers: [ {get_param: config_dns_nameserver} ]
enable_dhcp: true
host_routes: []
ip_version: 4
name: subEdge
network_id:
get_resource: network_6
type: OS::Neutron::Subnet
network_7:
properties:
admin_state_up: true
name: Admin
shared: false
type: OS::Neutron::Net
subnet_7:
properties:
allocation_pools:
- end: 10.20.7.100
start: 10.20.7.10
cidr: 10.20.7.0/24
dns_nameservers: [ {get_param: config_dns_nameserver} ]
enable_dhcp: true
host_routes: []
ip_version: 4
name: subAdmin
network_id:
get_resource: network_7
type: OS::Neutron::Subnet
router_0:
properties:
admin_state_up: true
name: Router_Storage
type: OS::Neutron::Router
router_0_gateway:
properties:
network_id:
get_param: network_router_0_external
router_id:
get_resource: router_0
type: OS::Neutron::RouterGateway
router_0_interface_0:
properties:
router_id:
get_resource: router_0
subnet_id:
get_resource: subnet_3
type: OS::Neutron::RouterInterface
router_1:
properties:
admin_state_up: true
name: Router_Ext
type: OS::Neutron::Router
router_1_gateway:
properties:
network_id:
get_param: network_router_1_external
router_id:
get_resource: router_1
type: OS::Neutron::RouterGateway
router_1_interface_0:
properties:
router_id:
get_resource: router_1
subnet_id:
get_resource: subnet_5
type: OS::Neutron::RouterInterface
router_2:
properties:
admin_state_up: true
name: Router_Admin
type: OS::Neutron::Router
router_2_gateway:
properties:
network_id:
get_param: network_router_2_external
router_id:
get_resource: router_2
type: OS::Neutron::RouterGateway
router_2_interface_0:
properties:
router_id:
get_resource: router_2
subnet_id:
get_resource: subnet_7
type: OS::Neutron::RouterInterface
security_group_0:
properties:
description: ''
name: master
rules:
- direction: ingress
ethertype: IPv4
protocol: icmp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv6
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: udp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv4
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: tcp
remote_ip_prefix: 0.0.0.0/0
type: OS::Neutron::SecurityGroup
security_group_1:
properties:
description: ''
name: data
rules:
- direction: ingress
ethertype: IPv4
protocol: icmp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv6
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: udp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv4
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: tcp
remote_ip_prefix: 0.0.0.0/0
type: OS::Neutron::SecurityGroup
security_group_3:
properties:
description: ''
name: edge
rules:
- direction: ingress
ethertype: IPv4
protocol: icmp
remote_ip_prefix: 0.0.0.0/0
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: tcp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv4
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: udp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv6
type: OS::Neutron::SecurityGroup
security_group_6:
properties:
description: ''
name: Admin
rules:
- direction: ingress
ethertype: IPv4
protocol: icmp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv6
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: udp
remote_ip_prefix: 0.0.0.0/0
- direction: egress
ethertype: IPv4
- direction: ingress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: tcp
remote_ip_prefix: 0.0.0.0/0
type: OS::Neutron::SecurityGroup
server_0:
type: OS::Nova::Server
depends_on: [ volume_0, subnet_1, subnet_2, subnet_3, subnet_4, subnet_5, subnet_6, subnet_7, server_5 ]
properties:
name: data-node-3
diskConfig: AUTO
flavor:
get_param: flavor_data
image:
get_param: image_ubuntu
key_name:
get_resource: key_0
networks:
- network:
get_resource: network_7
- network:
get_resource: network_1
- network:
get_resource: network_2
- network:
get_resource: network_3
security_groups:
- get_resource: security_group_1
block_device_mapping_v2:
- device_name: /dev/vdb
boot_index: 1
volume_id:
get_resource: volume_0
user_data_format: SOFTWARE_CONFIG
user_data: {get_resource: deploymentscript}
server_1:
type: OS::Nova::Server
depends_on: [ volume_1, subnet_1, subnet_2, subnet_3, subnet_4, subnet_5, subnet_6, subnet_7, server_5 ]
properties:
name: data-node-2
diskConfig: AUTO
flavor:
get_param: flavor_data
image:
get_param: image_ubuntu
key_name:
get_resource: key_0
networks:
- network:
get_resource: network_7
- network:
get_resource: network_1
- network:
get_resource: network_2
- network:
get_resource: network_4
security_groups:
- get_resource: security_group_1
block_device_mapping_v2:
- device_name: /dev/vdb
boot_index: 1
volume_id:
get_resource: volume_1
user_data_format: SOFTWARE_CONFIG
user_data: {get_resource: deploymentscript}
server_2:
type: OS::Nova::Server
depends_on: [ volume_2, subnet_1, subnet_2, subnet_3, subnet_4, subnet_5, subnet_6, subnet_7, server_5 ]
properties:
name: data-node-1
diskConfig: AUTO
flavor:
get_param: flavor_data
image:
get_param: image_ubuntu
key_name:
get_resource: key_0
networks:
- network:
get_resource: network_7
- network:
get_resource: network_1
- network:
get_resource: network_2
- network:
get_resource: network_4
security_groups:
- get_resource: security_group_1
block_device_mapping_v2:
- device_name: /dev/vdb
boot_index: 1
volume_id:
get_resource: volume_2
user_data_format: SOFTWARE_CONFIG
user_data: {get_resource: deploymentscript}
server_3:
type: OS::Nova::Server
depends_on: [ volume_3, subnet_1, subnet_2, subnet_3, subnet_4, subnet_5, subnet_6, subnet_7, server_5 ]
properties:
name: master-node
diskConfig: AUTO
flavor:
get_param: flavor_master
image:
get_param: image_ubuntu
key_name:
get_resource: key_0
networks:
- network:
get_resource: network_7
- network:
get_resource: network_1
- network:
get_resource: network_2
- network:
get_resource: network_4
security_groups:
- get_resource: security_group_0
block_device_mapping_v2:
- device_name: /dev/vdb
boot_index: 1
volume_id:
get_resource: volume_3
user_data_format: SOFTWARE_CONFIG
user_data: {get_resource: deploymentscript}
server_4:
type: OS::Nova::Server
depends_on: [ volume_4, subnet_1, subnet_2, subnet_3, subnet_4, subnet_5, subnet_6, subnet_7, server_5 ]
properties:
name: edge-server
diskConfig: AUTO
flavor:
get_param: flavor_edge
image:
get_param: image_ubuntu
key_name:
get_resource: key_0
networks:
- network:
get_resource: network_7
- network:
get_resource: network_1
- network:
get_resource: network_6
security_groups:
- get_resource: security_group_3
block_device_mapping_v2:
- device_name: /dev/vdb
boot_index: 1
volume_id:
get_resource: volume_4
user_data_format: SOFTWARE_CONFIG
user_data: {get_resource: deploymentscript}
server_5:
type: OS::Nova::Server
depends_on: [ volume_5, subnet_1, subnet_2, subnet_3, subnet_4, subnet_5, subnet_6, subnet_7 ]
properties:
name: repo-server
diskConfig: AUTO
flavor:
get_param: flavor_repo
image:
get_param: image_ubuntu
key_name:
get_resource: key_0
networks:
- port:
get_resource: server_5_port_admin
- network:
get_resource: network_1
- network:
get_resource: network_6
block_device_mapping_v2:
- device_name: /dev/vdb
boot_index: 1
volume_id:
get_resource: volume_5
user_data_format: SOFTWARE_CONFIG
user_data: {get_resource: deploymentscript}
server_5_port_admin:
type: OS::Neutron::Port
properties:
network_id: { get_resource: network_7 }
security_groups:
- get_resource: security_group_6
fixed_ips:
- subnet_id: { get_resource: subnet_7 }
ip_address: 10.20.7.5

Big Data Sample Heat Template
==============================
These Heat templates deploy a Hadoop cluster with Apache Ambari.
Ambari is the central management service for Open Source Hadoop. It provides
central administration and management functionality via a web UI. In this
example, the Ambari service is installed on the MasterNode and an Ambari agent
is deployed on each DataNode in the cluster. This provides communication and
authentication functionality between the Hadoop cluster nodes.
**Type of roles in this Hadoop cluster**
====== ==================================================================
Role Details
====== ==================================================================
Master Master Node (aka Name Node) - this node houses the cluster-wide
management services that provide the internal functionality to manage
the Hadoop cluster and its resources.
Data Data Nodes services used for managing and analyzing the data,
stored in HDFS, are located on these nodes. Analytics jobs access and
compute the data on the Data Nodes.
Edge Services used to access the cluster environment or the data outside
the cluster are on this node. For security, direct user access to the
Hadoop cluster should be minimized. Users can access the cluster via
the command line interface (CLI) from the Edge Node. All data-import
and data-export processes can be channeled on one or more Edge Nodes.
Admin Administrative Server - Used for system-wide administration.
====== ==================================================================
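On boot, the deployment script embedded in BigData.yaml classifies each
instance by matching a substring of its hostname (a bash ``case`` over
``*edge*``, ``*master*``, ``*data*`` and ``*repo*``). A minimal Python
sketch of that classification, with a hypothetical function name:

```python
# Sketch of the hostname-based role detection performed by the
# boot script in BigData.yaml; detect_role is illustrative only.
def detect_role(hostname):
    """Return the node role implied by the hostname, mirroring the
    *edge*/*master*/*data*/*repo* patterns in the template."""
    for role in ("edge", "master", "data", "repo"):
        if role in hostname:
            return role
    return "nothing"

print(detect_role("data-node-3"))   # a data node
print(detect_role("master-node"))   # the Ambari/NameNode host
```

Because classification is purely name-based, server names in the template
(data-node-1, master-node, edge-server, repo-server) must keep these
substrings for the per-role installation steps to run.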
This template provisions a small testing environment that demonstrates the
deployment of a Hadoop cluster in an OpenStack cloud environment. The
default settings used in this template should not be used unchanged in a
production environment; users are advised to adjust the settings to fit
their own environment.

This template was tested using the Mitaka and Liberty releases of OpenStack.
-----------------
Heat File Details
-----------------
This template requires a few standard components such as an Ubuntu cloud image
and an external network for internet access.
The template prepares a few resources that are required by the Hadoop
deployment.
Multiple Cinder volumes are created for the Hadoop filesystem.
For simplicity, in this example every node is attached to one Cinder volume
of a default size.
Multiple Neutron subnets are created. This includes:
================== ======================
Subnet Details
================== ======================
Cluster Network Provides inter-node communication for the Hadoop cluster.
Data Network Provides a dedicated network for accessing the object
storage within an OpenStack Swift environment or to an
external object storage such as Amazon S3. This is
optional if object storage is not used.
Management Network Provides a dedicated network for accessing the Hadoop
nodes' operating system for maintenance and monitoring
purposes.
Edge Network Provides connectivity to the client-facing and enterprise
IT network. End users are accessing the Hadoop cluster
through this network.
================== ======================
Multiple routers are created to route the traffic between subnets.
Other networks can also be created depending on your specific needs.
Security Groups are defined and attached to every Node in the cluster.
Custom rules can be created for different types of nodes to allow/deny
traffic from certain protocols, ports or IP address ranges.
Next, the template creates several servers of different roles (Master, Data,
Edge, Admin). An Ubuntu 14.04 cloud image is assumed to be the default
operating system of each server.

When each server boots, additional packages (depending on its role) are
installed and configured. In this example, Apache Ambari is installed, and
all systems are configured with a name server, NTP, package repositories,
and other settings required by the Apache Ambari service.
The Ambari Web UI can be accessed by pointing to the MasterNode's
IP address at port 8080. A Floating IP can be associated to the MasterNode.
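Once the stack is up, the Ambari Web UI endpoint can be derived from the
MasterNode (or Floating IP) address and the default port 8080. The helper
below is an illustrative sketch; the function name is an assumption, not
part of the templates:

```python
# Build the Ambari Web UI endpoint for a given MasterNode address.
# Port 8080 is the Ambari default used by this deployment;
# ambari_url is a hypothetical helper for illustration.
from urllib.parse import urlunsplit

def ambari_url(host, port=8080):
    return urlunsplit(("http", "%s:%d" % (host, port), "/", "", ""))

print(ambari_url("192.0.2.10"))  # address of the MasterNode / Floating IP
```

Opening this URL in a browser after the ambari-server service has started on
the MasterNode gives access to the cluster management UI.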
-------------------------------
Running the heat template files
-------------------------------
You need to source the OpenStack credential file. You may download a copy of
the credential file from Horizon under Project > Compute > Access & Security >
API Access.

Prior to running the template, please edit and change the default value of
each parameter to one that matches your own environment.
**Example to set up the Hadoop cluster environment**::

    openstack stack create --template BigData.yaml HadoopCluster

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import openstackdocstheme
import subprocess
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# extensions = [
# 'sphinx.ext.autodoc',
# #'sphinx.ext.intersphinx',
# 'oslosphinx'
#]
extensions = []
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Workload-Reference-Architecture'
bug_tag = u"workload-ref-archs"
copyright = u'2017, OpenStack Foundation'
# We ask git for the SHA checksum
# The git SHA checksum is used by "log-a-bug"
giturl = u'https://git.openstack.org/cgit/openstack/workload-ref-archs/tree/doc/source'
git_cmd = ["/usr/bin/git", "log", "-1"]
last_commit = subprocess.Popen(git_cmd, stdout=subprocess.PIPE)
first_line_cmd = ["head", "-n1"]
gitsha = subprocess.Popen(first_line_cmd, stdin=last_commit.stdout,
stdout=subprocess.PIPE).communicate()[0].split()[-1].strip()
# tag that reported bugs will be tagged with
# source tree
# pwd = os.getcwd()
# html_context allows us to pass arbitrary values into the html template
#html_context = {"pwd": pwd, "gitsha": gitsha}
html_context = {"gitsha": gitsha, "bug_tag": bug_tag,
"giturl": giturl,
"bug_project": "workload-ref-archs"}
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# Must set this variable to include year, month, day, hours, and minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
html_theme = 'openstackdocs'
html_theme_path = [openstackdocstheme.get_html_theme_path()]
# If false, no index is generated.
html_use_index = False
# If true, links to the reST sources are added to the pages.
# This one is needed for "Report a bug".
html_show_sourcelink = False
# If true, publish source files
html_copy_source = False
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'OpenStack Workload Reference Architecture',
u'OpenStack Enterprise Working Group', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# -- Options for PDF output --------------------------------------------------
#pdf_documents = [
# ('index', u'openstack-workload-ref-archs-documentation',
# u'OpenStack Workload Reference Architectures',
# u'OpenStack contributors')
#]

============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

=============================================
Workload Reference Architecture for OpenStack
=============================================
Abstract
~~~~~~~~
The OpenStack Enterprise Working Group is developing a series of 'tenant-level'
reference architectures to help users to learn and understand how to deploy
different types of workloads on an OpenStack Cloud. These reference
architectures describe the list of OpenStack services that can be used to
support the various workloads and provide sample code to bootstrap the workload
environment on an OpenStack Cloud.
Contents
~~~~~~~~
.. toctree::
:maxdepth: 2
web-applications/web-applications.rst
big-data/big-data.rst
Search in the reference architectures
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* :ref:`search`


@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install workload-ref-archs
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv workload-ref-archs
$ pip install workload-ref-archs


@ -1 +0,0 @@
.. include:: ../../README.rst


@ -1,7 +0,0 @@
========
Usage
========
To use workload-ref-archs in a project::
import workload_ref_archs

(Four binary image files deleted: 18 KiB, 29 KiB, 52 KiB, 63 KiB.)

@ -1,94 +0,0 @@
Web Applications Reference Architecture Sample Heat Template
============================================================
These heat templates deploy WordPress on a 3-Tier LAMP architecture. There are
two versions of the primary template, one which creates a static environment
which does not require ceilometer, and one which provides autoscaling of the
web and application tiers based on CPU load, which does require ceilometer.
**The WordPress 3-Tier LAMP Architecture Sample**
====== ====================== =====================================
Tier Function Details
====== ====================== =====================================
Web Reverse Proxy Server Apache + mod_proxy
App WordPress Server Apache, PHP, MySQL Client, WordPress
Data Database Server MySQL
====== ====================== =====================================
**NOTE:** The sample WordPress application was tested with CentOS 7,
Ubuntu Trusty, and Ubuntu Xenial.
-----------------
Heat File Details
-----------------
The template uses a nested structure, with two different primary yaml files,
both of which utilize the same four nested files. The templates were tested
using the Newton release of OpenStack, with Ubuntu Server 14.04 and CentOS 7.
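Both primary templates expect the four shared files in a ``nested``
subdirectory alongside them, i.e. a layout like::

    .
    ├── WebAppStatic.yaml
    ├── WebAppAutoScaling.yaml
    └── nested/
        ├── setup_net_sg.yaml
        ├── heat_web_tier.yaml
        ├── heat_app_tier.yaml
        └── heat_sql_tier.yaml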
**WebAppStatic.yaml:** If you want a static environment, run this yaml file.
It creates a static environment with two load-balanced web servers, two
load-balanced application servers, and a single database server that uses
cinder block storage for the database files.
REQUIRED PARAMETERS:
* ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
* db_instance_flavor, app_instance_flavor, web_instance_flavor,
db_server_name, app_server_name, web_server_name, dns_nameserver
**WebAppAutoScaling.yaml:** If you want a dynamic autoscaling environment,
run this yaml file. It sets up heat autoscaling groups for the web and
application tiers.
REQUIRED PARAMETERS:
* ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
* db_instance_flavor, app_instance_flavor, web_instance_flavor,
db_server_name, app_server_name, web_server_name, dns_nameserver
The following four yaml files are called by the primary files above and are
by default expected to be in a nested subdirectory:
**setup_net_sg.yaml:** This file creates three separate private networks, one
for each tier. It also creates two load balancers (using neutron LBaaS v2):
one with a public IP that connects the web private network to the public
network, and one with a private IP that connects the web network to the
application network. The template also creates a router connecting the
application network to the database network, as well as three security
groups, one for each tier.
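As a rough illustration of the pattern this file follows (the resource and
group names below are placeholders, not the template's actual names), a
web-tier security group in HOT looks like::

    heat_template_version: 2016-10-14

    resources:
      # Hypothetical web-tier security group: HTTP from anywhere, SSH for admin
      web_security_group:
        type: OS::Neutron::SecurityGroup
        properties:
          name: Workload_Web_SG
          rules:
            - protocol: tcp
              port_range_min: 80
              port_range_max: 80
              remote_ip_prefix: 0.0.0.0/0
            - protocol: tcp
              port_range_min: 22
              port_range_max: 22
              remote_ip_prefix: 0.0.0.0/0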
**heat_web_tier.yaml:** This template file launches the web tier nodes.
In addition to launching instances, it installs and configures Apache with
mod_proxy, which is used to redirect traffic to the application nodes.
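The proxy configuration the web nodes end up with is conceptually equivalent
to this mod_proxy fragment (the VIP address here is a placeholder; the
template generates the real value from the application load balancer's IP)::

    # Forward all incoming traffic to the application-tier LBaaS VIP
    <VirtualHost *:80>
        ProxyPreserveHost On
        ProxyPass        "/" "http://10.0.1.100/"
        ProxyPassReverse "/" "http://10.0.1.100/"
    </VirtualHost>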
**heat_app_tier.yaml:** This template file launches the application tier nodes.
In addition to launching the instances, it installs Apache, PHP, MySQL client,
and finally WordPress.
**heat_sql_tier.yaml:** This template file launches the database tier node
and installs MySQL. It also creates a cinder block device to store the
database files, and creates the required users and databases for the
WordPress application.
-------------------------------
Running the heat template files
-------------------------------
First you need to source your credential file. You can download a copy of the
credential file from Horizon under Project > Compute > Access & Security >
API Access.
**Example to setup the static environment**::
openstack stack create --template WebAppStatic.yaml --parameter ssh_key_name=mykey --parameter image_id=ubuntu --parameter dns_nameserver="8.8.8.8,8.8.4.4" --parameter public_network_id=external_network ThreeTierLAMP
**Example to setup the autoscaling environment**::
openstack stack create --template WebAppAutoScaling.yaml --parameter ssh_key_name=mykey --parameter image_id=centos --parameter dns_nameserver="8.8.8.8,8.8.4.4" --parameter public_network_id=external_network ThreeTierLAMP
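Once a stack is launched, the usual client commands can be used to follow its
progress and retrieve its outputs (stack name as in the examples above)::

    # Watch for the stack to reach CREATE_COMPLETE
    openstack stack list
    # Read the floating IP assigned to the web load balancer
    openstack stack output show ThreeTierLAMP web_lbaas_ip
    # Delete the stack (and everything it created) when finished
    openstack stack delete ThreeTierLAMP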


@ -1,332 +0,0 @@
heat_template_version: 2016-10-14
#The value of heat_template_version tells Heat not only the format of the template but also features that will be validated and supported
#2016-10-14 represents the Newton release
description: >
This is the main Heat template for the Web Applications Workload Reference
Architecture created by the Enterprise Working Group.
This template contains the autoscaling code and calls nested templates which actually do the
majority of the work. Ceilometer is required in order to run this template.
This file calls the following yaml files in a ./nested subdirectory
setup_net_sg.yaml sets up the security groups and networks for Web, App, and Database
heat_app_tier.yaml starts up application servers and does on-the-fly builds
heat_web_tier.yaml starts up web servers and does on-the-fly builds
heat_sql_tier.yaml starts up mysql server and does on-the-fly builds.
NOTE: This serves as a guide to new users and is not meant for production deployment.
REQUIRED YAML FILES:
setup_net_sg.yaml, heat_app_tier.yaml, heat_sql_tier.yaml, heat_web_tier.yaml
REQUIRED PARAMETERS:
ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, app_server_name, web_server_name, dns_nameserver
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
######################################
#The parameters section allows for specifying input parameters to the template
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID/Name in your project/tenant. This could be modified to use different
images for each tier.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
constraints:
- custom_constraint: neutron.network
description: Must be a valid network on your cloud
db_instance_flavor:
type: string
label: Database server instance flavor
description: The flavor type to use for db server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
app_instance_flavor:
type: string
label: Application server instance flavor
description: The flavor type to use for app servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
web_instance_flavor:
type: string
label: Web server instance flavor
description: The flavor type to use for web servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
db_server_name:
type: string
label: Server Name
description: Name of the database servers
default: db_server
app_server_name:
type: string
label: Server Name
description: Name of the application servers
default: app_server
web_server_name:
type: string
label: Server Name
description: Name of the web servers
default: web_server
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver in list format
default: 8.8.8.8,8.8.4.4
######################################
#The resources section defines actual resources that make up a stack deployed from the HOT template (for instance compute instances, networks, storage volumes).
resources:
####################
#Setup Networking and Security Group
#Call the setup_net_sg.yaml file
network_setup:
type: nested/setup_net_sg.yaml
properties:
public_network_id: { get_param: public_network_id }
dns_nameserver: { get_param: dns_nameserver }
####################
##Kick off a Database server
launch_db_server:
type: nested/heat_sql_tier.yaml
properties:
ssh_key_name: { get_param: ssh_key_name }
server_name: { get_param: db_server_name }
instance_flavor: { get_param: db_instance_flavor }
image_id: { get_param: image_id }
private_network_id: {get_attr: [network_setup, db_private_network_id]}
security_group: {get_attr: [network_setup, db_security_group_id]}
####################
#Autoscaling for the app servers
app_autoscale_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 2
min_size: 1
max_size: 5
resource:
type: nested/heat_app_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: app_server_name
instance_flavor:
get_param: app_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, app_private_network_id]}
private_subnet_id: {get_attr: [network_setup, app_private_subnet_id]}
security_group: {get_attr: [network_setup, app_security_group_id]}
pool_name: {get_attr: [network_setup, app_lbaas_pool_name]}
db_server_ip: {get_attr: [launch_db_server, instance_ip]}
#created unique tag to be used by ceilometer to identify meters specific to the app nodes
#without some unique metadata tag, ceilometer will group together all resources in the tenant
metadata: {"metering.autoscale_group_name": "app_autoscale_group"}
####################
app_scaleup_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: app_autoscale_group }
#cooldown prevents duplicate alarms while instances spin up. Set the value large
#enough to allow for instance to startup and begin taking requests.
cooldown: 900
scaling_adjustment: 1
app_cpu_alarm_high:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
#period needs to be greater than the sampling rate in the pipeline.yaml file in /etc/ceilometer
period: 600
evaluation_periods: 1
#Alarms if CPU utilization for ALL app nodes averaged together exceeds 50%
threshold: 50
repeat_actions: true
alarm_actions:
- {get_attr: [app_scaleup_policy, alarm_url]}
#Collect data only on servers with the autoscale_group_name metadata set to app_autoscale_group
#Otherwise ceilometer would look at all servers in the project
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "app_autoscale_group"}
comparison_operator: gt
app_scaledown_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: app_autoscale_group }
#cooldown prevents duplicate alarms while instances shut down. Set the value large
#enough to allow for instance to shutdown and things stabilize.
cooldown: 900
scaling_adjustment: -1
app_cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
#period needs to be greater than the sampling rate in the pipeline.yaml file in /etc/ceilometer
period: 600
evaluation_periods: 1
#Alarms if CPU utilization for ALL app nodes averaged together drops below 20%
threshold: 20
repeat_actions: true
alarm_actions:
- {get_attr: [app_scaledown_policy, alarm_url]}
#Collect data only on servers with the autoscale_group_name metadata set to app_autoscale_group
#Otherwise ceilometer would look at all servers in the project
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "app_autoscale_group"}
comparison_operator: lt
####################
#Autoscaling for the web servers
web_autoscale_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 2
min_size: 1
max_size: 5
resource:
type: nested/heat_web_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: web_server_name
instance_flavor:
get_param: web_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, web_private_network_id]}
private_subnet_id: {get_attr: [network_setup, web_private_subnet_id]}
app_lbaas_vip: {get_attr: [network_setup, app_lbaas_IP]}
security_group: {get_attr: [network_setup, web_security_group_id]}
pool_name: {get_attr: [network_setup, web_lbaas_pool_name]}
metadata: {"metering.autoscale_group_name": "web_autoscale_group"}
####################
web_scaleup_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: web_autoscale_group }
cooldown: 900
scaling_adjustment: 1
web_cpu_alarm_high:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
period: 600
evaluation_periods: 1
threshold: 50
repeat_actions: true
alarm_actions:
- {get_attr: [web_scaleup_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "web_autoscale_group"}
comparison_operator: gt
web_scaledown_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: web_autoscale_group }
cooldown: 900
scaling_adjustment: -1
web_cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
period: 600
evaluation_periods: 1
threshold: 20
repeat_actions: true
alarm_actions:
- {get_attr: [web_scaledown_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "web_autoscale_group"}
comparison_operator: lt
######################################
#The outputs section defines output parameters that should be available to the user after a stack has been created.
outputs:
web_lbaas_ip:
description: >
This is the floating IP assigned to the WEB LoadBalancer.
value: {get_attr: [network_setup, web_lbaas_IP]}
app_lbaas_ip:
description: >
This is the floating IP assigned to the Application LoadBalancer.
value: {get_attr: [network_setup, app_lbaas_IP]}
web_scale_up_url:
description: >
This URL is the webhook to scale up the WEB autoscaling group. You
can invoke the scale-up operation by doing an HTTP POST to this
URL; no body or extra headers are needed, but you do need to be authenticated.
Example: source openrc; curl -X POST "<url>"
value: {get_attr: [web_scaleup_policy, alarm_url]}
web_scale_down_url:
description: >
This URL is the webhook to scale down the WEB autoscaling group.
value: {get_attr: [web_scaledown_policy, alarm_url]}
app_scale_up_url:
description: >
This URL is the webhook to scale up the application autoscaling group. You
can invoke the scale-up operation by doing an HTTP POST to this
URL; no body or extra headers are needed.
value: {get_attr: [app_scaleup_policy, alarm_url]}
app_scale_down_url:
description: >
This URL is the webhook to scale down the application autoscaling group.
value: {get_attr: [app_scaledown_policy, alarm_url]}


@ -1,204 +0,0 @@
heat_template_version: 2016-10-14
#The value of heat_template_version tells Heat not only the format of the template but also features that will be validated and supported
#2016-10-14 represents the Newton release
description: >
This is the main Heat template for the Web Applications Workload Reference
Architecture created by the Enterprise Working Group.
This version of the template does not include autoscaling, and does not require ceilometer.
This template calls multiple nested templates which actually do the
majority of the work. This file calls the following yaml files in a ./nested subdirectory
setup_net_sg.yaml sets up the security groups and networks for Web, App, and Database
heat_app_tier.yaml starts up application servers and does on-the-fly builds
heat_web_tier.yaml starts up web servers and does on-the-fly builds
heat_sql_tier.yaml starts up mysql server and does on-the-fly builds.
NOTE: This serves as a guide to new users and is not meant for production deployment.
REQUIRED YAML FILES:
setup_net_sg.yaml, heat_app_tier.yaml, heat_sql_tier.yaml, heat_web_tier.yaml
REQUIRED PARAMETERS:
ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, app_server_name, web_server_name, dns_nameserver
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
######################################
#The parameters section allows for specifying input parameters to the template
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID/Name in your project/tenant. This could be modified to use different
images for each tier.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
constraints:
- custom_constraint: neutron.network
description: Must be a valid network on your cloud
db_instance_flavor:
type: string
label: Database server instance flavor
description: The flavor type to use for db server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
app_instance_flavor:
type: string
label: Application server instance flavor
description: The flavor type to use for app servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
web_instance_flavor:
type: string
label: Web server instance flavor
description: The flavor type to use for web servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
db_server_name:
type: string
label: Server Name
description: Name of the database servers
default: db_server
app_server_name:
type: string
label: Server Name
description: Name of the application servers
default: app_server
web_server_name:
type: string
label: Server Name
description: Name of the web servers
default: web_server
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver in list format
default: 8.8.8.8,8.8.4.4
######################################
#The resources section defines actual resources that make up a stack deployed from the HOT template (for instance compute instances, networks, storage volumes).
resources:
####################
#Setup Networking and Security Group
#Call the setup_net_sg.yaml file
network_setup:
type: nested/setup_net_sg.yaml
properties:
public_network_id: { get_param: public_network_id }
dns_nameserver: { get_param: dns_nameserver }
####################
##Kick off a Database server
launch_db_server:
type: nested/heat_sql_tier.yaml
properties:
ssh_key_name: { get_param: ssh_key_name }
server_name: { get_param: db_server_name }
instance_flavor: { get_param: db_instance_flavor }
image_id: { get_param: image_id }
private_network_id: {get_attr: [network_setup, db_private_network_id]}
security_group: {get_attr: [network_setup, db_security_group_id]}
####################
##Kick off two application servers
#Utilizing Heat resourcegroup to kick off multiple copies
app_server_resource_group:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: nested/heat_app_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: app_server_name
instance_flavor:
get_param: app_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, app_private_network_id]}
private_subnet_id: {get_attr: [network_setup, app_private_subnet_id]}
security_group: {get_attr: [network_setup, app_security_group_id]}
pool_name: {get_attr: [network_setup, app_lbaas_pool_name]}
db_server_ip: {get_attr: [launch_db_server, instance_ip]}
#Just passing something for metadata, it's not used in this script but is used in
#the autoscaling script
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
####################
##Kick off two web servers
#Utilizing Heat resourcegroup to kick off multiple copies
web_server_resource_group:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: nested/heat_web_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: web_server_name
instance_flavor:
get_param: web_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, web_private_network_id]}
private_subnet_id: {get_attr: [network_setup, web_private_subnet_id]}
app_lbaas_vip: {get_attr: [network_setup, app_lbaas_IP]}
security_group: {get_attr: [network_setup, web_security_group_id]}
pool_name: {get_attr: [network_setup, web_lbaas_pool_name]}
#Just passing something for metadata, it's not used in this script but is used in
#the autoscaling script
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
######################################
#The outputs section defines output parameters that should be available to the user after a stack has been created.
outputs:
web_lbaas_ip:
description: >
This is the floating IP assigned to the WEB LoadBalancer.
value: {get_attr: [network_setup, web_lbaas_IP]}
app_lbaas_ip:
description: >
This is the floating IP assigned to the Application LoadBalancer.
value: {get_attr: [network_setup, app_lbaas_IP]}


@ -1,167 +0,0 @@
heat_template_version: 2016-10-14
description: >
This is a nested Heat template used by the Web Applications Workload Reference Architecture
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting Wordpress. This template file launches the application
tier nodes, and installs Apache, PHP, MySQL client, and finally WordPress.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
default: cloudkey
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
default: App_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
private_network_id:
type: string
default: App_Tier_private_network
description: The private Application network that will be utilized for all App servers
constraints:
- custom_constraint: neutron.network
description: Must be a valid private network on your cloud
private_subnet_id:
type: string
description: Private subnet of the LBaaS Pool
default: private_subnet
constraints:
- custom_constraint: neutron.subnet
description: Must be a valid private subnet on your cloud
security_group:
type: string
default: Workload_App_SG
description: The Application security group that will be utilized for all App servers
pool_name:
type: string
description: LBaaS Pool to join
constraints:
- custom_constraint: neutron.lbaas.pool
description: Must be a LBaaS pool on your cloud
db_server_ip:
type: string
description: Database Server IP
metadata:
type: json
resources:
app_server:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
metadata: { get_param: metadata }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
params:
$db_server_ip: { get_param: db_server_ip }
template: |
#!/bin/bash -v
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
if lsb_release -a | grep xenial
then
apt-get -y install apache2 php libapache2-mod-php php-mysql php-gd mysql-client
apt-get -y install policycoreutils
ufw app info "Apache Full"
fi
if lsb_release -a | grep -i trusty
then
#Install PHP5, and mysql
apt-get -y install apache2 php5 libapache2-mod-php5 php5-mysql php5-gd mysql-client
fi
elif which yum &> /dev/null
then
yum update -y
#Install PHP5, and mysql
setenforce 0
yum install -y php php-mysql
yum install -y wget
yum install -y php-gd
fi
# download and install wordpress
wget http://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
# configure wordpress
cp wordpress/wp-config-sample.php wordpress/wp-config.php
sed -i 's/database_name_here/wordpress/' wordpress/wp-config.php
sed -i 's/username_here/wordpress_user/' wordpress/wp-config.php
sed -i 's/password_here/wordpress/' wordpress/wp-config.php
sed -i 's/localhost/$db_server_ip/' wordpress/wp-config.php
# install a copy of the configured wordpress into apache's www directory
rm /var/www/html/index.html
cp -R wordpress/* /var/www/html/
# give apache ownership of the application files
chown -R www-data:www-data /var/www/html/
chown -R apache:apache /var/www/html/
chmod -R g+w /var/www/html/
#Allow remote database connection
setsebool -P httpd_can_network_connect=1
systemctl restart httpd.service
systemctl restart apache2
Pool_Member:
type: OS::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: pool_name}
address: {get_attr: [app_server, first_address]}
protocol_port: 80
subnet: {get_param: private_subnet_id}
outputs:
app_private_ip:
description: Private IP address of the Web node
value: { get_attr: [app_server, first_address] }
lb_member:
description: LoadBalancer member details.
value: { get_attr: [Pool_Member, show] }


@ -1,222 +0,0 @@
heat_template_version: 2016-10-14
description: >
This is a nested Heat template used by the Web Applications Workload Reference Architecture
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting Wordpress. This template file launches the database
tier node, creates a cinder block device to store the database files and creates
the required users and databases for the WordPress application.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
default: cloudkey
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
hidden: false
default: DB_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
private_network_id:
type: string
default: DB_Tier_private_network
description: The private database network that will be utilized for all DB servers
constraints:
- custom_constraint: neutron.network
description: Must be a valid private network on your cloud
security_group:
type: string
default: Workload_DB_SG
description: The database security group that will be utilized for all DB servers
db_name:
type: string
description: MYSQL database name
default: wordpress
constraints:
- length: { min: 1, max: 64 }
description: db_name must be between 1 and 64 characters
- allowed_pattern: '[a-zA-Z][a-zA-Z0-9]*'
description: >
db_name must begin with a letter and contain only alphanumeric
characters
db_username:
type: string
description: MYSQL database admin account username
default: wordpress_user
hidden: true
db_password:
type: string
description: MYSQL database admin account password
default: wordpress
hidden: true
constraints:
- length: { min: 1, max: 41 }
description: db_password must be between 1 and 41 characters
- allowed_pattern: '[a-zA-Z0-9]*'
description: db_password must contain only alphanumeric characters
db_root_password:
type: string
description: Root password for MySQL
default: admin
hidden: true
constraints:
- length: { min: 1, max: 41 }
description: db_root_password must be between 1 and 41 characters
- allowed_pattern: '[a-zA-Z0-9]*'
description: db_root_password must contain only alphanumeric characters
db_volume_size:
type: string
description: Database cinder volume size (in GB) for database files
default: 2
hidden: true
resources:
#Setup a cinder volume for storage of the database files
db_files_volume:
type: OS::Cinder::Volume
properties:
size: { get_param: db_volume_size }
name: DB_Files
db_volume_attachment:
type: OS::Cinder::VolumeAttachment
properties:
volume_id: { get_resource: db_files_volume }
instance_uuid: { get_resource: MYSQL_instance }
#Install MySQL and setup wordpress DB and set usernames and passwords
MYSQL_instance:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash -v
#make mount point for cinder volume and prepare volume
mkdir /mnt/db_files
chown mysql:mysql /mnt/db_files
volume_path="/dev/disk/by-id/virtio-$(echo volume_id | cut -c -20)"
echo ${volume_path}
mkfs.ext4 ${volume_path}
echo "${volume_path} /mnt/db_files ext4 defaults 1 2" >> /etc/fstab
mount /mnt/db_files
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
#Next line stops mysql install from popping up request for root password
export DEBIAN_FRONTEND=noninteractive
apt-get install -q -y --force-yes mariadb-server
mkdir -p /var/log/mariadb
touch /var/log/mariadb/mariadb.log
chown mysql:mysql /var/log/mariadb/mariadb.log
#Ubuntu mysql install blocks remote access by default
sed -i 's/bind-address/#bind-address/' /etc/mysql/my.cnf
service mysql stop
#Move the database to the cinder device
mv -f /var/lib/mysql /mnt/db_files/
#edit data file location in the mysql config file
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/mysql/my.cnf
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/mysql/mariadb.conf.d/50-server.cnf
sed -i 's/127.0.0.1/0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
service mysql start
elif which yum &> /dev/null
then
yum update -y
setenforce 0
yum -y install mariadb-server mariadb
systemctl start mariadb
systemctl stop mariadb
chown mysql:mysql /mnt/db_files
touch /var/log/mariadb/mariadb.log
chown mysql:mysql /var/log/mariadb/mariadb.log
#Move the database to the cinder device
mv -f /var/lib/mysql /mnt/db_files/
#edit data file location in the mysql config file
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/my.cnf
#need to modify the socket info for the clients
echo "[client]" >> /etc/my.cnf
echo "socket=/mnt/db_files/mysql/mysql.sock" >> /etc/my.cnf
systemctl start mariadb
systemctl enable mariadb
fi
# Setup MySQL root password and create a user and add remote privs to app subnet
mysqladmin -u root password db_rootpassword
# create wordpress database
cat << EOF | mysql -u root --password=db_rootpassword
CREATE DATABASE db_name;
CREATE USER 'db_user'@'localhost';
SET PASSWORD FOR 'db_user'@'localhost'=PASSWORD("db_password");
GRANT ALL PRIVILEGES ON db_name.* TO 'db_user'@'localhost' IDENTIFIED BY 'db_password';
CREATE USER 'db_user'@'%';
SET PASSWORD FOR 'db_user'@'%'=PASSWORD("db_password");
GRANT ALL PRIVILEGES ON db_name.* TO 'db_user'@'%' IDENTIFIED BY 'db_password';
FLUSH PRIVILEGES;
EOF
params:
db_rootpassword: { get_param: db_root_password }
db_name: { get_param: db_name }
db_user: { get_param: db_username }
db_password: { get_param: db_password }
volume_id: {get_resource: db_files_volume }
outputs:
completion:
description: >
MySQL setup is complete; the database name and admin account are
value:
str_replace:
template: >
Database Name=$dbName, Database Admin Acct=$dbAdmin
params:
$dbName: { get_param: db_name }
$dbAdmin: { get_param: db_username }
instance_ip:
description: IP address of the deployed compute instance
value: { get_attr: [MYSQL_instance, first_address] }

heat_template_version: 2016-10-14
description: >
This is a nested Heat used by the Web Applications Workload Reference Architecture
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting WordPress. This template installs and configures
Apache with mod_proxy, which is used to redirect traffic to the application nodes.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH keypair to enable SSH access to instances.
default: cloudkey
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
hidden: false
default: Web_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
private_network_id:
type: string
default: Web_Tier_private_network
description: The private Web network that will be utilized for all web servers
constraints:
- custom_constraint: neutron.network
description: Must be a valid private network on your cloud
private_subnet_id:
type: string
description: Private subnet of the LBaaS Pool
default: private_subnet
constraints:
- custom_constraint: neutron.subnet
description: Must be a valid private subnet on your cloud
security_group:
type: string
default: Workload_Web_SG
description: The Web security group that will be utilized for all web servers
pool_name:
type: string
description: LBaaS Pool to join
constraints:
- custom_constraint: neutron.lbaas.pool
description: Must be a LBaaS pool on your cloud
app_lbaas_vip:
type: string
description: Application LBaaS virtual IP
metadata:
type: json
resources:
web_server:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
metadata: { get_param: metadata }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
params:
$app_lbaas_vip: { get_param: app_lbaas_vip }
template: |
#!/bin/bash -v
#centos has this "security" feature in sudoers to keep scripts from sudo, comment it out
sed -i '/Defaults \+requiretty/s/^/#/' /etc/sudoers
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
#Install Apache
apt-get -y --force-yes install apache2
apt-get install -y libapache2-mod-proxy-html libxml2-dev
apt-get install -y build-essential
a2enmod proxy
a2enmod proxy_http
a2enmod rewrite
a2enmod proxy_ajp
a2enmod deflate
a2enmod headers
a2enmod proxy_connect
a2enmod proxy_html
cat > /etc/apache2/sites-enabled/000-default.conf << EOL
<VirtualHost *:*>
ProxyPreserveHost On
ProxyPass / http://$app_lbaas_vip/ Keepalive=On
ProxyPassReverse / http://$app_lbaas_vip/
ServerName localhost
</VirtualHost>
EOL
echo `hostname -I` `hostname` >> /etc/hosts
/etc/init.d/apache2 restart
elif which yum &> /dev/null
then
yum update -y
#Install Apache
yum install -y httpd
yum install -y wget
cat >> /etc/httpd/conf/httpd.conf << EOL
<VirtualHost *:*>
ProxyPreserveHost On
ProxyPass / http://$app_lbaas_vip/
ProxyPassReverse / http://$app_lbaas_vip/
ServerName localhost
</VirtualHost>
EOL
service httpd restart
fi
Pool_Member:
type: OS::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: pool_name}
address: {get_attr: [web_server, first_address]}
protocol_port: 80
subnet: {get_param: private_subnet_id}
outputs:
web_private_ip:
description: Private IP address of the Web node
value: { get_attr: [web_server, first_address] }
lb_member:
description: LoadBalancer member details.
value: { get_attr: [Pool_Member, show] }

heat_template_version: 2016-10-14
description: >
This is a nested Heat used by the Web Applications Workload Reference Architecture
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting WordPress. This template file creates three separate
private networks, two load balancers (LBaaS V2), and three security groups.
This serves as a guide to new users and is not meant for production deployment.
REQUIRED PARAMETERS:
public_network_id
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
parameters:
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
constraints:
- custom_constraint: neutron.network
description: Must be a valid public network on your cloud
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver
default: 8.8.8.8,8.8.4.4
#####################################################
resources:
#Create 3 private Networks, one for each Tier
# create a private network/subnet for the web servers
web_private_network:
type: OS::Neutron::Net
properties:
name: Web_Tier_private_network
web_private_network_subnet:
type: OS::Neutron::Subnet
#depends_on makes sure that a resource is created prior to a get_resource command
#otherwise you may have race conditions and a stack will work sometimes and not others
depends_on: [web_private_network]
properties:
cidr: 192.168.100.0/24
#start IP allocation at .10 to allow room for the hard coded port IPs
allocation_pools: [{ "start": 192.168.100.10, "end": 192.168.100.200 }]
#Need to define default gateway in order for LBaaS namespace to pick it up
#If you let neutron grant a default gateway IP, then the LBaaS namespace may
#not pick it up and you will have routing issues
gateway_ip: 192.168.100.4
#Need to add a route from web network to app network otherwise everything will go
#out the default route
host_routes: [{"destination": 192.168.101.0/24, "nexthop": 192.168.100.5}]
network: { get_resource: web_private_network }
name: Web_Tier_private_subnet
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router between the public/external network and the web network
public_router:
type: OS::Neutron::Router
properties:
name: PublicWebRouter
external_gateway_info:
network: { get_param: public_network_id }
# attach the web private network to the public router
public_router_interface:
type: OS::Neutron::RouterInterface
#Make sure the public router and web subnet have been created first
depends_on: [public_router, web_private_network_subnet]
properties:
router: { get_resource: public_router }
subnet: { get_resource: web_private_network_subnet }
#############################
# create a private network/subnet for the Application servers
App_private_network:
type: OS::Neutron::Net
properties:
name: App_Tier_private_network
App_private_network_subnet:
type: OS::Neutron::Subnet
depends_on: [App_private_network]
properties:
cidr: 192.168.101.0/24
#start IP allocation at .10 to allow room for the hard coded gateway IPs
allocation_pools: [{ "start": 192.168.101.10, "end": 192.168.101.200 }]
#Need to define default gateway in order for LBaaS namespace to pick it up
#If you let neutron grant a default gateway IP, then the LBaaS namespace may
#not pick it up and you will have routing issues
gateway_ip: 192.168.101.5
#This routing information will get passed to the instances as they startup
#Provide both the routes to the DB network and to the web network
host_routes: [{"destination": 192.168.100.0/24, "nexthop": 192.168.101.5}, {"destination": 192.168.102.0/24, "nexthop": 192.168.101.6}, {"destination": 0.0.0.0/0, "nexthop": 192.168.100.4}]
network: { get_resource: App_private_network }
name: App_Tier_private_subnet
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router linking App and Web network
App_router:
type: OS::Neutron::Router
properties:
name: "AppWebRouter"
external_gateway_info: {"network": { get_param: public_network_id }, "enable_snat": True}
# Create a port connecting the App_router to the App network
web_router_app_port:
type: OS::Neutron::Port
depends_on: [App_private_network]
properties:
name: "App_Net_Port"
network: { get_resource: App_private_network }
#Assign the default gateway address
#The default gateway will get set as the default route in the LBaaS namespace
fixed_ips: [{"ip_address": 192.168.101.5}]
# Create a port connecting the App_router to the Web network
web_router_web_port:
type: OS::Neutron::Port
depends_on: [web_private_network]
properties:
name: "Web_Net_Port"
network: { get_resource: web_private_network }
fixed_ips: [{"ip_address": 192.168.100.5}]
App_router_interface1:
type: OS::Neutron::RouterInterface
depends_on: [App_router, web_router_app_port]
properties:
router: { get_resource: App_router }
port: { get_resource: web_router_app_port }
App_router_interface2:
type: OS::Neutron::RouterInterface
depends_on: [App_router, web_router_web_port]
properties:
router: { get_resource: App_router }
port: { get_resource: web_router_web_port }
##############################
#Create two Load Balancers one for the Web tier with a public IP and one for the App Tier
#with only private network access
#LBaaS V2 Load Balancer for Web Tier
Web_Tier_LoadBalancer:
type: OS::Neutron::LBaaS::LoadBalancer
depends_on: [web_private_network_subnet,public_router_interface]
properties:
name: Web_LoadBalancer
vip_subnet: {get_resource: web_private_network_subnet}
#LBaaS V2 Listener for Web server pool
Web_Tier_Listener:
type: OS::Neutron::LBaaS::Listener
depends_on: [Web_Tier_LoadBalancer]
properties:
protocol_port: 80
protocol: TCP
loadbalancer: {get_resource: Web_Tier_LoadBalancer }
#LBaaS V2 Pool for Web server pool
Web_Server_Pool:
type: OS::Neutron::LBaaS::Pool
depends_on: [Web_Tier_Listener]
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: Web_Tier_Listener }
protocol: TCP
# Floating_IP:
Web_Network_Floating_IP:
type: OS::Neutron::FloatingIP
depends_on: [Web_Tier_LoadBalancer,public_router_interface]
properties:
floating_network: {get_param: public_network_id}
port_id: { get_attr: [ Web_Tier_LoadBalancer, vip_port_id ] }
#****************************************
#LBaaS V2 Load Balancer for App Tier
App_Tier_LoadBalancer:
type: OS::Neutron::LBaaS::LoadBalancer
depends_on: [App_private_network_subnet]
properties:
name: App_LoadBalancer
vip_subnet: {get_resource: App_private_network_subnet}
#LBaaS V2 Listener for App server pool
App_Tier_Listener:
type: OS::Neutron::LBaaS::Listener
depends_on: [App_Tier_LoadBalancer]
properties:
protocol_port: 80
protocol: TCP
loadbalancer: {get_resource: App_Tier_LoadBalancer }
#LBaaS V2 Pool for App server pool
App_Server_Pool:
type: OS::Neutron::LBaaS::Pool
depends_on: [App_Tier_Listener]
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: App_Tier_Listener }
protocol: TCP
#############################
# create a private network/subnet for the Database servers
DB_private_network:
type: OS::Neutron::Net
properties:
name: DB_Tier_private_network
DB_private_network_subnet:
type: OS::Neutron::Subnet
depends_on: [DB_private_network]
properties:
cidr: 192.168.102.0/24
#start IP allocation at .10 to allow room for the hard coded gateway IPs
allocation_pools: [{ "start": 192.168.102.10, "end": 192.168.102.200 }]
gateway_ip: 192.168.102.6
network: { get_resource: DB_private_network }
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router linking Database and App network
DB_router:
type: OS::Neutron::Router
properties:
name: "AppDBRouter"
external_gateway_info: {"network": { get_param: public_network_id }, "enable_snat": True}
# Create a port connecting the db_router to the db network
db_router_db_port:
type: OS::Neutron::Port
depends_on: [DB_private_network]
properties:
network: { get_resource: DB_private_network }
name: "DB_Net_Port"
fixed_ips: [{"ip_address": 192.168.102.6}]
# Create a port connecting the db_router to the app network
db_router_app_port:
type: OS::Neutron::Port
depends_on: [App_private_network]
properties:
network: { get_resource: App_private_network }
name: "DB_Router_App_Port"
fixed_ips: [{"ip_address": 192.168.101.6}]
# Now lets add our ports to our router
db_router_interface1:
type: OS::Neutron::RouterInterface
depends_on: [DB_router,db_router_db_port]
properties:
router: { get_resource: DB_router }
port: { get_resource: db_router_db_port }
db_router_interface2:
type: OS::Neutron::RouterInterface
depends_on: [DB_router,db_router_app_port]
properties:
router: { get_resource: DB_router }
port: { get_resource: db_router_app_port }
#################
#Create separate security groups for each Tier
# create a specific web security group that routes just web and ssh traffic
web_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: A web-specific security group that passes ports 22 and 80
name: Workload_Web_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 80
port_range_max: 80
# create a specific application layer security group that routes database port 3306 traffic, web and ssh
app_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: An application-specific security group that passes ports 22, 80, and 3306
name: Workload_App_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 80
port_range_max: 80
- protocol: tcp
port_range_min: 3306
port_range_max: 3306
# create a specific database security group that routes just database port 3306 traffic and ssh
db_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: A database-specific security group that passes ports 3306 and 22 for SSH
name: Workload_DB_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 3306
port_range_max: 3306
outputs:
#Return a bunch of values so we can use them later in the Parent Heat template when we spin up servers
db_private_network_id:
description: Database private network ID
value: {get_resource: DB_private_network}
web_private_network_id:
description: Web private network ID
value: {get_resource: web_private_network}
web_private_subnet_id:
description: Web private subnet ID
value: {get_resource: web_private_network_subnet}
app_private_network_id:
description: App private network ID
value: {get_resource: App_private_network}
app_private_subnet_id:
description: App private subnet ID
value: {get_resource: App_private_network_subnet}
db_security_group_id:
description: Database security group ID
value: {get_resource: db_security_group}
app_security_group_id:
description: App security group ID
value: {get_resource: app_security_group}
web_security_group_id:
description: Web security group ID
value: {get_resource: web_security_group}
web_lbaas_pool_name:
description: Name of Web LBaaS Pool
value: {get_resource: Web_Server_Pool}
app_lbaas_pool_name:
description: Name of App LBaaS Pool
value: {get_resource: App_Server_Pool}
web_lbaas_IP:
description: Public floating IP assigned to web LBaaS
value: { get_attr: [ Web_Network_Floating_IP, floating_ip_address ] }
app_lbaas_IP:
description: Internal floating IP assigned to app LBaaS
value: {get_attr: [ App_Tier_LoadBalancer, vip_address]}

OpenStack Workload Reference Architecture: Web Applications
===========================================================
Introduction
------------
Web applications are the most prevalent applications in business today. They
are driven by user interaction over the Internet using a web browser front-end.
Common web applications include webmail, online retail sales, online auctions,
online banking, instant messaging services, and more.
Web applications are typically characterized by IT resource requirements that
fluctuate with usage, predictably or unpredictably. Failure to respond to
either can impact customer satisfaction and sales. An automatically scaling web
application and underlying infrastructure can be essential. Unlike a
traditional, static environment, cloud computing allows IT resources to scale
dynamically, both up and down, based on the application-generated load
(CPU utilization, memory, etc.).
The OpenStack cloud platform offers auto-scaling for web applications as well
as a comprehensive platform for all IT applications, offering agility and
cost-effectiveness. OpenStack is open source cloud software that controls large
pools of compute, storage, and networking resources throughout a datacenter,
all managed through a dashboard or API. Thousands of enterprises use OpenStack
to run their businesses every day.
Intended for enterprise IT architects, this reference architecture describes
the architecture and services required by a simple three-tier web application,
using popular LAMP software on an OpenStack cloud. LAMP consists of Linux,
Apache, MySQL, and PHP/Python/Perl and is considered by many as the platform of
choice for development and deployment of high performance web applications.
We identify and recommend the required and optional OpenStack services for both
a static virtualized implementation and a fully dynamic auto-scaling
implementation. Lastly, we will provide tested implementation files you can
use to install and instantiate an OpenStack web application environment using
Wordpress as the sample application. These files are Heat templates that will
create the virtual servers for each tier, networking, load balancing, and
optionally, auto-scaling.
Figure 1: Three-tier web application architecture overview
.. figure:: figures/figure01.png
:alt: Figure 1: Three-tier web application architecture overview
OpenStack for Web Applications
------------------------------
A three-tier web application consists of the web presentation, the
application, and persistent database tiers.
- Web presentation tier: a cluster of web servers used to render either
static or dynamically generated content for the web browser.
- Application tier: a cluster of application servers used to process
content and business logic.
- Database tier: a cluster of database servers that store data persistently.
An OpenStack cloud is powered by many different services (also known as
projects). Utilizing only the **core services**, a three-tier web services
application can be deployed in a virtualized environment that can be
**manually** scaled up and down as required with minimal effort.
**Optional services** can be added for more functionality:
- OpenStack Orchestration service (Heat project) allows automating workload
deployment.
- Together, Orchestration and Telemetry (Ceilometer) enable dynamic scaling as
load increases and decreases.
- OpenStack Database service (Trove) provides Database-as-a-Service (DBaaS) to
automate database provisioning and administration. Trove is an option for
web applications on OpenStack but is not used in this basic reference
architecture.
Figure 2 shows the core and optional services in relation to one another, and
the services to confirm are available in your OpenStack cloud.
Figure 2. Logical representation of OpenStack services for web applications
.. figure:: figures/figure02.png
:alt: Figure 2. Logical representation of OpenStack services for web applications
Brief descriptions of the core and optional services used for simple
three-tier web applications follow. The `OpenStack Project Navigator <http://www.openstack.org/software/project-navigator/>`_
provides additional information.
.. list-table:: **Core services**
:widths: 20 50
* - Compute (Nova)
- Manages the life cycle of compute instances, including spawning,
scheduling, and decommissioning of virtual machines (VMs) on demand.
* - Image Service (Glance)
- Stores and retrieves VM disk images. Used by OpenStack Compute during
instance provisioning.
* - Block Storage (Cinder)
- Virtualizes the management of block storage devices and provides a
self-service API to request and use those resources regardless of the
physical storage location or device type. Supports popular storage
devices.
* - Networking (Neutron)
- Enables network connectivity as a service for other OpenStack services,
such as OpenStack Compute. Provides an API to define networks and their
attachments. Supports popular networking vendors and technologies. Also
provides LBaaS and Firewall-as-a-Service (FWaaS).
* - Identity Service (Keystone)
- Provides authentication and authorization for the other OpenStack
services.
* - Object Storage (Swift)
- Stores and retrieves arbitrary unstructured data objects via a RESTful
HTTP-based API. Highly fault-tolerant with data replication and
scale-out architecture.
* - Dashboard (Horizon)
- Provides an extensible web-based self-service portal to interact with
underlying OpenStack services, such as launching an instance, assigning
IP addresses, or configuring access controls.
.. list-table:: **Optional services**
:widths: 20 50
* - Orchestration (Heat)
- Orchestrates multiple composite cloud applications by using either the
native HOT template format or the AWS CloudFormation template format,
through both an OpenStack-native REST API and a
CloudFormation-compatible Query API.
* - Telemetry (Ceilometer)
- Monitors and meters the OpenStack cloud for billing, benchmarking,
scalability, and statistical purposes.
* - Database (Trove)
- A database-as-a-service that provisions relational and non-relational
database engines.
Figure 3 illustrates the basic functional interaction between these services.
For further details: `OpenStack Conceptual Architecture Diagram <http://docs.openstack.org/admin-guide/common/get-started-conceptual-architecture.html>`_.
Figure 3. Functional interaction between OpenStack components
.. figure:: figures/figure03.png
:alt: Figure 3. Functional interaction between OpenStack components
Structuring an OpenStack Web Application
----------------------------------------
Generally a three-tier web application consists of a web presentation tier,
application tier, and persistent database tier. This chapter discusses these
and additional architectural components and considerations for an
OpenStack-based web application.
.. list-table::
:widths: 20 50
:header-rows: 1
* - Architectural Components
- Description
* - Web presentation tier
- A cluster of web servers used to render static or dynamically generated
content for the web browser.
* - Application tier
- A cluster of application servers used to process content and business
logic.
* - Database tier
- A cluster of database servers used to store data persistently.
* - Load balancers
- Two load balancers are required to equally distribute load. The first
load balancer distributes the web traffic at the presentation tier. A
separate load balancer is required to distribute the load among the
application servers.
* - Relational Database Management System (RDBMS)
- The database tier in this example uses a master/slave RDBMS
configuration. Data is kept in persistent block storage and backed up
periodically.
* - Firewalls
- For security, a set of firewall rules must be enforced at each tier.
* - Network configuration
- The network must be configured to filter unnecessary traffic at
different tiers.
* - Auto-scaling
- Auto-scaling is desirable to automatically respond to unexpected
traffic spikes and return to normal operation when the load decreases.
Figure 4: OpenStack web application architecture
.. figure:: figures/figure04.png
:alt: Figure 4: OpenStack web application architecture
Load balancing
**************
Load balancing can be based on round robin, least connections, or random. If
the application is not cloud-native and needs to maintain session state,
Load-Balancing-as-a-Service (LBaaS) can be configured to always direct the
request to the same VMs. Neutron allows proprietary and open-source LBaaS
technologies to drive load balancing of requests, allowing the OpenStack
operator to choose. Neutron LBaaS V2.0, available since the OpenStack Liberty
release and supporting Octavia as well as HAProxy backends, is used for this
reference architecture. An alternative to Neutron LBaaS is to set up a
software load balancer by launching instances with HAProxy.
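
The LBaaS V2 pattern used in the sample templates can be sketched as below;
resource names are illustrative, and ``web_subnet`` is assumed to be a subnet
resource defined elsewhere in the same template:

.. code-block:: yaml

   # LBaaS V2: a load balancer on the web subnet, a TCP listener on
   # port 80, and a round-robin pool that server members join.
   web_loadbalancer:
     type: OS::Neutron::LBaaS::LoadBalancer
     properties:
       vip_subnet: { get_resource: web_subnet }

   web_listener:
     type: OS::Neutron::LBaaS::Listener
     properties:
       loadbalancer: { get_resource: web_loadbalancer }
       protocol: TCP
       protocol_port: 80

   web_pool:
     type: OS::Neutron::LBaaS::Pool
     properties:
       listener: { get_resource: web_listener }
       lb_algorithm: ROUND_ROBIN
       protocol: TCP

Each server instance is then registered with the pool through an
``OS::Neutron::LBaaS::PoolMember`` resource, as the nested web-tier template
in this repository does.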
Image management
****************
There are multiple options and tools to provide configuration of servers when
spawning instances of the web, application, and database VMs. On-the-fly
configuration allows greater flexibility but can increase spawning time. The
images can also be pre-configured to contain all of the files, packages and
patches required to boot a fully operational instance. Pre-configuration can
reduce instance build time, but includes its own set of problems, such as
patching and keeping licenses up to date. For this example, the orchestration
features built into Heat are used to spawn and configure the three tiers of
servers on-the-fly.
Persistent storage
******************
Similar to an external hard drive, Cinder volumes are persistent block-storage
virtual devices that may be mounted and dismounted from the VM by the operating
system. Cinder volumes can be attached to only one instance at a time. This
reference architecture creates and attaches a Cinder volume to the database VM
to meet the data persistency requirements for the database tier. In the case of
a database VM failure, a new VM can be created and the Cinder volume can be
re-attached to the new VM.
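
A minimal sketch of this pattern (names are illustrative; ``db_server`` is
assumed to be an ``OS::Nova::Server`` resource defined elsewhere):

.. code-block:: yaml

   # Persistent block storage for the database tier: a 2 GB Cinder
   # volume attached to the database instance.
   db_volume:
     type: OS::Cinder::Volume
     properties:
       size: 2              # volume size in GB
       name: DB_Files

   db_volume_attachment:
     type: OS::Cinder::VolumeAttachment
     properties:
       volume_id: { get_resource: db_volume }
       instance_uuid: { get_resource: db_server }

If the database VM fails, only the attachment resource needs to be recreated
against a replacement server; the volume and its data survive.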
Swift provides highly available, distributed, eventually-consistent
object/BLOB storage. Unlike a physical device, Swift storage is never mounted
to the instance. Objects and metadata are created, modified, and obtained using
the Object Storage API, which is implemented as a set of REpresentational State
Transfer (REST) web services. If the web application requires hosting of static
content (e.g. image, video), use Swift to store it, and configure Swift to
serve the content over HTTP. In this reference architecture, Swift is also used
for storing and archiving the database backup files.
Network subnets
***************
For this workload, Neutron is used to create multiple subnets, one for each
tier: a web subnet, an application subnet, and a data subnet. Neutron routers
are created to route traffic between the subnets.
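
As a sketch, one tier's network plumbing looks like the following (CIDR and
names match the sample templates; ``public_network_id`` is a template
parameter naming the external network):

.. code-block:: yaml

   # One tier: private network, subnet, and a router to the public network.
   web_network:
     type: OS::Neutron::Net
     properties:
       name: Web_Tier_private_network

   web_subnet:
     type: OS::Neutron::Subnet
     properties:
       network: { get_resource: web_network }
       cidr: 192.168.100.0/24
       dns_nameservers: [8.8.8.8]

   public_router:
     type: OS::Neutron::Router
     properties:
       external_gateway_info:
         network: { get_param: public_network_id }

   web_router_interface:
     type: OS::Neutron::RouterInterface
     properties:
       router: { get_resource: public_router }
       subnet: { get_resource: web_subnet }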
Network security
****************
Filtering of inbound traffic is done through the use of security groups.
Different security groups can be created and applied to the instances in each
tier to filter unnecessary network traffic. OpenStack security groups allow
specification of multiple rules to allow/deny traffic from certain protocols,
ports, or IP addresses or ranges. One or more security groups can be applied
to each instance. All OpenStack projects have a "default" security group, which
is applied to instances that have no other security group defined. Unless
changed, the default security group denies all incoming traffic.
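
For example, the web tier's security group in the sample templates admits
only SSH and HTTP:

.. code-block:: yaml

   # Security group passing only SSH (22) and HTTP (80); all other
   # inbound traffic is dropped.
   web_security_group:
     type: OS::Neutron::SecurityGroup
     properties:
       description: Pass only SSH and HTTP traffic
       rules:
         - protocol: tcp
           port_range_min: 22
           port_range_max: 22
         - protocol: tcp
           port_range_min: 80
           port_range_max: 80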
Orchestration
*************
Heat uses template files to automate the deployment of complex cloud
applications and environments. Orchestration is more than just standing up
virtual servers. It can also be used to install software, apply patches,
configure networking and security, and more. The Heat templates provided with
this reference architecture allow the user to quickly and automatically setup
and configure a LAMP-based web services environment.
Auto-scaling
************
The ability to scale horizontally is one of the greatest advantages of cloud
computing. Using a combination of Heat orchestration and Ceilometer, an
OpenStack cloud can be configured to automatically launch additional VMs for
the web and application tiers when demand exceeds preset thresholds. Ceilometer
performs the system resource monitoring and can be configured to alarm when
thresholds are exceeded. Heat then responds to the alarm according to the
configured scale-up policy. Scaling can also be done in the opposite direction,
reducing resources when the demand is low, saving money.
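
A condensed sketch of the scale-up half of this wiring follows (property
values are illustrative; ``heat_app_tier.yaml`` is the nested server template
from this repository):

.. code-block:: yaml

   # Auto-scaling group of app servers, a scale-up policy, and a
   # Ceilometer alarm that fires the policy on sustained high CPU.
   app_autoscaling_group:
     type: OS::Heat::AutoScalingGroup
     properties:
       min_size: 2
       max_size: 4
       resource:
         type: heat_app_tier.yaml

   scale_up_policy:
     type: OS::Heat::ScalingPolicy
     properties:
       adjustment_type: change_in_capacity
       auto_scaling_group_id: { get_resource: app_autoscaling_group }
       scaling_adjustment: 1
       cooldown: 60

   cpu_alarm_high:
     type: OS::Ceilometer::Alarm
     properties:
       meter_name: cpu_util
       statistic: avg
       period: 300          # five-minute average, as described above
       evaluation_periods: 1
       threshold: 50
       comparison_operator: gt
       alarm_actions:
         - { get_attr: [scale_up_policy, alarm_url] }

A mirror-image policy with ``scaling_adjustment: -1`` and a low-CPU alarm
handles scaling back down.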
Demonstration and Sample Code
-----------------------------
This section describes the Heat templates provided as resources for this
workload. They have been created for reference and training and are not
intended to be used unmodified in a production environment.
The Heat templates demonstrate how to configure and deploy WordPress, a
popular web application, on a three-tier LAMP architecture. There are two
versions of the primary template: one that creates a static environment
(manual scaling) and one that integrates with Ceilometer to provide
auto-scaling of the web and application tiers based on CPU load.
The Heat templates can be downloaded from
http://www.openstack.org/software/sample-configs#webapplications
.. list-table::
:widths: 10 20 25
:header-rows: 1
* - Tier
- Function
- Details
* - Web
- Reverse Proxy Server
- Apache + mod_proxy
* - App
- WordPress Server
- Apache, PHP, MySQL Client, WordPress
* - Data
- Database Server
- MySQL
Heat file details
*****************
The Heat template uses a nested structure, with two different primary yaml
files, both of which use the same four nested files. The files contain inline
comments identifying possible issues and pitfalls when setting up the
environment. The templates were tested using the Mitaka release of OpenStack,
with Ubuntu Server 14.04 and CentOS 7.
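As an illustration of the nested structure, the primary template launches copies of a tier by pointing a Heat resource group at one of the nested files, along these lines (the count and property names are illustrative)::

    resources:
      web_tier:
        type: OS::Heat::ResourceGroup
        properties:
          count: 2
          resource_def:
            type: heat_web_tier.yaml
            # properties for the nested template go here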
**WebAppStatic.yaml:** Run this yaml file for a static environment. It creates
a static environment with two load-balanced web servers, two load-balanced
application servers, and a single database server using Cinder block storage
for the database. This yaml file utilizes Heat resource groups to call
heat_app_tier.yaml and heat_web_tier.yaml, launching multiple copies of the web
and application servers.
**WebAppAutoScaling.yaml:** For a dynamic auto-scaling environment, run this
yaml file. It sets up Heat auto-scaling groups and Ceilometer alarms for both
the web and application tiers. The high-CPU Ceilometer alarms are configured by
default to add an instance when the average CPU utilization is greater than 50%
over a five-minute period. The low CPU alarms are configured to remove an
instance when the average CPU utilization drops below 20%. When configuring
Ceilometer CPU alarms, it's important to keep in mind that the alarm by default
looks at the average CPU utilization over all instances in the OpenStack
project or tenant. Metadata can be used to create unique tags to identify
groups of nodes, and then have the alarm trigger only when the average CPU
utilization of the group exceeds the threshold. Ceilometer does not look at the
CPU utilization on each of the instances; only the average utilization is
reported. Another important tip: ensure that the selected "period" used to
monitor the nodes is greater than the sampling interval configured in the
/etc/ceilometer/pipeline.config file. If the sampling interval is longer than
the period, the alarm will never be activated.
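In template terms, the tagging described above has two halves: the scaling group stamps a metadata key onto its instances, and the alarm filters on that key (the key name below is illustrative)::

    # On the scaled resource inside OS::Heat::AutoScalingGroup:
    metadata: {"metering.autoscale_group_name": "app_autoscale_group"}

    # On the OS::Ceilometer::Alarm:
    matching_metadata:
      metadata.user_metadata.autoscale_group_name: app_autoscale_group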
The following yaml files are called by the primary files above:
- **setup_net_sg.yaml:** This is the first file called by the main templates.
This file creates three separate private networks, one for each tier. In
addition, it creates two load balancers (using Neutron LBaaS V1.0): one with
a public IP that connects the web tier private network to the public network,
and one with a private IP that connects the web tier network to the
application tier network. The template also creates a router connecting the
application network to the database network. In addition to the networks and
routers, the template creates three security groups, one for each of the
tiers.
- **heat_web_tier.yaml:** This template file launches the web tier nodes. In
addition to launching instances, it installs and configures Apache and
mod_proxy, which is used to redirect traffic to the application nodes.
- **heat_app_tier.yaml:** This template file launches the application tier
nodes. In addition to launching the instances, it installs Apache, PHP, MySQL
client, and finally WordPress.
- **heat_sql_tier.yaml:** This template file launches the database tier node.
It also creates a Cinder block device to store the database files, and the
required users and databases for the WordPress application.
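As one example of what the nested files contain, the block storage piece of heat_sql_tier.yaml boils down to a volume plus an attachment, roughly as follows; the size and resource names are illustrative::

    resources:
      db_volume:
        type: OS::Cinder::Volume
        properties:
          size: 10  # GB
          description: Database file store

      db_volume_attachment:
        type: OS::Cinder::VolumeAttachment
        properties:
          volume_id: { get_resource: db_volume }
          instance_uuid: { get_resource: db_server }  # the database server resource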
Scope and Assumptions
---------------------
The Heat templates provided and described above assume that the three-tier web
application workload is deployed in a single-region, single-zone OpenStack
environment. If the application requires a higher SLA commitment, it is
recommended to deploy OpenStack in a multi-zone, multi-region environment. Such
a deployment is out of scope for this reference architecture and will be
described in a separate one.
As mentioned, Trove is not used in this implementation at this time. Trove is
the OpenStack Database-as-a-Service (DBaaS) component that provisions
relational and non-relational database engines.
An update to this reference architecture to include Trove is under
consideration.
Another OpenStack service suitable for the three-tier architecture is Neutron
Firewall-as-a-Service (FWaaS). FWaaS
operates at the perimeter by filtering traffic at the Neutron router. This
distinguishes it from security groups, which operate at the instance level.
FWaaS is also under consideration for a future update.
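In Heat terms, such a perimeter firewall would be expressed with the FWaaS v1 resource types, roughly as follows; the rule values are illustrative::

    resources:
      allow_http:
        type: OS::Neutron::FirewallRule
        properties:
          protocol: tcp
          destination_port: 80
          action: allow

      perimeter_policy:
        type: OS::Neutron::FirewallPolicy
        properties:
          firewall_rules:
            - { get_resource: allow_http }

      perimeter_firewall:
        type: OS::Neutron::Firewall
        properties:
          firewall_policy_id: { get_resource: perimeter_policy }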
Summary
-------
There are many strategies for deploying a three-tier web application, and many
choices to make in each OpenStack deployment. This reference architecture is meant
to serve as a general guide to be used to deploy the LAMP stack on an OpenStack
cloud using core and selected optional services. The Heat orchestration service
is used; however, popular third-party deployment products such as Chef, Puppet,
or Ansible can also be used. Other OpenStack services can be selected to
enhance this basic architecture with additional capabilities.
This document shows how easily and quickly a three-tier LAMP and WordPress
environment can be implemented using just a few OpenStack services. We offer
the Heat templates to help you get started and become familiar with OpenStack.
The following resources are recommended for exploring in more depth the overall
OpenStack cloud architecture and the components and services covered in this
reference architecture. The vibrant, global OpenStack community and ecosystem
can be an invaluable source of experience and advice. Visit openstack.org to
get started, or follow the links below to begin designing your OpenStack-based
web applications.
.. list-table::
:widths: 25 50
:header-rows: 1
* - Resource
- Overview
* - `OpenStack Marketplace`_
- One-stop resource to the skilled global ecosystem for distributions,
drivers, training, services and more.
* - `OpenStack Architecture Design Guide`_
- Guidelines for designing an OpenStack cloud architecture for common use
cases. With examples.
* - `OpenStack Networking Guide`_
- How to deploy and manage OpenStack Networking (Neutron).
* - `OpenStack Security Guide`_
- Best practices and conceptual information about securing an OpenStack
cloud.
* - `OpenStack High Availability Guide`_
- Installing and configuring OpenStack for high availability.
* - `Complete OpenStack documentation`_
- Index to all documentation, for every role and step in planning and
operating an OpenStack cloud.
* - `Community Application Catalog`_
- Download this LAMP/WordPress sample application and other free OpenStack
applications here.
* - `Welcome to the community!`_
- Join mailing lists and IRC chat channels, find jobs and events, access
the source code and more.
* - `User groups`_
- Find a user group near you, attend meetups and hackathons—or organize
one!
* - `OpenStack events`_
- Global schedule of events including the popular OpenStack Summits and
regional OpenStack Days.
.. _OpenStack Marketplace: http://www.openstack.org/marketplace/
.. _OpenStack Architecture Design Guide: http://docs.openstack.org/arch-design/
.. _OpenStack Networking Guide: http://docs.openstack.org/mitaka/networking-guide/
.. _OpenStack Security Guide: http://docs.openstack.org/security-guide/
.. _OpenStack High Availability Guide: http://docs.openstack.org/ha-guide/
.. _Complete OpenStack Documentation: http://docs.openstack.org/
.. _Community Application Catalog: http://apps.openstack.org/
.. _Welcome to the community!: http://www.openstack.org/community/
.. _User groups: https://groups.openstack.org/
.. _OpenStack events: http://www.openstack.org/community/events/


@ -1,275 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Glance Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'oslosphinx',
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'workload-ref-archs Release Notes'
copyright = u'2016, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'GlanceReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'GlanceReleaseNotes.tex', u'Glance Release Notes Documentation',
u'Glance Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'glancereleasenotes', u'Glance Release Notes Documentation',
[u'Glance Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'GlanceReleaseNotes', u'Glance Release Notes Documentation',
u'Glance Developers', 'GlanceReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']


@ -1,8 +0,0 @@
============================================
workload-ref-archs Release Notes
============================================
.. toctree::
:maxdepth: 1
unreleased


@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================
.. release-notes::


@ -1,5 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.8 # Apache-2.0


@ -1,51 +0,0 @@
[metadata]
name = workload-ref-archs
summary = OpenStack Enterprise WG workload reference architectures.
description-file =
README.rst
author = OpenStack Enterprise Working Group
author-email = user-committee@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
[files]
packages =
workload-ref-archs
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = workload-ref-archs/locale
domain = workload-ref-archs
[update_catalog]
domain = workload-ref-archs
output_dir = workload-ref-archs/locale
input_file = workload-ref-archs/locale/workload-ref-archs.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = workload-ref-archs/locale/workload-ref-archs.pot
[build_releasenotes]
all_files = 1
build-dir = releasenotes/build
source-dir = releasenotes/source


@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)


@ -1,17 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.8 # Apache-2.0
doc8 # Apache-2.0
Pygments
sphinx>=1.6.2 # BSD
openstackdocstheme>=1.16.0 # Apache-2.0
nwdiag
sphinxcontrib-nwdiag
# For translations
Babel>=2.3.4,!=2.4.0 # BSD

tox.ini

@ -1,46 +0,0 @@
[tox]
minversion = 2.0
envlist = py37,py36,py27,pypy,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
PYTHONWARNINGS=default::DeprecationWarning
deps = -r{toxinidir}/test-requirements.txt
commands = python setup.py test --slowest --testr-args='{posargs}'
[testenv:pep8]
basepython = python3
commands = flake8 {posargs}
[testenv:venv]
basepython = python3
commands = {posargs}
[testenv:cover]
basepython = python3
commands = python setup.py test --coverage --testr-args='{posargs}'
[testenv:docs]
basepython = python3
commands = python setup.py build_sphinx
[testenv:releasenotes]
basepython = python3
commands =
sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:debug]
basepython = python3
commands = oslo_debug_helper {posargs}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build


@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo(
'workload-ref-archs').version_string()


@ -1,100 +0,0 @@
eCommerce Sample Heat Template
==============================
These Heat templates deploy OpenCart on a three-tier LAMP architecture. There
are two versions of the primary template: one creates a static environment and
does not require Ceilometer; the other provides autoscaling of the web and
services layers based on CPU load, and therefore does require Ceilometer.
**The OpenCart 3-Tier LAMP Architecture Sample**
======== ====================== ====================================
Layer Function Details
======== ====================== ====================================
Web Reverse Proxy Server Apache + mod_proxy
Services Application Server Apache, PHP, MySQL Client, OpenCart
Data Database Server MySQL
======== ====================== ====================================
-----------------
Heat File Details
-----------------
The template uses a nested structure, with two different primary yaml files,
both of which utilize the same four nested files. The templates were tested
using the Newton release of OpenStack, with Ubuntu Trusty and Xenial, and
CentOS 7.
**eCommerceStatic.yaml:** Run this yaml file for a static environment. It
creates two load-balanced web servers, two load-balanced application servers,
and a single database server that uses Cinder block storage for the database
files.
REQUIRED PARAMETERS:
* ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
* db_instance_flavor, app_instance_flavor, web_instance_flavor,
db_server_name, db_name, db_username, db_password, db_root_password,
app_server_name, web_server_name, admin_username, admin_password,
admin_email, dns_nameserver
**eCommerceAutoScaling.yaml:** Run this yaml file for a dynamic autoscaling
environment. It sets up Heat autoscaling groups for the web and services
layers.
REQUIRED PARAMETERS:
* ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
* db_instance_flavor, app_instance_flavor, web_instance_flavor,
db_server_name, db_name, db_username, db_password, db_root_password,
app_server_name, web_server_name, admin_username, admin_password,
admin_email, dns_nameserver
The following four yaml files are called by the primary files above, and are
by default expected to be in the ``nested`` subdirectory:
**setup_network.yaml:**
This file creates three separate private networks, one for each tier. In
addition, it creates two load balancers (using Neutron LBaaS v2): one with
a public IP that connects the web private network to the public network,
and one with a private IP that connects the web network to the services
network. The template also creates a router connecting the services
network to the database network. In addition to the networks and routers,
the template creates three security groups, one for each of the tiers.
**launch_web_layer.yaml:**
This template file launches the web layer nodes. In addition to launching
instances, it installs and configures Apache and mod_proxy, which is
used to redirect traffic to the application nodes.
**launch_services_layer.yaml:**
This template file launches the services layer nodes. In addition to
launching the instances, it installs Apache, PHP, MySQL client, and
OpenCart.
**launch_sql_layer.yaml:**
This template file launches the database layer node and installs MySQL.
In addition, it creates a Cinder block device to store the database files.
The template also creates the required users and databases for the OpenCart
application.
-------------------------------
Running the heat template files
-------------------------------
First, source your credential file. You can download a copy of it from
Horizon under Project > Compute > Access & Security > API Access.
**Example to setup the static environment**::
openstack stack create --template eCommerceStatic.yaml --parameter ssh_key_name=mykey --parameter image_id=ubuntu --parameter dns_nameserver="8.8.8.8,8.8.4.4" --parameter public_network_id=external_network OpenCart
**Example to setup the autoscaling environment**::
openstack stack create --template eCommerceAutoScaling.yaml --parameter ssh_key_name=mykey --parameter image_id=centos --parameter dns_nameserver="8.8.8.8,8.8.4.4" --parameter public_network_id=external_network OpenCart
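Rather than passing every value with ``--parameter``, the parameters can also be collected in a Heat environment file and supplied with ``-e``; the file name and values below are illustrative::

    # opencart-env.yaml
    parameters:
      ssh_key_name: mykey
      image_id: ubuntu
      public_network_id: external_network
      dns_nameserver: "8.8.8.8,8.8.4.4"

**Example using an environment file**::

    openstack stack create --template eCommerceStatic.yaml -e opencart-env.yaml OpenCart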


@ -1,383 +0,0 @@
heat_template_version: 2016-10-14
#The value of heat_template_version tells Heat not only the format of the template but also features that will be validated and supported
#2016-10-14 represents the Newton release
description: >
This is the main Heat template for the eCommerce Workload Architecture created by the
Enterprise Working Group. This template contains the autoscaling code and calls nested
templates which actually do the majority of the work. Ceilometer is required in order to
run this template. This file calls the following yaml files in a ./nested subdirectory
setup_network.yaml sets up the security groups and networks for Web, App, and Database
launch_services_layer.yaml starts up application servers and does on-the-fly builds
launch_web_layer.yaml starts up web servers and does on-the-fly builds
launch_sql_layer.yaml starts up mysql server and does on-the-fly builds.
NOTE: This serves as a guide to new users and is not meant for production deployment.
REQUIRED YAML FILES:
setup_network.yaml, launch_services_layer.yaml, launch_sql_layer.yaml, launch_web_layer.yaml
REQUIRED PARAMETERS:
ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, db_name
db_username, db_password, db_root_password, app_server_name, web_server_name, admin_username
admin_password, admin_email, dns_nameserver
#Created by: Craig Sterrett 9/27/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
#####################################################
#The parameters section allows for specifying input parameters that have to be provided when instantiating the template
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant. This could be modified to use different
images for each tier.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
constraints:
- custom_constraint: neutron.network
description: Must be a valid network on your cloud
db_instance_flavor:
type: string
label: Database server instance flavor
description: The flavor type to use for db server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
app_instance_flavor:
type: string
label: Application server instance flavor
description: The flavor type to use for app servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
web_instance_flavor:
type: string
label: Web server instance flavor
description: The flavor type to use for web servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
db_server_name:
type: string
label: Server Name
description: Name of the database servers
default: db_server
db_name:
type: string
label: Database Name
description: Name of the OpenCart database
default: opencart
db_username:
type: string
label: Database username
description: Name of the OpenCart database user
default: opencartuser
db_password:
type: string
label: Database username password
description: db_username password
default: opencart
db_root_password:
type: string
label: Database root user password
description: db root user password
default: opencart
app_server_name:
type: string
label: Server Name
description: Name of the application servers
default: app_server
web_server_name:
type: string
label: Server Name
description: Name of the web servers
default: web_server
admin_username:
type: string
description: Username for OpenCart Admin page
default: admin
admin_password:
type: string
description: Password for OpenCart admin user
default: admin
admin_email:
type: string
description: email address for OpenCart Admin user
default: youremail@example.com
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver in list format
default: 8.8.8.8,8.8.4.4
#####################################################
#The resources section defines actual resources that make up a stack deployed from the HOT template (for instance compute instances, networks, storage volumes).
resources:
#################################
#Setup Networking and Security Group
#Call the setup_network.yaml file
network_setup:
type: nested/setup_network.yaml
properties:
public_network_id: { get_param: public_network_id }
dns_nameserver: { get_param: dns_nameserver }
#################################
##Kick off a Database server
launch_db_server:
type: nested/launch_sql_layer.yaml
properties:
ssh_key_name: { get_param: ssh_key_name }
server_name: { get_param: db_server_name }
instance_flavor: { get_param: db_instance_flavor }
image_id: { get_param: image_id }
private_network_id: {get_attr: [network_setup, db_private_network_id]}
security_group: {get_attr: [network_setup, db_security_group_id]}
db_name: { get_param: db_name}
db_username: {get_param: db_username}
db_password: {get_param: db_password}
db_root_password: {get_param: db_root_password}
#################################
#Autoscaling for the app servers
app_autoscale_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 2
min_size: 1
max_size: 5
resource:
type: nested/launch_services_layer.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: app_server_name
instance_flavor:
get_param: app_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, app_private_network_id]}
private_subnet_id: {get_attr: [network_setup, app_private_subnet_id]}
public_network_name: {get_param: public_network_id}
security_group: {get_attr: [network_setup, app_security_group_id]}
pool_name: {get_attr: [network_setup, app_lbaas_pool_name]}
db_server_ip: {get_attr: [launch_db_server, instance_ip]}
database_name: {get_param: db_name}
db_username: {get_param: db_username}
db_password: {get_param: db_password}
admin_username: {get_param: admin_username}
admin_password: {get_param: admin_password}
admin_email: {get_param: admin_email}
#Create a unique tag for Ceilometer to identify meters specific to the app nodes.
#Without a unique metadata tag, Ceilometer would group together all resources in the tenant.
metadata: {"metering.autoscale_group_name": "app_autoscale_group"}
#################################
app_scaleup_policy:
type: OS::Heat::ScalingPolicy
depends_on: [app_autoscale_group]
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: app_autoscale_group }
#cooldown prevents duplicate alarms while instances spin up. Set the value large
#enough to allow the instance to start up and begin taking requests.
#cooldown: 900
cooldown: 240
scaling_adjustment: 1
app_cpu_alarm_high:
type: OS::Ceilometer::Alarm
depends_on: [app_autoscale_group]
properties:
meter_name: cpu_util
statistic: avg
#period needs to be greater than the sampling rate in the pipeline.yaml file in /etc/ceilometer
period: 120
evaluation_periods: 1
#Alarms if CPU utilization for ALL app nodes averaged together exceeds 50%
threshold: 50
repeat_actions: true
alarm_actions:
- {get_attr: [app_scaleup_policy, alarm_url]}
#Collect data only on servers with the autoscale_group_name metadata set to app_autoscale_group
#Otherwise ceilometer would look at all servers in the project
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "app_autoscale_group"}
comparison_operator: gt
app_scaledown_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: app_autoscale_group }
#cooldown prevents duplicate alarms while instances shut down. Set the value large
#enough to allow the instance to shut down and things to stabilize.
cooldown: 240
scaling_adjustment: -1
app_cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
#period needs to be greater than the sampling rate in the pipeline.yaml file in /etc/ceilometer
period: 120
evaluation_periods: 1
#Alarms if CPU utilization for ALL app nodes averaged together drops below 20%
threshold: 20
repeat_actions: true
alarm_actions:
- {get_attr: [app_scaledown_policy, alarm_url]}
#Collect data only on servers with the autoscale_group_name metadata set to app_autoscale_group
#Otherwise ceilometer would look at all servers in the project
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "app_autoscale_group"}
comparison_operator: lt
#################################
#Autoscaling for the web servers
web_autoscale_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 2
min_size: 1
max_size: 5
resource:
type: nested/launch_web_layer.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: web_server_name
instance_flavor:
get_param: web_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, web_private_network_id]}
private_subnet_id: {get_attr: [network_setup, web_private_subnet_id]}
app_lbaas_vip: {get_attr: [network_setup, app_lbaas_IP]}
security_group: {get_attr: [network_setup, web_security_group_id]}
pool_name: {get_attr: [network_setup, web_lbaas_pool_name]}
metadata: {"metering.autoscale_group_name": "web_autoscale_group"}
#################################
web_scaleup_policy:
type: OS::Heat::ScalingPolicy
depends_on: [web_autoscale_group]
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: web_autoscale_group }
cooldown: 240
scaling_adjustment: 1
web_cpu_alarm_high:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
period: 120
evaluation_periods: 1
threshold: 50
repeat_actions: true
alarm_actions:
- {get_attr: [web_scaleup_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "web_autoscale_group"}
comparison_operator: gt
web_scaledown_policy:
type: OS::Heat::ScalingPolicy
depends_on: [web_autoscale_group]
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: web_autoscale_group }
cooldown: 240
scaling_adjustment: -1
web_cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
period: 120
evaluation_periods: 1
threshold: 20
repeat_actions: true
alarm_actions:
- {get_attr: [web_scaledown_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "web_autoscale_group"}
comparison_operator: lt
#####################################################
#The outputs section defines output parameters that should be available to the user after a stack has been created.
outputs:
web_lbaas_ip:
description: Floating IP address of the OpenCart web application
value:
list_join: ['', ['The OpenCart Web page is available here http://', get_attr: [network_setup, web_lbaas_IP]]]
web_scale_up_url:
description: >
This URL is the webhook to scale up the WEB autoscaling group. You
can invoke the scale-up operation by doing an HTTP POST to this
URL; no body or extra headers are needed, but the request must be
authenticated. Example: source openrc; curl -X POST "<url>"
value: {get_attr: [web_scaleup_policy, alarm_url]}
web_scale_down_url:
description: >
This URL is the webhook to scale down the WEB autoscaling group.
value: {get_attr: [web_scaledown_policy, alarm_url]}
app_scale_up_url:
description: >
This URL is the webhook to scale up the application autoscaling group. You
can invoke the scale-up operation by doing an HTTP POST to this
URL; no body or extra headers are needed.
value: {get_attr: [app_scaleup_policy, alarm_url]}
app_scale_down_url:
description: >
This URL is the webhook to scale down the application autoscaling group.
value: {get_attr: [app_scaledown_policy, alarm_url]}
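The scale-up and scale-down decisions in this template all follow the same pattern: Ceilometer averages the cpu_util samples collected over `period` seconds and compares the result against `threshold` using the configured `comparison_operator`. The decision can be sketched as follows (a hypothetical helper for illustration, not the Ceilometer API):

```python
import operator

# Map the template's comparison_operator values to Python operators.
OPS = {"gt": operator.gt, "lt": operator.lt}

def alarm_fires(samples, threshold, comparison_operator):
    """Average the cpu_util samples for one period and compare against
    the threshold, mirroring statistic: avg with evaluation_periods: 1."""
    if not samples:
        return False
    average = sum(samples) / len(samples)
    return OPS[comparison_operator](average, threshold)

# app_cpu_alarm_high: threshold 50, gt -- fires when the app tier is busy.
print(alarm_fires([70, 65, 40], 50, "gt"))   # average ~58.3, so True
# app_cpu_alarm_low: threshold 20, lt -- fires when the app tier is idle.
print(alarm_fires([10, 15, 20], 20, "lt"))   # average 15, so True
```

When an alarm fires, Ceilometer POSTs to the policy's alarm_url, which is the same webhook the outputs above expose for manual scaling.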


@ -1,253 +0,0 @@
heat_template_version: 2016-10-14
#The value of heat_template_version tells Heat not only the format of the template but also features that will be validated and supported
#2016-10-14 represents the Newton release
description: >
This is the main Heat template for the eCommerce Workload Architecture created by the
Enterprise Working Group. This version of the template does not include autoscaling,
and does not require ceilometer.
This template calls multiple nested templates which actually do the
majority of the work. This file calls the following YAML files in a ./nested subdirectory:
setup_network.yaml sets up the security groups and networks for Web, App, and Database
launch_services_layer.yaml starts up application servers and does on-the-fly builds
launch_web_layer.yaml starts up web servers and does on-the-fly builds
launch_sql_layer.yaml starts up the MySQL server and does on-the-fly builds.
NOTE: This serves as a guide to new users and is not meant for production deployment.
REQUIRED YAML FILES:
setup_network.yaml, launch_services_layer.yaml, launch_sql_layer.yaml, launch_web_layer.yaml
REQUIRED PARAMETERS:
ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, db_name
db_username, db_password, db_root_password, app_server_name, web_server_name, admin_username
admin_password, admin_email, dns_nameserver
#Created by: Craig Sterrett 9/27/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
######################################
#The parameters section allows for specifying input parameters that have to be provided when instantiating the template
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant. This could be modified to use different
images for each tier.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
constraints:
- custom_constraint: neutron.network
description: Must be a valid network on your cloud
db_instance_flavor:
type: string
label: Database server instance flavor
description: The flavor type to use for db server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
app_instance_flavor:
type: string
label: Application server instance flavor
description: The flavor type to use for app servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
web_instance_flavor:
type: string
label: Web server instance flavor
description: The flavor type to use for web servers.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
db_server_name:
type: string
label: Server Name
description: Name of the database servers
default: db_server
db_name:
type: string
label: Database Name
description: Name of the OpenCart database
default: opencart
db_username:
type: string
label: Database username
description: Name of the OpenCart database user
default: opencartuser
db_password:
type: string
label: Database username password
description: db_username password
default: opencart
db_root_password:
type: string
label: Database root user password
description: db root user password
default: opencart
app_server_name:
type: string
label: Server Name
description: Name of the application servers
default: app_server
web_server_name:
type: string
label: Server Name
description: Name of the web servers
default: web_server
admin_username:
type: string
description: Username for OpenCart Admin page
default: admin
admin_password:
type: string
description: Password for OpenCart admin user
default: admin
admin_email:
type: string
description: email address for OpenCart Admin user
default: youremail@example.com
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver in list format
default: 8.8.8.8,8.8.4.4
######################################
#The resources section defines actual resources that make up a stack deployed from
#the HOT template (for instance compute instances, networks, storage volumes).
resources:
####################
#Setup Networking and Security Group
#Call the setup_network.yaml file
network_setup:
type: nested/setup_network.yaml
properties:
public_network_id: { get_param: public_network_id }
dns_nameserver: { get_param: dns_nameserver }
####################
##Kick off a Database server
launch_db_server:
type: nested/launch_sql_layer.yaml
properties:
ssh_key_name: { get_param: ssh_key_name }
server_name: { get_param: db_server_name }
instance_flavor: { get_param: db_instance_flavor }
image_id: { get_param: image_id }
private_network_id: {get_attr: [network_setup, db_private_network_id]}
security_group: {get_attr: [network_setup, db_security_group_id]}
db_name: { get_param: db_name}
db_username: {get_param: db_username}
db_password: {get_param: db_password}
db_root_password: {get_param: db_root_password}
####################
##Kick off two application servers
#Utilizing Heat resourcegroup to kick off multiple copies
app_server_resource_group:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: nested/launch_services_layer.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: app_server_name
instance_flavor:
get_param: app_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, app_private_network_id]}
private_subnet_id: {get_attr: [network_setup, app_private_subnet_id]}
public_network_name: {get_param: public_network_id}
security_group: {get_attr: [network_setup, app_security_group_id]}
pool_name: {get_attr: [network_setup, app_lbaas_pool_name]}
db_server_ip: {get_attr: [launch_db_server, instance_ip]}
database_name: {get_param: db_name}
db_username: {get_param: db_username}
db_password: {get_param: db_password}
admin_username: {get_param: admin_username}
admin_password: {get_param: admin_password}
admin_email: {get_param: admin_email}
#Just passing something for metadata, it's not used in this script but is used in
#the autoscaling script
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
####################
##Kick off two web servers
#Utilizing Heat resourcegroup to kick off multiple copies
web_server_resource_group:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: nested/launch_web_layer.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: web_server_name
instance_flavor:
get_param: web_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, web_private_network_id]}
private_subnet_id: {get_attr: [network_setup, web_private_subnet_id]}
app_lbaas_vip: {get_attr: [network_setup, app_lbaas_IP]}
security_group: {get_attr: [network_setup, web_security_group_id]}
pool_name: {get_attr: [network_setup, web_lbaas_pool_name]}
#Just passing something for metadata, it's not used in this script but is used in
#the autoscaling script
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
######################################
#The outputs section defines output parameters that should be available to the user after a stack has been created.
outputs:
web_lbaas_ip:
description: Floating IP address of the OpenCart web application
value:
list_join: ['', ['The OpenCart Web page is available here http://', get_attr: [network_setup, web_lbaas_IP]]]
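Both tiers in this non-autoscaling variant rely on OS::Heat::ResourceGroup to stamp out `count` identical copies of a nested template. Conceptually the group duplicates the resource definition once per index, which can be sketched like this (a simplified illustration, not Heat's actual implementation):

```python
import copy

def expand_resource_group(resource_def, count):
    """Return one independent copy of resource_def per member, keyed
    "0".."count-1" the way ResourceGroup names its nested stacks."""
    return {str(i): copy.deepcopy(resource_def) for i in range(count)}

app_def = {
    "type": "nested/launch_services_layer.yaml",
    "properties": {"server_name": "app_server"},
}
members = expand_resource_group(app_def, 2)  # count: 2, as in the template
print(sorted(members))  # ['0', '1']
```

Because each member is a deep copy, Heat can later replace or update one nested stack without affecting its siblings.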


@ -1,229 +0,0 @@
heat_template_version: 2016-10-14
description: >
This is a nested Heat template used by the E-Commerce Architecture Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture running OpenCart. This template file launches the application
tier nodes, and installs Apache, PHP, MySQL client, and finally OpenCart.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 9/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
#####################################################
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
private_network_id:
type: string
default: Services_Layer_private_network
description: The private Application network that will be utilized for all App servers
constraints:
- custom_constraint: neutron.network
description: Must be a valid private network on your cloud
private_subnet_id:
type: string
description: Private subnet of the LBaaS Pool
default: private_subnet
constraints:
- custom_constraint: neutron.subnet
description: Must be a valid private subnet on your cloud
public_network_name:
type: string
description: Public network name where we can get a floating IP from
constraints:
- custom_constraint: neutron.network
description: Must be a valid private network on your cloud
security_group:
type: string
default: Services_Layer_SG
description: The Application security group that will be utilized for all App servers
pool_name:
type: string
description: LBaaS Pool to join
constraints:
- custom_constraint: neutron.lbaas.pool
description: Must be a LBaaS pool on your cloud
db_server_ip:
type: string
description: Database Server IP
database_name:
type: string
description: Name of OpenCart Database
db_username:
type: string
description: Opencart database username
db_password:
type: string
description: Opencart database password (for db_username above)
admin_username:
type: string
description: Username for OpenCart Admin page
admin_password:
type: string
description: Password for OpenCart admin user
admin_email:
type: string
description: email address for OpenCart Admin user
metadata:
type: json
#####################################################
resources:
app_server:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
metadata: { get_param: metadata }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
params:
$db_server_ip: { get_param: db_server_ip }
$db_name: {get_param: database_name}
$db_username: {get_param: db_username}
$db_password: {get_param: db_password}
$admin_username: {get_param: admin_username}
$admin_password: {get_param: admin_password}
$admin_email: {get_param: admin_email}
$floating_ip: {get_attr: [ app_floating_ip, floating_ip_address ] }
template: |
#!/bin/bash -v
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
if lsb_release -a | grep xenial
then
apt-get -y install apache2 php php-mcrypt php-curl libapache2-mod-php php-mysql php-gd mysql-client
apt-get -y install policycoreutils
ufw app info "Apache Full"
fi
if lsb_release -a | grep -i trusty
then
#Install PHP5, and mysql
apt-get -y install apache2 php5 php5-mcrypt php5-curl libapache2-mod-php5 php5-mysql php5-gd mysql-client
fi
apt-get -y install unzip
elif which yum &> /dev/null
then
yum update -y
#Install PHP5, and mysql
setenforce 0
yum install -y httpd
systemctl start httpd
systemctl enable httpd
yum install -y epel-release
yum -y install php php-mysql php-gd php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-snmp php-soap php-mcrypt curl zlib
yum install -y wget
yum install -y unzip
fi
# install OpenCart
# download opencart
wget "https://www.opencart.com/index.php?route=cms/download/download&download_id=47" -O opencart.zip
unzip opencart.zip -d ./opencart
# setup OpenCart
mv -v ./opencart/upload/* /var/www/html
# rename OpenCart config files to config.php
cp /var/www/html/config-dist.php /var/www/html/config.php
cp /var/www/html/admin/config-dist.php /var/www/html/admin/config.php
rm /var/www/html/index.html
# give apache user ownership of the files
if apt-get -v &> /dev/null
then
chown -R www-data:www-data /var/www
mv -i /etc/php5/conf.d/mcrypt.ini /etc/php5/mods-available/
php5enmod mcrypt
service apache2 restart
elif which yum &> /dev/null
then
chown -R apache:apache /var/www/
chmod -R g+w /var/www/html/
#Allow remote database connection
setsebool -P httpd_can_network_connect=1
systemctl restart httpd.service
fi
#Configure OpenCart
php /var/www/html/install/cli_install.php install --db_hostname $db_server_ip --db_username $db_username --db_password $db_password --db_database $db_name --db_driver mysqli --db_port 3306 --username $admin_username --password $admin_password --email $admin_email --http_server http://$floating_ip/
rm -r /var/www/html/install
Pool_Member:
type: OS::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: pool_name}
address: {get_attr: [app_server, first_address]}
protocol_port: 80
subnet: {get_param: private_subnet_id}
app_floating_ip:
type: OS::Neutron::FloatingIP
properties:
floating_network_id: { get_param: public_network_name }
associate_app_floating_ip:
type: OS::Nova::FloatingIPAssociation
depends_on: [app_floating_ip,app_server]
properties:
floating_ip: { get_resource: app_floating_ip }
server_id: { get_resource: app_server }
#####################################################
outputs:
app_private_ip:
description: Private IP address of the Web node
value: { get_attr: [app_server, first_address] }
lb_member:
description: LoadBalancer member details.
value: { get_attr: [Pool_Member, show] }
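The user_data above relies on Heat's str_replace intrinsic to inject the database and admin parameters into the install script before cloud-init runs it. Its effect can be sketched as plain string substitution (a simplified stand-in for the intrinsic, with keys replaced longest-first so overlapping placeholder names cannot clobber each other; the parameter values are example values for illustration):

```python
def str_replace(template, params):
    """Substitute every placeholder with its value, longest keys first,
    roughly what Heat's str_replace does to the user_data script."""
    for key in sorted(params, key=len, reverse=True):
        template = template.replace(key, str(params[key]))
    return template

cmd = ("php cli_install.php install --db_hostname $db_server_ip "
       "--db_username $db_username --db_password $db_password")
print(str_replace(cmd, {
    "$db_server_ip": "192.168.101.5",   # example values only
    "$db_username": "opencartuser",
    "$db_password": "opencart",
}))
```

The same mechanism fills in $floating_ip from the app_floating_ip resource, which is why the OpenCart CLI installer can be handed its public URL at boot time.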


@ -1,201 +0,0 @@
heat_template_version: 2016-10-14
description: >
This is a nested Heat template used by the eCommerce Architecture Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture running OpenCart. This template file launches the database
tier node, creates a cinder block device to store the database files and creates
the required users and databases for the OpenCart application.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 9/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
#####################################################
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
default: DB_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
private_network_id:
type: string
default: DB_Tier_private_network
description: The private database network that will be utilized for all DB servers
constraints:
- custom_constraint: neutron.network
description: Must be a valid private network on your cloud
security_group:
type: string
default: Workload_DB_SG
description: The database security group that will be utilized for all DB servers
db_name:
type: string
description: MYSQL database name
db_username:
type: string
description: OpenCart database username
db_password:
type: string
description: OpenCart database password
db_root_password:
type: string
description: Root password for MySQL
db_volume_size:
type: string
description: Database cinder volume size (in GB) for database files
default: 2
hidden: true
#####################################################
resources:
#Set up a Cinder volume to store the database files
db_files_volume:
type: OS::Cinder::Volume
properties:
size: { get_param: db_volume_size }
name: DB_Files
db_volume_attachment:
type: OS::Cinder::VolumeAttachment
depends_on: [db_files_volume,MYSQL_instance]
properties:
volume_id: { get_resource: db_files_volume }
instance_uuid: { get_resource: MYSQL_instance }
#Install MySQL, set up the OpenCart database, and set usernames and passwords
MYSQL_instance:
type: OS::Nova::Server
depends_on: [db_files_volume]
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash -v
#make mount point for cinder volume and prepare volume
mkdir /mnt/db_files
chown mysql:mysql /mnt/db_files
volume_path="/dev/disk/by-id/virtio-$(echo volume_id | cut -c -20)"
echo ${volume_path}
mkfs.ext4 ${volume_path}
echo "${volume_path} /mnt/db_files ext4 defaults 1 2" >> /etc/fstab
mount /mnt/db_files
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
#Next line stops mysql install from popping up request for root password
export DEBIAN_FRONTEND=noninteractive
apt-get install -q -y --force-yes mariadb-server
touch /var/log/mariadb/mariadb.log
chown mysql:mysql /var/log/mariadb/mariadb.log
#Ubuntu mysql install blocks remote access by default
sed -i 's/bind-address/#bind-address/' /etc/mysql/my.cnf
service mysql stop
#Move the database to the cinder device
mv -f /var/lib/mysql /mnt/db_files/
#edit data file location in the mysql config file
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/mysql/my.cnf
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/mysql/mariadb.conf.d/50-server.cnf
sed -i 's/127.0.0.1/0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
service mysql start
elif which yum &> /dev/null
then
yum update -y
setenforce 0
yum -y install mariadb-server mariadb
systemctl start mariadb
systemctl stop mariadb
chown mysql:mysql /mnt/db_files
touch /var/log/mariadb/mariadb.log
chown mysql:mysql /var/log/mariadb/mariadb.log
#Move the database to the cinder device
mv -f /var/lib/mysql /mnt/db_files/
#edit data file location in the mysql config file
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/my.cnf
#need to modify the socket info for the clients
echo "[client]" >> /etc/my.cnf
echo "socket=/mnt/db_files/mysql/mysql.sock" >> /etc/my.cnf
systemctl start mariadb
systemctl enable mariadb
fi
# Setup MySQL root password and create a user and add remote privs to app subnet
mysqladmin -u root password db_rootpassword
# create OpenCart database
cat << EOF | mysql -u root --password=db_rootpassword
CREATE DATABASE db_name;
CREATE USER 'db_user'@'localhost';
SET PASSWORD FOR 'db_user'@'localhost'=PASSWORD("db_password");
GRANT ALL PRIVILEGES ON db_name.* TO 'db_user'@'localhost' IDENTIFIED BY 'db_password';
CREATE USER 'db_user'@'%';
SET PASSWORD FOR 'db_user'@'%'=PASSWORD("db_password");
GRANT ALL PRIVILEGES ON db_name.* TO 'db_user'@'%' IDENTIFIED BY 'db_password';
FLUSH PRIVILEGES;
EOF
params:
db_rootpassword: { get_param: db_root_password }
db_name: { get_param: db_name }
db_user: { get_param: db_username }
db_password: { get_param: db_password }
volume_id: {get_resource: db_files_volume }
#####################################################
outputs:
completion:
description: >
MySQL setup is complete; the database name and admin account are
value:
str_replace:
template: >
Database Name=$dbName, Database Admin Acct=$dbAdmin
params:
$dbName: { get_param: db_name }
$dbAdmin: { get_param: db_username }
instance_ip:
description: IP address of the deployed compute instance
value: { get_attr: [MYSQL_instance, first_address] }
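The reason the boot script above pipes volume_id through `cut -c -20` is that udev exposes virtio disks under /dev/disk/by-id using only the first 20 characters of the Cinder volume UUID. A minimal sketch of the resulting device path (the UUID below is hypothetical, for illustration only):

```python
def virtio_by_id_path(volume_id):
    """Build the /dev/disk/by-id path udev creates for a virtio disk,
    which truncates the volume UUID to its first 20 characters."""
    return "/dev/disk/by-id/virtio-" + volume_id[:20]

# Hypothetical Cinder volume UUID, for illustration only.
path = virtio_by_id_path("8705a04c-4d2c-43a7-9a7a-5b3e58b8f2a1")
print(path)  # /dev/disk/by-id/virtio-8705a04c-4d2c-43a7-9
```

Using the by-id path rather than /dev/vdb keeps the fstab entry stable even if the device ordering changes across reboots.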


@ -1,167 +0,0 @@
heat_template_version: 2016-10-14
description: >
This is a nested Heat template used by the E-Commerce Architecture Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture running OpenCart. This template installs and configures
Apache with mod_proxy, which is used to redirect traffic to the application nodes.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/3/2017 to support LBaaS V2 and Newton
#####################################################
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
default: cloudkey
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
default: Web_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
default: m1.small
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
private_network_id:
type: string
default: Web_Tier_private_network
description: The private Web network that will be utilized for all web servers
constraints:
- custom_constraint: neutron.network
description: Must be a valid private network on your cloud
private_subnet_id:
type: string
description: Private subnet of the LBaaS Pool
default: private_subnet
constraints:
- custom_constraint: neutron.subnet
description: Must be a valid private subnet on your cloud
security_group:
type: string
default: Workload_Web_SG
description: The Web security group that will be utilized for all web servers
pool_name:
type: string
description: LBaaS Pool to join
constraints:
- custom_constraint: neutron.lbaas.pool
description: Must be a LBaaS pool on your cloud
app_lbaas_vip:
type: string
description: Application LBaaS virtual IP
metadata:
type: json
#####################################################
resources:
web_server:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
metadata: { get_param: metadata }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
params:
$app_lbaas_vip: { get_param: app_lbaas_vip }
template: |
#!/bin/bash -v
#CentOS has a requiretty setting in sudoers that prevents scripts from using sudo; comment it out
sed -i '/Defaults \+requiretty/s/^/#/' /etc/sudoers
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
#Install Apache
apt-get -y --force-yes install apache2
apt-get install -y libapache2-mod-proxy-html libxml2-dev
apt-get install -y build-essential
a2enmod proxy
a2enmod proxy_http
a2enmod rewrite
a2enmod proxy_ajp
a2enmod deflate
a2enmod headers
a2enmod proxy_connect
a2enmod proxy_html
cat > /etc/apache2/sites-enabled/000-default.conf << EOL
<VirtualHost *:*>
ProxyPreserveHost On
ProxyPass / http://$app_lbaas_vip/ Keepalive=On
ProxyPassReverse / http://$app_lbaas_vip/
ServerName localhost
</VirtualHost>
EOL
echo `hostname -I` `hostname` >> /etc/hosts
/etc/init.d/apache2 restart
elif which yum &> /dev/null
then
yum update -y
#Install Apache
yum install -y httpd
yum install -y wget
cat >> /etc/httpd/conf/httpd.conf << EOL
<VirtualHost *:*>
ProxyPreserveHost On
ProxyPass / http://$app_lbaas_vip/
ProxyPassReverse / http://$app_lbaas_vip/
ServerName localhost
</VirtualHost>
EOL
service httpd restart
fi
Pool_Member:
type: OS::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: pool_name}
address: {get_attr: [web_server, first_address]}
protocol_port: 80
subnet: {get_param: private_subnet_id}
#####################################################
outputs:
web_private_ip:
description: Private IP address of the Web node
value: { get_attr: [web_server, first_address] }
lb_member:
description: LoadBalancer member details.
value: { get_attr: [Pool_Member, show] }
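Each web server registers itself with the shared LBaaS pool through its Pool_Member resource, and the load balancer then distributes incoming requests across the registered members. A toy model of that round-robin behavior (purely illustrative; real LBaaS V2 supports several algorithms plus health monitoring, and the IPs below are hypothetical):

```python
from itertools import cycle

class RoundRobinPool:
    """Toy stand-in for the LBaaS pool the Pool_Member resources join."""

    def __init__(self):
        self.members = []
        self._rotation = None

    def add_member(self, address, protocol_port=80):
        # Mirrors OS::Neutron::LBaaS::PoolMember: address + protocol_port.
        self.members.append((address, protocol_port))
        self._rotation = cycle(self.members)  # restart rotation on change

    def next_member(self):
        return next(self._rotation)

pool = RoundRobinPool()
pool.add_member("192.168.100.10")  # hypothetical web server IPs
pool.add_member("192.168.100.11")
print(pool.next_member())  # ('192.168.100.10', 80)
```

Because members join the pool as their stacks come up, scaling the web tier automatically widens the rotation without touching the load balancer definition.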


@ -1,360 +0,0 @@
heat_template_version: 2016-10-14
description: >
This is a nested Heat used by the eCommerce Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting the shopping cart application <???>. This template file creates 3 separate
private networks, two load balancers (LBaaS V2), and 3 security groups.
This serves as a guide to new users and is not meant for production deployment.
REQUIRED PARAMETERS:
public_network_id
#Created by: Craig Sterrett 3/23/2016
#Updated by: Craig Sterrett 1/5/2017 to support LBaaS V2 and Newton
#####################################################
parameters:
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
constraints:
- custom_constraint: neutron.network
description: Must be a valid public network on your cloud
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver
default: 8.8.8.8,8.8.4.4
#####################################################
resources:
#Create 3 private Networks, one for each Tier
#create a private network/subnet for the web servers
web_private_network:
type: OS::Neutron::Net
properties:
name: eCommerce_Web_network
web_private_network_subnet:
type: OS::Neutron::Subnet
#depends_on makes sure that a resource is created prior to a get_resource command
#otherwise you may have race conditions and a stack will work sometimes and not others
depends_on: [web_private_network]
properties:
cidr: 192.168.100.0/24
#start IP allocation at .10 to allow room for the hard coded port IPs
allocation_pools: [{ "start": 192.168.100.10, "end": 192.168.100.200 }]
#Need to define default gateway in order for LBaaS namespace to pick it up
#If you let neutron grant a default gateway IP, then the LBaaS namespace may
#not pick it up and you will have routing issues
gateway_ip: 192.168.100.4
#This routing information will get passed to the instances as they startup
#Provide the routes to the App network otherwise everything will try to go out the
#default gateway
host_routes: [{"destination": 192.168.101.0/24, "nexthop": 192.168.100.5}]
network: { get_resource: web_private_network }
name: eCommerce_Web_subnet
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router between the public/external network and the web network
public_router:
type: OS::Neutron::Router
properties:
name: PublicWebRouter
external_gateway_info:
network: { get_param: public_network_id }
# attach the web private network to the public router
public_router_interface:
type: OS::Neutron::RouterInterface
#Make sure the public router and web subnet have been created first
depends_on: [public_router, web_private_network_subnet]
properties:
router: { get_resource: public_router }
subnet: { get_resource: web_private_network_subnet }
#################################
# create a private network/subnet for the Application servers
App_private_network:
type: OS::Neutron::Net
properties:
name: eCommerce_Services_network
App_private_network_subnet:
type: OS::Neutron::Subnet
depends_on: [App_private_network]
properties:
cidr: 192.168.101.0/24
#start IP allocation at .10 to allow room for the hard coded gateway IPs
allocation_pools: [{ "start": 192.168.101.10, "end": 192.168.101.200 }]
#Need to define default gateway in order for LBaaS namespace to pick it up
#If you let neutron grant a default gateway IP, then the LBaaS namespace may
#not pick it up and you will have routing issues
gateway_ip: 192.168.101.5
#This routing information will get passed to the instances as they startup
#Provide both the routes to the DB network and to the web network
host_routes: [{"destination": 192.168.100.0/24, "nexthop": 192.168.101.5}, {"destination": 192.168.102.0/24, "nexthop": 192.168.101.6}, {"destination": 0.0.0.0/24, "nexthop": 192.168.100.4}]
network: { get_resource: App_private_network }
name: eCommerce_Services_subnet
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router linking App and Web network
App_router:
type: OS::Neutron::Router
properties:
name: "AppWebRouter"
external_gateway_info: {"network": { get_param: public_network_id }, "enable_snat": True}
# Create a port connecting the App_router to the App network
web_router_app_port:
type: OS::Neutron::Port
properties:
name: "App_Net_Port"
network: { get_resource: App_private_network }
#Assign the default gateway address
#The default gateway will get set as the default route in the LBaaS namespace
fixed_ips: [{"ip_address": 192.168.101.5}]
# Create a port connecting the App_router to the Web network
web_router_web_port:
type: OS::Neutron::Port
depends_on: [web_private_network]
properties:
name: "Web_Net_Port"
network: { get_resource: web_private_network }
fixed_ips: [{"ip_address": 192.168.100.5}]
App_router_interface1:
type: OS::Neutron::RouterInterface
depends_on: [App_router, web_router_app_port]
properties:
router: { get_resource: App_router }
port: { get_resource: web_router_app_port }
App_router_interface2:
type: OS::Neutron::RouterInterface
depends_on: [App_router, web_router_web_port]
properties:
router: { get_resource: App_router }
port: { get_resource: web_router_web_port }
#################################
#Create two Load Balancers one for the Web tier with a public IP and one for the App Tier
#with only private network access
#LBaaS V2 Load Balancer for Web Tier
Web_Tier_LoadBalancer:
type: OS::Neutron::LBaaS::LoadBalancer
depends_on: [web_private_network_subnet,public_router_interface]
properties:
name: Web_LoadBalancer
vip_subnet: {get_resource: web_private_network_subnet}
#LBaaS V2 Listener for Web server pool
Web_Tier_Listener:
type: OS::Neutron::LBaaS::Listener
depends_on: [Web_Tier_LoadBalancer]
properties:
protocol_port: 80
protocol: TCP
loadbalancer: {get_resource: Web_Tier_LoadBalancer }
#LBaaS V2 Pool for Web server pool
Web_Server_Pool:
type: OS::Neutron::LBaaS::Pool
depends_on: [Web_Tier_Listener]
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: Web_Tier_Listener }
protocol: TCP
# Floating_IP:
Web_Network_Floating_IP:
type: OS::Neutron::FloatingIP
depends_on: [Web_Tier_LoadBalancer,public_router_interface]
properties:
floating_network: {get_param: public_network_id}
port_id: { get_attr: [ Web_Tier_LoadBalancer, vip_port_id ] }
#################################
#LBaaS V2 Load Balancer for App Tier
App_Tier_LoadBalancer:
type: OS::Neutron::LBaaS::LoadBalancer
depends_on: [App_private_network_subnet]
properties:
name: App_LoadBalancer
vip_subnet: {get_resource: App_private_network_subnet}
#LBaaS V2 Listener for App server pool
App_Tier_Listener:
type: OS::Neutron::LBaaS::Listener
depends_on: [App_Tier_LoadBalancer]
properties:
protocol_port: 80
protocol: TCP
loadbalancer: {get_resource: App_Tier_LoadBalancer }
#LBaaS V2 Pool for App server pool
App_Server_Pool:
type: OS::Neutron::LBaaS::Pool
depends_on: [App_Tier_Listener]
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: App_Tier_Listener }
protocol: TCP
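Both pools above use the ROUND_ROBIN algorithm, which hands each new connection to the next member in order and wraps around. A quick sketch of that scheduling behaviour (member addresses are illustrative):

```python
from itertools import cycle

# ROUND_ROBIN as configured for the pools above: members are picked
# in order, wrapping back to the first. Addresses are illustrative.
members = ["192.168.101.11", "192.168.101.12", "192.168.101.13"]
scheduler = cycle(members)

picks = [next(scheduler) for _ in range(5)]
print(picks)  # the fifth pick has wrapped back past the start
```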
#################################
# create a private network/subnet for the Database servers
DB_private_network:
type: OS::Neutron::Net
properties:
name: eCommerce_Database_network
DB_private_network_subnet:
type: OS::Neutron::Subnet
depends_on: [DB_private_network]
properties:
cidr: 192.168.102.0/24
gateway_ip: 192.168.102.6
allocation_pools: [{ "start": 192.168.102.10, "end": 192.168.102.200 }]
host_routes: [{"destination": 192.168.101.0/24, "nexthop": 192.168.102.6}]
network: { get_resource: DB_private_network }
dns_nameservers: { get_param: dns_nameserver }
name: eCommerce_DB_subnet
enable_dhcp: true
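Each subnet in this template hand-picks a gateway IP and an allocation pool inside its CIDR, keeping the pool clear of the hard-coded port addresses so DHCP never hands out a router's IP. A small sketch checking those invariants for the database subnet with the stdlib `ipaddress` module:

```python
import ipaddress

# Values copied from DB_private_network_subnet above.
cidr = ipaddress.ip_network("192.168.102.0/24")
gateway = ipaddress.ip_address("192.168.102.6")
pool_start = ipaddress.ip_address("192.168.102.10")
pool_end = ipaddress.ip_address("192.168.102.200")

# The gateway must live inside the subnet but outside the allocation
# pool, so DHCP cannot assign the router's address to an instance.
assert gateway in cidr
assert pool_start in cidr and pool_end in cidr
assert not (pool_start <= gateway <= pool_end)
print("DB subnet layout is consistent")
```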
# Create a router linking Database and App network
DB_router:
type: OS::Neutron::Router
properties:
name: "AppDBRouter"
external_gateway_info: {"network": { get_param: public_network_id }, "enable_snat": True}
# Create a port connecting the db_router to the db network
db_router_db_port:
type: OS::Neutron::Port
depends_on: [DB_private_network]
properties:
network: { get_resource: DB_private_network }
name: "DB_Net_Port"
fixed_ips: [{"ip_address": 192.168.102.6}]
# Create a port connecting the db_router to the app network
db_router_app_port:
type: OS::Neutron::Port
depends_on: [App_private_network]
properties:
network: { get_resource: App_private_network }
name: "DB_Router_App_Port"
fixed_ips: [{"ip_address": 192.168.101.6}]
# Now lets add our ports to our router
db_router_interface1:
type: OS::Neutron::RouterInterface
depends_on: [DB_router,db_router_db_port]
properties:
router: { get_resource: DB_router }
port: { get_resource: db_router_db_port }
db_router_interface2:
type: OS::Neutron::RouterInterface
depends_on: [DB_router,db_router_app_port]
properties:
router: { get_resource: DB_router }
port: { get_resource: db_router_app_port }
#################################
#Create separate security groups for each Tier
# create a specific web security group that routes just web and ssh traffic
web_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: An application-specific security group that passes ports 22 and 80
name: eCommerce_Web_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 80
port_range_max: 80
# create a specific application layer security group that routes database port 3306 traffic, web and ssh
app_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: An application-specific security group that passes ports 22, 80, and 3306
name: eCommerce_Services_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 80
port_range_max: 80
- protocol: tcp
port_range_min: 3306
port_range_max: 3306
# create a specific database security group that routes just database port 3306 traffic and ssh
db_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: A database-specific security group that passes only ports 3306 and 22 (SSH)
name: eCommerce_Database_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 3306
port_range_max: 3306
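Each tier's security group admits only the ports that tier needs: SSH everywhere, HTTP on the web and app tiers, and MySQL's 3306 on the app and database tiers. A sketch of how such a rule set filters a port (rule ranges copied from the database group above):

```python
# (port_range_min, port_range_max) pairs mirroring db_security_group:
# SSH plus MySQL only.
DB_RULES = [(22, 22), (3306, 3306)]

def port_allowed(port, rules):
    """Return True if any (min, max) range in the rule set covers the port."""
    return any(lo <= port <= hi for lo, hi in rules)

assert port_allowed(22, DB_RULES)      # SSH passes
assert port_allowed(3306, DB_RULES)    # MySQL passes
assert not port_allowed(80, DB_RULES)  # HTTP is blocked at the DB tier
```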
#####################################################
outputs:
#Return a bunch of values so we can use them later in the Parent Heat template when we spin up servers
db_private_network_id:
description: Database private network ID
value: {get_resource: DB_private_network}
web_private_network_id:
description: Web private network ID
value: {get_resource: web_private_network}
web_private_subnet_id:
description: Web private subnet ID
value: {get_resource: web_private_network_subnet}
app_private_network_id:
description: App private network ID
value: {get_resource: App_private_network}
app_private_subnet_id:
description: App private subnet ID
value: {get_resource: App_private_network_subnet}
db_security_group_id:
description: Database security group ID
value: {get_resource: db_security_group}
app_security_group_id:
description: App security group ID
value: {get_resource: app_security_group}
web_security_group_id:
description: Web security group ID
value: {get_resource: web_security_group}
web_lbaas_pool_name:
description: Name of Web LBaaS Pool
value: {get_resource: Web_Server_Pool}
app_lbaas_pool_name:
description: Name of App LBaaS Pool
value: {get_resource: App_Server_Pool}
web_lbaas_IP:
description: Public floating IP assigned to web LBaaS
value: { get_attr: [ Web_Network_Floating_IP, floating_ip_address ] }
app_lbaas_IP:
description: Internal floating IP assigned to app LBaaS
value: {get_attr: [ App_Tier_LoadBalancer, vip_address]}


@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base
class TestCase(base.BaseTestCase):
"""Test case base class for all unit tests."""


@ -1,28 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_workload-ref-archs
----------------------------------
Tests for `workload-ref-archs` module.
"""
from workload_ref_archs.tests import base
class TestWorkloadRefArchs(base.TestCase):
def test_something(self):
pass
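The deleted unit-test stub relied on oslotest's `base.BaseTestCase`, which is itself built on `unittest.TestCase`. A self-contained equivalent using only the standard library (class and test names here are illustrative):

```python
import unittest

# Stdlib stand-in for the oslotest-based stub: same shape, no
# external dependency. Names are illustrative placeholders.
class TestWorkloadRefArchs(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

# Run the case programmatically instead of via unittest.main().
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestWorkloadRefArchs).run(result)
print("ran", result.testsRun, "test(s); success:", result.wasSuccessful())
```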


@ -1,70 +0,0 @@
OpenStack Workload Reference Architecture: <workload-name>
==========================================================
Introduction
------------
This section provides detailed information on what the workload is.
Please include a high-level overview diagram whenever possible.
OpenStack for <workload>
------------------------
This section provides detailed information on which OpenStack services
are required to support this workload.
Please include all **core** and **optional** services.
Please provide a brief description of each service used in this document.
For example:
.. list-table:: **Core services**
:widths: 20 50
* - Compute (Nova)
- Manages the life cycle of compute instances, including spawning,
scheduling, and decommissioning of virtual machines (VMs) on demand.
* - Object Storage (Swift)
- Stores and retrieves arbitrary unstructured data objects via a RESTful
HTTP-based API. Highly fault-tolerant with data replication and
scale-out architecture.
Please include a logical representation diagram and a functional interaction
diagram of the OpenStack services whenever possible.
Structuring a <workload> with OpenStack
---------------------------------------
This section provides detailed information on the workload requirements and
specifies how each of the OpenStack services (mentioned in the previous
section) is used to satisfy them.
Please include a diagram of the deployment architecture whenever possible.
Demonstration and Sample Code
-----------------------------
Every workload must be accompanied by at least one code sample that can be
used to provision such a workload environment. This can be a Heat template,
Murano packages, or other code (e.g., Ansible).
This section provides a brief summary of the sample code. You do not need to
explain every single step in the sample code. However, please provide
sufficient information to explain what the sample code is trying to achieve.
Scope and Assumptions
---------------------
This section describes the specific scope, limitations, or assumptions made
for this workload reference architecture.
For example: The Heat template provided for this reference architecture
assumes that the web application workload is deployed in a single-region,
single-zone OpenStack environment. The deployment in a multi-zone/multi-region
environment is outside the scope of this document.
Summary
-------
This section concludes the document.