diff --git a/openstack-firstapp/README.rst b/openstack-firstapp/README.rst index 6993bc31f..ee514bac2 100644 --- a/openstack-firstapp/README.rst +++ b/openstack-firstapp/README.rst @@ -1,33 +1,31 @@ -**************************************** +======================================== Writing your First OpenStack Application -**************************************** +======================================== This repo contains the "Writing your First OpenStack Application" tutorial. The tutorials works with an application that can be found at: - https://github.com/stackforge/faafo --------------------------------- - /bin --------------------------------- + +/bin +~~~~ This document was initially written in 'sprint' style. /bin contains some useful scripts for the sprint, such as pads2files which faciliates the creation of files from an etherpad server using its API. --------------------------------- - /doc --------------------------------- +/doc +~~~~ /doc contains a playground for the actual tutorial documentation It's RST, built with sphinx. -The RST source includes conditional output logic, so specifying +The RST source includes conditional output logic, so specifying:: -tox -e libcloud + tox -e libcloud will invoke sphinx-build with -t libcloud, meaning sections marked .. only:: libcloud in the RST will be built, while others @@ -36,9 +34,9 @@ won't. sphinx and openstackdoctheme are needed to build the docs --------------------------------- - /samples --------------------------------- + +/samples +~~~~~~~~ The code samples provided in the guide are sourced from files in this directory. There is a sub-directory for each SDK. diff --git a/openstack-firstapp/doc/source/section1.rst b/openstack-firstapp/doc/source/section1.rst index 4ee2c3d68..6ad1a7640 100644 --- a/openstack-firstapp/doc/source/section1.rst +++ b/openstack-firstapp/doc/source/section1.rst @@ -23,7 +23,7 @@ Deploying applications in a cloud environment can be very different from the traditional siloed approachyou see in traditional IT, so in addition to learning to deploy applications on OpenStack, you will also learn some best practices for cloud application development. Overall, this guide covers the following: - + * :doc:`/section1` - The most basic cloud application -- creating and destroying virtual resources * :doc:`/section2` - The architecture of a sample cloud-based application * :doc:`/section3` - The importance of message queues @@ -64,11 +64,11 @@ anyone with a programming background. If you're a developer for an alternate toolkit and would like to see this book support it, great! Please feel free to submit alternate code snippets, or to -contact any of the authors or members of the Documentation team to coordinate. +contact any of the authors or members of the Documentation team to coordinate. Although this guide (initially) covers only Libcloud, you actually have several choices when it comes to building an application for an OpenStack cloud. -These choices include: +These choices include: ============= ============= ================================================================= ==================================================== Language Name Description URL @@ -99,7 +99,7 @@ You should have a project (tenant) with a quota of at least openSUSE-based distributions, so you'll need to be creating instances using one of these operating systems. -Interact with the cloud itself, you will also need to have +Interact with the cloud itself, you will also need to have .. 
only:: dotnet @@ -151,15 +151,15 @@ OpenStack Dashboard. To download this file, log into the Horizon dashboard and click Project->Access & Security->API Access->Download OpenStack RC file. If you choose this route, be aware that the "auth URL" doesn't include the path. In other words, if your openrc.sh file shows: - + .. code-block:: bash export OS_AUTH_URL=http://controller:5000/v2.0 - + the actual auth URL will be .. code-block:: python - + http://controller:5000 @@ -188,7 +188,7 @@ libcloud. :end-before: step-2 .. only:: openstacksdk - + .. code-block:: python from openstack import connection @@ -237,7 +237,7 @@ running some API calls: You should see a result something like: .. code-block:: python - + @@ -254,11 +254,11 @@ You can also get information on the various flavors: .. literalinclude:: ../../samples/libcloud/section1.py :start-after: step-3 :end-before: step-4 - + This code should produce output something like: - + .. code-block:: python - + @@ -301,7 +301,7 @@ image you have chosen to work with in the previous section: :end-before: step-5 You should see output something like this: - + .. code-block:: python @@ -322,11 +322,11 @@ Next tell the script what flavor you want to use: :end-before: step-6 You should see output something like this: - + .. code-block:: python - + Now you're ready to actually launch the instance. Booting an instance @@ -342,8 +342,8 @@ Now that you have selected an image and flavor, use it to create an instance. this is the case if you see an error stating 'Exception: 400 Bad Request Multiple possible networks found, use a Network ID to be more specific.' See :doc:`/appendix` for details. - -Start by creating the instance. + +Start by creating the instance. .. note:: An instance may be called a 'node' or 'server' by your SDK. @@ -360,7 +360,7 @@ Start by creating the instance. :end-before: step-7 You should see output something like: - + .. code-block:: python @@ -427,7 +427,7 @@ cloud resources. .. literalinclude:: ../../samples/libcloud/section1.py :start-after: step-8 :end-before: step-9 - + If you then list the instances again, you'll see that the instance no longer appears. @@ -608,7 +608,7 @@ Full example code ----------------- Here's every code snippet into a single file, in case you want to run it all in one, or -you are so experienced you don't need instruction ;) If you are going to use this, +you are so experienced you don't need instruction ;) If you are going to use this, don't forget to set your authentication information and the flavor and image ID. .. only:: libcloud diff --git a/openstack-firstapp/doc/source/section2.rst b/openstack-firstapp/doc/source/section2.rst index b5397180d..83eb01090 100644 --- a/openstack-firstapp/doc/source/section2.rst +++ b/openstack-firstapp/doc/source/section2.rst @@ -32,11 +32,11 @@ referenced in the previous section. .. only:: node .. warning:: This section has not yet been completed for the pkgcloud SDK - + .. only:: openstacksdk .. warning:: This section has not yet been completed for the OpenStack SDK - + .. only:: phpopencloud .. warning:: This section has not yet been completed for the PHP-OpenCloud SDK @@ -72,7 +72,7 @@ Fault Tolerance In cloud programming, there's a well-known analogy known as "cattle vs pets". If you haven't heard it before, it goes like this: - + When you're dealing with pets, you name them and care for them and if they get sick, you nurse them back to health. Nursing pets back to health can be difficult and very time consuming. 
When you're dealing with cattle, you attach a numbered tag to their ear and if they get sick you put them down and move on. @@ -82,12 +82,12 @@ servers, cared for by operations staff dedicated to keeping them healthy. If som servers, the staff's job was to do whatever it took to make it right again and save the server and the application. In cloud programming, it's very different. Rather than large, expensive servers, you're dealing with virtual -machines that are literally disposable; if something goes wrong, you shut it down and spin up a new one. There's +machines that are literally disposable; if something goes wrong, you shut it down and spin up a new one. There's still operations staff, but rather than nursing individual servers back to health, their job is to monitor the health of the overall system. There are definite advantages to this architecture. It's easy to get a "new" server, without any of the issues -that inevitably arise when a server has been up and running for months, or even years. +that inevitably arise when a server has been up and running for months, or even years. As with classical infrastructure, failures of the underpinning cloud infrastructure (hardware, networks, and software) are unavoidable. When you're designing for the cloud, it's crucial that your application is designed for an environment where failures @@ -99,7 +99,7 @@ Fault tolerance is essential to the cloud-based application. Automation ~~~~~~~~~~ -If an application is meant to automatically scale up and down to meet demand, it is not feasible have any manual +If an application is meant to automatically scale up and down to meet demand, it is not feasible have any manual steps in the process of deploying any component of the application. Automation also decreases the time to recovery for your application in the event of component failures, increasing fault tolerance and resilience. @@ -127,7 +127,7 @@ services. The Fractal app uses a so-called `work queue `_ (or task queue) to distribute tasks to the worker servies. -Message queues work in a way similar to a queue (or a line, for those of us on the other side of the ocean) in a bank being +Message queues work in a way similar to a queue (or a line, for those of us on the other side of the ocean) in a bank being served by multiple clerks. The message queue in our application provides a feed of work requests that can be taken one-at-a-time by worker services, whether there is a single worker service or hundreds of them. @@ -155,7 +155,7 @@ way of accessing the API to view the created fractal images, and a simple comman :figclass: align-center -There are also multiple storage backends (to store the generated fractal images) and a database +There are also multiple storage backends (to store the generated fractal images) and a database component (to store the state of tasks), but we'll talk about those in :doc:`/section4` and :doc:`/section5` respectively. How the Fractals app interacts with OpenStack @@ -172,7 +172,7 @@ The Magic Revisited ------------------- So what exactly was that request doing at the end of the previous section? -Let's look at it again. (Note that in this subsection, we're just explaining what +Let's look at it again. (Note that in this subsection, we're just explaining what you've already done in the previous section; you don't need to execute these commands again.) .. only:: libcloud @@ -206,7 +206,7 @@ for some guidance on which username you should use when SSHing. 
If you still ha :end-before: step-3 -Once the instance is created, cloud-init downloads and executes a script called :code:`install.sh`. +Once the instance is created, cloud-init downloads and executes a script called :code:`install.sh`. This script installs the Fractals app. Cloud-init is capable of consuming a number of different types of data, not just bash scripts. You can even provide multiple types of data. You can find further information about @@ -290,8 +290,8 @@ the address of the instance's internal network interface address. Your cloud pro To use a Floating IP, you must first allocate an IP to your project, then associate it to your instance's network interface. -.. note:: - +.. note:: + Allocating a Floating IP address to an instance does not change the IP address of the instance, it causes OpenStack to establish the network translation rules to allow an *additional* IP address. @@ -326,7 +326,7 @@ Now that you have an unused floating IP address allocated to your project, attac .. literalinclude:: ../../samples/libcloud/section2.py :start-after: step-10 :end-before: step-11 - + That brings us to where we ended up at the end of :doc:`/section1`. But where do we go from here? Splitting services across multiple instances @@ -354,7 +354,7 @@ Parameter Description Values .. todo:: https://bugs.launchpad.net/openstack-manuals/+bug/1439918 .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section2.py :start-after: step-11 @@ -373,7 +373,7 @@ Next, start a second instance, which will be the worker instance: .. literalinclude:: ../../samples/libcloud/section2.py :start-after: step-12 :end-before: step-13 - + Notice that you've added this instance to the worker_group, so it can access the controller. As you can see from the parameters passed to the installation script, you are specifying that this is the worker instance, but you're also passing the address of the API instance and the message @@ -410,7 +410,7 @@ Now you can SSH into the instance: .. note:: Replace :code:`IP_WORKER_1` with the IP address of the worker instance and USERNAME to the appropriate username. -Once you've logged in, check to see whether the worker service process is running as expected. +Once you've logged in, check to see whether the worker service process is running as expected. You can find the logs of the worker service in the directory :code:`/var/log/supervisor/`. :: @@ -428,7 +428,7 @@ Now log into the controller instance, :code:`app-controller`, also with SSH, usi .. note:: Replace :code:`IP_CONTROLLER` with the IP address of the controller instance and USERNAME to the appropriate username. -Check to see whether the API service process is running like expected. You can find the logs for the API service +Check to see whether the API service process is running like expected. You can find the logs for the API service in the directory :code:`/var/log/supervisor/`. :: @@ -436,7 +436,7 @@ in the directory :code:`/var/log/supervisor/`. controller # ps ax | grep faafo-api 17209 ? Sl 0:19 /usr/bin/python /usr/local/bin/faafo-api -Now call the Fractal app's command line interface (:code:`faafo`) to request a few new fractals. +Now call the Fractal app's command line interface (:code:`faafo`) to request a few new fractals. The following command will request a few fractals with random parameters: :: @@ -448,7 +448,7 @@ Watch :code:`top` on the worker instance. 
Right after calling :code:`faafo` the :: - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 17210 root 20 0 157216 39312 5716 R 98.8 3.9 12:02.15 faafo-worker To show the details of a specific fractal use the subcommand :code:`show` of the Faafo CLI. @@ -475,7 +475,7 @@ There are more commands available; find out more details about them with :code:` .. note:: The application stores the generated fractal images directly in the database used by the API service instance. Storing image files in database is not good practice. We're doing it here as an example only as an easy - way to allow multiple instances to have access to the data. For best practice, we recommend storing + way to allow multiple instances to have access to the data. For best practice, we recommend storing objects in Object Storage, which is covered in :doc:`section4`. Next Steps @@ -484,7 +484,7 @@ Next Steps You should now have a basic understanding of the architecture of cloud-based applications. In addition, you now have had practice starting new instances, automatically configuring them at boot, and even modularizing an application so that you may use multiple instances to run it. These are the basic -steps for requesting and using compute resources in order to run your application on an OpenStack cloud. +steps for requesting and using compute resources in order to run your application on an OpenStack cloud. From here, you should go to :doc:`/section3` to learn how to scale the application further. Alternately, you may jump to any of these sections: @@ -500,7 +500,7 @@ Full example code ----------------- Here's every code snippet into a single file, in case you want to run it all in one, or -you are so experienced you don't need instruction ;) If you are going to use this, +you are so experienced you don't need instruction ;) If you are going to use this, don't forget to set your authentication information and the flavor and image ID. .. only:: libcloud diff --git a/openstack-firstapp/doc/source/section3.rst b/openstack-firstapp/doc/source/section3.rst index f763cd721..1ad6decc1 100644 --- a/openstack-firstapp/doc/source/section3.rst +++ b/openstack-firstapp/doc/source/section3.rst @@ -21,14 +21,14 @@ In section 2, we talked about various aspects of the application architecture, s as building in a modular fashion, creating an API, and so on. Now you'll see why those are so important. By creating a modular application with decoupled services, it is possible to identify components that cause application performance bottlenecks -and scale them out. +and scale them out. Just as importantly, you can also remove resources when they are no longer necessary. -It is very difficult to overstate the cost savings that this feature can bring, as +It is very difficult to overstate the cost savings that this feature can bring, as compared to traditional infrastructure. Of course, just having access to additional resources is only part of the battle; -while it's certainly possible to manually add or destroy resources, you'll get more +while it's certainly possible to manually add or destroy resources, you'll get more value -- and more responsiveness -- if the application simply requests new resources automatically when it needs them. @@ -37,7 +37,7 @@ and highlights some of the choices we've made that facilitate scalability in the app's architecture. 
We'll progressively ramp up to use up to about 6 instances, so ensure -that your cloud account has appropriate quota to handle that many. +that your cloud account has appropriate quota to handle that many. In the previous section, we used two virtual machines - one 'control' service and one 'worker'. In our application, the speed at which fractals can be generated depends on the number of workers. @@ -194,7 +194,7 @@ to distribute tasks. Instead, we'll need to introduce some kind of load balancin to share incoming requests between the different API services. One simple way might be to give half of our friends one address and half the other, but that's certainly -not a sustainable solution. Instead, we can do that automatically using a `DNS round robin `_. +not a sustainable solution. Instead, we can do that automatically using a `DNS round robin `_. However, OpenStack networking can provide Load Balancing as a Service, which we'll explain in :doc:`/section7`. .. todo:: Add a note that we demonstrate this by using the first API instance for the workers and the second API instance for the load simulation. @@ -321,7 +321,7 @@ You should now be fairly confident about starting new instance, and about segreg As mentioned in :doc:`/section2` the generated fractals images will be saved on the local filesystem of the API service instances. Because we now have multiple API instances up and running the generated fractal images will be spreaded accross multiple API services, stored on local instance filesystems. This ends in a lot of :code:`IOError: [Errno 2] No such file or directory` exceptions when trying to download a fractal image from an API service instance not holding the fractal -image on its local filesystem. +image on its local filesystem. From here, you should go to :doc:`/section4` to learn how to use Object Storage to solve this problem in a elegant way. Alternately, you may jump to any of these sections: @@ -335,14 +335,10 @@ Full example code ----------------- Here's every code snippet into a single file, in case you want to run it all in one, or -you are so experienced you don't need instruction ;) If you are going to use this, +you are so experienced you don't need instruction ;) If you are going to use this, don't forget to set your authentication information and the flavor and image ID. .. only:: libcloud .. literalinclude:: ../../samples/libcloud/section3.py :language: python - - - - diff --git a/openstack-firstapp/doc/source/section4.rst b/openstack-firstapp/doc/source/section4.rst index 412ebccc8..48f78382d 100644 --- a/openstack-firstapp/doc/source/section4.rst +++ b/openstack-firstapp/doc/source/section4.rst @@ -10,8 +10,8 @@ Section Four: Making it Durable .. todo:: Large object support in Swift http://docs.openstack.org/developer/swift/overview_large_objects.html -This section introduces object storage. -`OpenStack Object Storage `_ +This section introduces object storage. +`OpenStack Object Storage `_ (code-named Swift) is open source software for creating redundant, scalable data storage using clusters of standardized servers to store petabytes of accessible data. It is a long-term storage system for large amounts of static data that can be @@ -20,12 +20,12 @@ like more traditional storage. There are a two key concepts to understand in the Object Storage API. The Object Storage API is organized around two types of entities: - + * Objects * Containers -Similar to the Unix programming model, an Object is a "bag of bytes" that contains data, -such as documents and images. 
Containers are used to group objects. +Similar to the Unix programming model, an Object is a "bag of bytes" that contains data, +such as documents and images. Containers are used to group objects. You can make many objects inside a container, and have many containers inside your account. If you think about how you traditionally make what you store durable, very quickly you should come @@ -47,7 +47,7 @@ generates. This is not scalable or durable, for a number of reasons. Because the local filesystem is ephemeral storage, if the instance is terminated, the fractal images will be lost along with the instance. Block based storage, which we'll discuss in :doc:`/section5`, -avoids that problem, but like local filesystems, it +avoids that problem, but like local filesystems, it requires administration to ensure that it does not fill up, and immediate attention if disks fail. The Object Storage service manages many of these tasks that normally would require the application owner @@ -71,14 +71,14 @@ First, let's learn how to connect to the Object Storage Endpoint: .. warning:: This section has not yet been completed for the jclouds SDK .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-1 :end-before: step-2 .. warning:: - + Libcloud 0.16 and 0.17 are afflicted with a bug that means authentication to a swift endpoint can fail with `a Python exception `_. If you encounter this, you can upgrade your libcloud version, or apply a simple @@ -104,7 +104,7 @@ To begin to store objects, we must first make a container. Call yours :code:`fractals`: .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-2 :end-before: step-3 @@ -119,22 +119,22 @@ You should now be able to see this container appear in a listing of all containers in your account: .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-3 :end-before: step-4 - + You should see output such as: .. code-block:: python - + [] The next logical step is to upload an object. Find a photo of a goat online, name it :code:`goat.jpg` and upload it to your container :code:`fractals`: .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-4 :end-before: step-5 @@ -143,30 +143,30 @@ List objects in your container :code:`fractals` to see if the upload was success the file to verify the md5sum is the same: .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-5 :end-before: step-6 - + :: - + [] - - + + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-6 :end-before: step-7 - + :: - + - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-7 :end-before: step-8 - + :: - + 7513986d3aeb22659079d1bf3dc2468b @@ -174,7 +174,7 @@ the file to verify the md5sum is the same: Finally, let's clean up by deleting our test object: .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-8 :end-before: step-9 @@ -186,7 +186,7 @@ Finally, let's clean up by deleting our test object: .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-9 :end-before: step-10 - + :: [] @@ -199,7 +199,7 @@ So let's now use the knowledge from above to backup the images of the Fractals a Use the :code:`fractals`' container from above to put the images in: .. only:: libcloud - + .. 
literalinclude:: ../../samples/libcloud/section4.py :start-after: step-10 :end-before: step-11 @@ -207,11 +207,11 @@ Use the :code:`fractals`' container from above to put the images in: Next, we backup all of our existing fractals from the database to our swift container. A simple for loop takes care of that: .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-11 :end-before: step-12 - + :: @@ -239,7 +239,7 @@ Ensure that you have removed all objects from the container before running this, it will fail: .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-12 :end-before: step-13 @@ -258,7 +258,7 @@ This is more efficient, especially for larger files. .. only:: libcloud - + .. literalinclude:: ../../samples/libcloud/section4.py :start-after: step-13 :end-before: step-14 diff --git a/openstack-firstapp/doc/source/section5.rst b/openstack-firstapp/doc/source/section5.rst index 0416ed01a..e3350961b 100644 --- a/openstack-firstapp/doc/source/section5.rst +++ b/openstack-firstapp/doc/source/section5.rst @@ -6,7 +6,7 @@ Section Five: Block Storage going to do.) By default, data in OpenStack instances is stored on 'ephemeral' disks. These stay with the instance throughout its lifetime, but when the -instance is terminated, that storage disappears -- along with all the data stored on it. Ephemeral storage is allocated to a +instance is terminated, that storage disappears -- along with all the data stored on it. Ephemeral storage is allocated to a single instance and cannot be moved to another instance. In this section, we will introduce block storage. Block storage (sometimes referred to as volume storage) provides you @@ -20,7 +20,7 @@ configured the images to be stored in Object Storage in the previous section, wi where in Object Storage they are, and the parameters that were used to create them. Advanced users should consider how to remove the database from the architecture altogether and replace it -with metadata in the Object Storage (then contribute these steps to :doc:`section9`). Others should read +with metadata in the Object Storage (then contribute these steps to :doc:`section9`). Others should read on to learn about how to work with block storage and move the Fractal app database server to use it. Basics @@ -45,11 +45,11 @@ but first - let's cover the basics, such as creating and attaching a block stora .. only:: node .. warning:: This section has not yet been completed for the pkgcloud SDK - + .. only:: openstacksdk .. warning:: This section has not yet been completed for the OpenStack SDK - + .. only:: phpopencloud .. warning:: This section has not yet been completed for the PHP-OpenCloud SDK @@ -82,14 +82,14 @@ As always, connect to the API endpoint: To try it out, make a 1GB volume called :test'. .. only:: libcloud - + .. code-block:: python - + volume = connection.create_volume(1, 'test') print(volume) - + :: - + .. note:: The parameter :code:`size` is in GigaBytes. @@ -97,14 +97,14 @@ To try it out, make a 1GB volume called :test'. List all volumes to see if it was successful: .. only:: libcloud - + .. code-block:: python - + volumes = connection.list_volumes() print(volumes) - + :: - + [] Now that you have created a storage volume, let's attach it to an already running instance. @@ -120,9 +120,9 @@ We will also need a new security group to allow access to the database server (for mysql, port 3306) from the network: .. only:: libcloud - + .. 
code-block:: python - + db_group = connection.ex_create_security_group('database', 'for database service') connection.ex_create_security_group_rule(db_group, 'TCP', 3306, 3306) instance = connection.create_node(name='app-database', @@ -135,9 +135,9 @@ Using the unique identifier (UUID) for the volume, make a new volume object, the use the server object from the previous snippet and attach the volume to it at :code:`/dev/vdb`: .. only:: libcloud - + .. code-block:: python - + volume = connection.ex_get_volume('755ab026-b5f2-4f53-b34a-6d082fb36689') connection.attach_volume(instance, volume, '/dev/vdb') @@ -192,17 +192,17 @@ You can detach the volume and re-attach it elsewhere, or destroy the volume with To detach and destroy a volume: .. only:: libcloud - + .. code-block:: python - + connection.detach_volume(volume) - + :: - + True - + .. code-block:: python - + connection.destroy_volume(volume) .. note:: :code:`detach_volume` and :code:`destroy_volume` take a volume object, not a name. @@ -210,14 +210,14 @@ To detach and destroy a volume: There are also many other useful features, such as the ability to create snapshots of volumes (handy for backups): .. only:: libcloud - + .. code-block:: python - + * snapshot_name = 'test_backup_1' connnection.create_volume_snapshot('test', name='test backup 1') - + .. todo:: Do we need a note here to mention that 'test' is the volume name and not the volume object? - + You can find information about these calls and more in the `libcloud documentation `_. @@ -242,16 +242,16 @@ http://docs.openstack.org/cli-reference/content/cli_openrc.html Ensure you have an openrc.sh file, source it and then check your trove client works: :: - + $ cat openrc.sh export OS_USERNAME=your_auth_username export OS_PASSWORD=your_auth_password export OS_TENANT_NAME=your_project_name export OS_AUTH_URL=http://controller:5000/v2.0 export OS_REGION_NAME=your_region_name - + $ source openrc.sh - + $ trove --version 1.0.9 @@ -271,5 +271,3 @@ refer to the volume documentation of your SDK, or try a different step in the tu * :doc:`/section6` - to automatically orchestrate the application * :doc:`/section7` - to learn about more complex networking * :doc:`/section8` - for advice for developers new to operations - - diff --git a/openstack-firstapp/doc/source/section6.rst b/openstack-firstapp/doc/source/section6.rst index 5f83a56aa..325c864dd 100644 --- a/openstack-firstapp/doc/source/section6.rst +++ b/openstack-firstapp/doc/source/section6.rst @@ -20,32 +20,32 @@ and even users. It also provides more advanced functionality, such as instance high availability, instance auto-scaling, and nested stacks. The OpenStack Orchestration API contains the following constructs: - + * Stacks * Resources * Templates Stacks are created from Templates, which contain Resources. Resources are an abstraction in the HOT (Heat Orchestration Template) template language, which enables you to define different -cloud resources by setting the `type` attibute. +cloud resources by setting the `type` attibute. For example, you might use the Orchestration API to create two compute instances by creating a Stack and by passing a Template to the Orchestration API. -That Template would contain two Resources with the `type` attribute set to `OS::Nova::Server`. +That Template would contain two Resources with the `type` attribute set to `OS::Nova::Server`. 
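To make that concrete, here is a rough, untested sketch of such a two-Resource Template, submitted through the python-heatclient library (the guide itself drives Orchestration through the :code:`heat` command line client later in this section). The endpoint, token, image, and flavor values are placeholders for values from your own cloud.

.. code-block:: python

    # Hypothetical sketch only: a minimal HOT template containing two
    # OS::Nova::Server resources, submitted with python-heatclient.
    # The endpoint, token, image, and flavor values are placeholders.
    from heatclient.client import Client

    template = '''
    heat_template_version: 2013-05-23
    resources:
      server_one:
        type: OS::Nova::Server
        properties:
          image: cirros-0.3.4-x86_64
          flavor: m1.small
      server_two:
        type: OS::Nova::Server
        properties:
          image: cirros-0.3.4-x86_64
          flavor: m1.small
    '''

    heat = Client('1', endpoint='http://controller:8004/v1/PROJECT_ID',
                  token='AUTH_TOKEN')
    heat.stacks.create(stack_name='two_instances', template=template)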
-That's a simplistic example, of course, but the flexibility of the Resource object +That's a simplistic example, of course, but the flexibility of the Resource object enables the creation of Templates that contain all the required cloud -infrastructure to run an application, such as load balancers, block storage volumes, +infrastructure to run an application, such as load balancers, block storage volumes, compute instances, networking topology, and security policies. .. note:: The Orchestration module isn't deployed by default in every cloud. If these commands don't work, it means the Orchestration API isn't available; ask your support team for assistance. -This section introduces the `HOT templating language `_, +This section introduces the `HOT templating language `_, and takes you throughsome of the common calls you will make when working with OpenStack Orchestration. Unlike previous sections of this guide, in which you used your SDK to programmatically interact with OpenStack, in this section you'll be using the Orchestration API directly through Template files, -so we'll work from the command line. +so we'll work from the command line. Install the 'heat' commandline client by following this guide: http://docs.openstack.org/cli-reference/content/install_clients.html @@ -58,15 +58,15 @@ http://docs.openstack.org/cli-reference/content/cli_openrc.html .. warning:: the .NET SDK does not currently support OpenStack Orchestration .. only:: fog - + .. note:: fog `does support OpenStack Orchestration `_. .. only:: jclouds - + .. warning:: Jclouds does not currently support OpenStack Orchestration. See this `bug report `_. .. only:: libcloud - + .. warning:: libcloud does not currently support OpenStack Orchestration. .. only:: node @@ -74,17 +74,17 @@ http://docs.openstack.org/cli-reference/content/cli_openrc.html .. note:: Pkgcloud supports OpenStack Orchestration :D:D:D but this section is `not written yet `_ .. only:: openstacksdk - + .. warning:: OpenStack SDK does not currently support OpenStack Orchestration. .. only:: phpopencloud - + .. note:: PHP-opencloud supports orchestration :D:D:D but this section is not written yet. HOT Templating Language ----------------------- -The best place to learn about the template syntax for OpenStack Orchestration is the +The best place to learn about the template syntax for OpenStack Orchestration is the `Heat Orchestration Template (HOT) Guide `_ You should read the HOT Guide first to learn how to create basic templates, their inputs and outputs. @@ -92,11 +92,11 @@ Working with Stacks: Basics --------------------------- .. todo:: - + This section needs to have a HOT template written for deploying the Fractal Application - + .. 
todo:: - + Replace the hello_world.yaml templte with the Fractal template * Stack create @@ -106,7 +106,7 @@ a Nova compute instance, with a few configuration settings passed in, such as an of an image: :: - + $ wget https://raw.githubusercontent.com/openstack/heat-templates/master/hot/hello_world.yaml $ heat stack-create --template-file hello_world.yaml \ --parameters admin_pass=Test123\;key_name=test\;image=5bbe4073-90c0-4ec9-833c-092459cc4539 hello_world @@ -119,7 +119,7 @@ of an image: The resulting stack creates a Nova instance automatically, which you can see here: :: - + $ nova list +--------------------------------------+---------------------------------+--------+------------+-------------+------------------+ | ID | Name | Status | Task State | Power State | Networks | @@ -130,7 +130,7 @@ The resulting stack creates a Nova instance automatically, which you can see her Verify that the stack was successfully created using the following command: :: - + $ heat stack-list +--------------------------------------+-------------+-----------------+----------------------+ | id | stack_name | stack_status | creation_time | @@ -171,7 +171,7 @@ Working with Stacks: Advanced .. todo:: needs a heat template that uses fractal app -With the use of the Orchestration API, the Fractal app can create an autoscaling +With the use of the Orchestration API, the Fractal app can create an autoscaling group for all parts of the application, in order to dynamically provision more compute resources during periods of heavy utilization, and also terminate compute instances to scale down, as demand decreases. @@ -196,4 +196,3 @@ refer to the volume documentation of your SDK, or try a different step in the tu * :doc:`/section7` - to learn about more complex networking * :doc:`/section8` - for advice for developers new to operations * :doc:`/section9` - to see all the crazy things we think ordinary folks won't want to do ;) - diff --git a/openstack-firstapp/doc/source/section7.rst b/openstack-firstapp/doc/source/section7.rst index e682b0be2..5a1b632d9 100644 --- a/openstack-firstapp/doc/source/section7.rst +++ b/openstack-firstapp/doc/source/section7.rst @@ -33,11 +33,11 @@ database, webserver, file storage, and worker components. .. only:: node .. warning:: Pkgcloud supports the OpenStack Networking API, but this section has not been completed - + .. only:: openstacksdk .. warning:: This section has not yet been completed for the OpenStack SDK - + .. only:: phpopencloud .. warning:: PHP-OpenCloud supports the OpenStack Networking API, but this section has not been completed @@ -56,34 +56,34 @@ http://docs.openstack.org/cli-reference/content/cli_openrc.html Ensure you have an openrc.sh file, source it and then check your neutron client works: :: - + $ cat openrc.sh export OS_USERNAME=your_auth_username export OS_PASSWORD=your_auth_password export OS_TENANT_NAME=your_project_name export OS_AUTH_URL=http://controller:5000/v2.0 export OS_REGION_NAME=your_region_name - + $ source openrc.sh - + $ neutron --version 2.3.11 Networking Segmentation ----------------------- -In traditional datacenters, multiple network segments are +In traditional datacenters, multiple network segments are dedicated to specific types of network traffic. 
The fractal application we are building contains three types of network traffic: * public-facing wev traffic * API traffic -* internal worker traffic +* internal worker traffic -For performance reasons, it makes sense to have a network for each tier, -so that traffic from one tier does not "crowd out" other types of traffic -and cause the application to fail. In addition, having separate networks makes +For performance reasons, it makes sense to have a network for each tier, +so that traffic from one tier does not "crowd out" other types of traffic +and cause the application to fail. In addition, having separate networks makes controlling access to parts of the application easier to manage, improving the overall security of the application. @@ -109,15 +109,15 @@ Prior to this section, the network layout for the Fractal application would be s } } -In this network layout, we are assuming that the OpenStack cloud in which -you have been building your application has a public -network and tenant router that was already created in advance, either by the -administrators of the cloud you are running the Fractal application on, +In this network layout, we are assuming that the OpenStack cloud in which +you have been building your application has a public +network and tenant router that was already created in advance, either by the +administrators of the cloud you are running the Fractal application on, or by you, following the instructions in the appendix. -Many of the network concepts that are discussed in this section are +Many of the network concepts that are discussed in this section are already present in the diagram above. A tenant router provides -routing and external access for the worker nodes, and floating IP addresses +routing and external access for the worker nodes, and floating IP addresses are already associated with each node in the Fractal application cluster to facilitate external access. @@ -134,7 +134,7 @@ will be accessible by fractal aficionados worldwide, by allocating floating IPs address = "203.0.113.0/24" tenant_router [ address = "203.0.113.60"]; } - + network webserver_network{ address = "10.0.2.0/24" tenant_router [ address = "10.0.2.1"]; @@ -159,7 +159,7 @@ will be accessible by fractal aficionados worldwide, by allocating floating IPs Introduction to Tenant Networking --------------------------------- -With the OpenStack Networking API, the workflow for creating a network topology that separates the public-facing +With the OpenStack Networking API, the workflow for creating a network topology that separates the public-facing Fractals app API from the worker backend is as follows: * Create a network for the web server nodes. @@ -177,8 +177,8 @@ Fractals app API from the worker backend is as follows: Creating Networks ----------------- -We assume that the public network, with the subnet that floating IPs can be allocated from, was provisioned -for you by your cloud operator. This is due to the nature of L3 routing, where the IP address range that +We assume that the public network, with the subnet that floating IPs can be allocated from, was provisioned +for you by your cloud operator. This is due to the nature of L3 routing, where the IP address range that is used for floating IPs is configured in other parts of the operator's network, so that traffic is properly routed. .. todo:: Rework the console outputs in these sections to be more comprehensive, based on the outline above @@ -350,7 +350,7 @@ by your cloud administrator. 
| status | DOWN | | tenant_id | 0cb06b70ef67424b8add447415449722 | +---------------------+--------------------------------------+ - + $ neutron floatingip-create public Created a new floatingip: +---------------------+--------------------------------------+ @@ -375,10 +375,10 @@ Next we'll need to enable OpenStack to route traffic appropriately. Creating the SNAT gateway ------------------------- -Because we are using cloud-init and other tools to deploy and bootstrap the application, -the Fractal app worker instances require Source Network Address Translation (SNAT). +Because we are using cloud-init and other tools to deploy and bootstrap the application, +the Fractal app worker instances require Source Network Address Translation (SNAT). If the Fractal app worker nodes were deployed from a "golden image" -that had all the software components already installed, there would be no need to create a +that had all the software components already installed, there would be no need to create a Neutron router to provide SNAT functionality. .. todo :: nickchase doesn't understand the above paragraph. Why wouldn't it be required? @@ -397,7 +397,7 @@ Neutron router to provide SNAT functionality. | routes | | | status | ACTIVE | | tenant_id | f77bf3369741408e89d8f6fe090d29d2 | - +-----------------------+--------------------------------------+ + +-----------------------+--------------------------------------+ After creating the router, you need to set up the gateway for the router. For outbound access we will set the router's gateway as the public network. @@ -488,8 +488,8 @@ Ensure you use appropriate flavor and image values for your cloud - see :doc:`se Load Balancing -------------- -After separating the Fractal worker nodes into their own network, -the next logical step is to move the Fractal API service onto a load balancer, +After separating the Fractal worker nodes into their own network, +the next logical step is to move the Fractal API service onto a load balancer, so that multiple API workers can handle requests. By using a load balancer, the API service can be scaled out in a similar fashion to the worker nodes. @@ -502,7 +502,7 @@ Neutron LbaaS API The OpenStack Networking API provides support for creating loadbalancers, which can be used to scale the Fractal app web service. In the following example, we create two compute instances via the Compute -API, then instantiate a loadbalancer that will use a virtual IP (VIP) for accessing the web service offered by +API, then instantiate a loadbalancer that will use a virtual IP (VIP) for accessing the web service offered by the two compute nodes. The end result will be the following network topology: .. nwdiag:: @@ -528,7 +528,7 @@ libcloud support added 0.14: https://developer.rackspace.com/blog/libcloud-0-dot Let's start by looking at what's already in place. :: - + $ neutron net-list +--------------------------------------+-------------------+-----------------------------------------------------+ | id | name | subnets | @@ -574,7 +574,7 @@ Now let's go ahead and create 2 instances. 
+--------------------------------------+-----------------------------------------------------------------+ Confirm that they were added: - + :: $ nova list @@ -586,7 +586,7 @@ Confirm that they were added: +--------------------------------------+--------+--------+------------+-------------+------------------+ Now let's look at what ports are available: - + :: $ neutron port-list @@ -632,8 +632,8 @@ Next create additional floating IPs by specifying the fixed IP addresses they sh | tenant_id | 0cb06b70ef67424b8add447415449722 | +---------------------+--------------------------------------+ -All right, now you're ready to go ahead and create members for the load balancer pool, referencing the floating IPs: - +All right, now you're ready to go ahead and create members for the load balancer pool, referencing the floating IPs: + :: $ neutron lb-member-create --address 203.0.113.21 --protocol-port 80 mypool @@ -669,7 +669,7 @@ All right, now you're ready to go ahead and create members for the load balancer +--------------------+--------------------------------------+ You should be able to see them in the member list: - + :: $ neutron lb-member-list @@ -707,7 +707,7 @@ so that client requests are routed to another active member. Associated health monitor 663345e6-2853-43b2-9ccb-a623d5912345 Now create a virtual IP that will be used to direct traffic between the various members of the pool: - + :: $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id 47fd3ff1-ead6-4d23-9ce6-2e66a3dae425 mypool @@ -733,7 +733,7 @@ Now create a virtual IP that will be used to direct traffic between the various +---------------------+--------------------------------------+ And confirm it's in place: - + :: $ neutron lb-vip-list @@ -761,7 +761,7 @@ nature of the application itself. tenant_router [ address = "203.0.113.60"]; loadbalancer [ address = "203.0.113.63" ]; } - + network webserver_network{ address = "10.0.2.0/24" tenant_router [ address = "10.0.2.1"]; @@ -793,5 +793,3 @@ refer to the volume documentation of your SDK, or try a different step in the tu * :doc:`/section8` - for advice for developers new to operations * :doc:`/section9` - to see all the crazy things we think ordinary folks won't want to do ;) - - diff --git a/openstack-firstapp/doc/source/section8.rst b/openstack-firstapp/doc/source/section8.rst index b750c6882..d1865d030 100644 --- a/openstack-firstapp/doc/source/section8.rst +++ b/openstack-firstapp/doc/source/section8.rst @@ -51,10 +51,10 @@ Phoenix Servers --------------- Application developers and operators who employ -`Phoenix Servers `_ +`Phoenix Servers `_ have built systems that start from a known baseline (sometimes just a specific version of an operating system) and have built tooling that will automatically -build, install, and configure a system with no manual intervention. +build, install, and configure a system with no manual intervention. Phoenix Servers, named for the mythological bird that would live its life, be consumed by fire, then rise from the ashes to live again, make it possible @@ -95,7 +95,7 @@ For example, do you: * make packaged releases that update infrequently? * big-bang test in a development environment and deploy only after major changes? -One of the latest trends in deploying scalable cloud applications is +One of the latest trends in deploying scalable cloud applications is `continuous integration `_ / `continuous deployment `_ (CI/CD). 
Working in a CI/CD fashion means @@ -115,5 +115,3 @@ needed to ensure that 'gold' images do not fall behind on security updates. Fail Fast --------- - - diff --git a/openstack-firstapp/doc/source/section9.rst index 02e62e5dc..d64d45ffb 100644 --- a/openstack-firstapp/doc/source/section9.rst +++ b/openstack-firstapp/doc/source/section9.rst @@ -37,7 +37,7 @@ Use conf.d and etc.d. In earlier sections, the Fractal Application uses an install script, with parameters passed in from the metadata API, in order to bootstrap the cluster. `Etcd `_ is "a distributed, consistent key value store for shared configuration and service discovery" -that can be used for storing configuration. Updated versions of the Fractal worker +that can be used for storing configuration. Updated versions of the Fractal worker component could be written to connect to Etcd, or use `Confd `_ which will poll for changes from Etcd and write changes to a configuration file on the local filesystem, which the Fractal worker could use for configuration.
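As a rough sketch of what that might look like (this is not part of the Fractals sample code), the snippet below uses the python-etcd client to publish and then read back a couple of settings; the Etcd address and the key names are invented for illustration.

.. code-block:: python

    # Hypothetical sketch only: publish and read worker settings with the
    # python-etcd client. The host, port, and key names are illustrative.
    import etcd

    client = etcd.Client(host='192.0.2.10', port=2379)

    # An operator (or the API service) publishes shared settings once ...
    client.write('/faafo/endpoint_url', 'http://203.0.113.20')
    client.write('/faafo/transport_url', 'rabbit://guest:guest@203.0.113.20:5672/')

    # ... and each worker reads them at start-up instead of relying on values
    # baked in by the install script.
    endpoint_url = client.read('/faafo/endpoint_url').value
    transport_url = client.read('/faafo/transport_url').value
    print(endpoint_url, transport_url)

A worker written this way could also pick up configuration changes by re-reading or watching those keys, rather than being re-deployed with new install script parameters.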