diff --git a/doc/source/admin/objectstorage-EC.rst b/doc/source/admin/objectstorage-EC.rst index e01179ab2d..6f1996b537 100644 --- a/doc/source/admin/objectstorage-EC.rst +++ b/doc/source/admin/objectstorage-EC.rst @@ -7,9 +7,7 @@ missing data from a set of original data. In theory, erasure coding uses less capacity with similar durability characteristics as replicas. From an application perspective, erasure coding support is transparent. Object Storage (swift) implements erasure coding as a Storage Policy. -See `Storage Policies -`_ -for more details. +See :doc:`/overview_policies` for more details. There is no external API related to erasure coding. Create a container using a Storage Policy; the interaction with the cluster is the same as any diff --git a/doc/source/admin/objectstorage-large-objects.rst b/doc/source/admin/objectstorage-large-objects.rst index 33d803bc30..5873573c4f 100644 --- a/doc/source/admin/objectstorage-large-objects.rst +++ b/doc/source/admin/objectstorage-large-objects.rst @@ -30,6 +30,5 @@ The large object is comprised of two types of objects: To find out more information on large object support, see `Large objects `_ -in the OpenStack End User Guide, or `Large Object Support -`_ +in the OpenStack End User Guide, or :doc:`/overview_large_objects` in the developer documentation. diff --git a/doc/source/admin/objectstorage-monitoring.rst b/doc/source/admin/objectstorage-monitoring.rst index dac37f794e..0542ae7269 100644 --- a/doc/source/admin/objectstorage-monitoring.rst +++ b/doc/source/admin/objectstorage-monitoring.rst @@ -4,7 +4,7 @@ Object Storage monitoring .. note:: - This section was excerpted from a blog post by `Darrell + This section was excerpted from a `blog post by Darrell Bishop `_ and has since been edited. @@ -17,8 +17,7 @@ usage and utilization, and so on is necessary, but not sufficient. 
Swift Recon ~~~~~~~~~~~ -The Swift Recon middleware (see -`Defining Storage Policies `_) +The Swift Recon middleware (see :ref:`cluster_telemetry_and_monitoring`) provides general machine statistics, such as load average, socket statistics, ``/proc/meminfo`` contents, as well as Swift-specific meters: @@ -127,8 +126,7 @@ after-the-fact log processing, the sending of StatsD meters is integrated into Object Storage itself. The submitted change set (see ``_) currently reports 124 meters across 15 Object Storage daemons and the tempauth middleware. Details of -the meters tracked are in the `Administrator's -Guide `_. +the meters tracked are in the :doc:`/admin_guide`. The sending of meters is integrated with the logging framework. To enable, configure ``log_statsd_host`` in the relevant config file. You diff --git a/doc/source/admin_guide.rst b/doc/source/admin_guide.rst index 21de899779..67ddd958e4 100644 --- a/doc/source/admin_guide.rst +++ b/doc/source/admin_guide.rst @@ -22,9 +22,9 @@ before planning the upgrade of your existing deployment. Following is a high level view of the very few steps it takes to configure policies once you have decided what you want to do: - #. Define your policies in ``/etc/swift/swift.conf`` - #. Create the corresponding object rings - #. Communicate the names of the Storage Policies to cluster users +#. Define your policies in ``/etc/swift/swift.conf`` +#. Create the corresponding object rings +#. Communicate the names of the Storage Policies to cluster users For a specific example that takes you through these steps, please see :doc:`policies_saio` @@ -56,6 +56,8 @@ of the downgrade. For more information see :doc:`overview_ring`. +.. highlight:: none + Removing a device from the ring:: swift-ring-builder remove / @@ -259,7 +261,7 @@ errors are detected, it will unmount the bad drive, so that Swift can work around it. 
The script takes a configuration file with the following settings: -[drive-audit] +``[drive-audit]`` ================== ============== =========================================== Option Default Description @@ -292,6 +294,8 @@ using a different distro or OS, some care should be taken before using in produc Preventing Disk Full Scenarios ------------------------------ +.. highlight:: cfg + Prevent disk full scenarios by ensuring that the ``proxy-server`` blocks PUT requests and rsync prevents replication to the specific drives. @@ -373,6 +377,8 @@ included, by specifying ``&include`` in your ``rsync.conf`` file: Use this in conjunction with a cron job to periodically run the script, for example: +.. highlight:: none + .. code:: # /etc/cron.d/devicecheck @@ -408,6 +414,8 @@ object names until they fall on distinct partitions. Last, and repeatedly for the life of the cluster, we need to run the swift-dispersion-report tool to check the health of each of these containers and objects. +.. highlight:: cfg + These tools need direct access to the entire cluster and to the ring files (installing them on a proxy server will probably do). Both swift-dispersion-populate and swift-dispersion-report use the same @@ -419,6 +427,8 @@ configuration file, /etc/swift/dispersion.conf. Example conf file:: auth_key = testing endpoint_type = internalURL +.. highlight:: none + There are also options for the conf file for specifying the dispersion coverage (defaults to 1%), retries, concurrency, etc. though usually the defaults are fine. If you want to use keystone v3 for authentication there are options like @@ -512,51 +522,51 @@ described in their respective documentation. The following points should be considered when selecting the feature that is most appropriate for a particular use case: - #. 
Global Clusters allows the distribution of object replicas across - data-centers to be controlled by the cluster operator on per-policy basis, - since the distribution is determined by the assignment of devices from - each data-center in each policy's ring file. With Container Sync the end - user controls the distribution of objects across clusters on a - per-container basis. +#. Global Clusters allows the distribution of object replicas across + data-centers to be controlled by the cluster operator on per-policy basis, + since the distribution is determined by the assignment of devices from + each data-center in each policy's ring file. With Container Sync the end + user controls the distribution of objects across clusters on a + per-container basis. - #. Global Clusters requires an operator to coordinate ring deployments across - multiple data-centers. Container Sync allows for independent management of - separate Swift clusters in each data-center, and for existing Swift - clusters to be used as peers in Container Sync relationships without - deploying new policies/rings. +#. Global Clusters requires an operator to coordinate ring deployments across + multiple data-centers. Container Sync allows for independent management of + separate Swift clusters in each data-center, and for existing Swift + clusters to be used as peers in Container Sync relationships without + deploying new policies/rings. - #. Global Clusters seamlessly supports features that may rely on - cross-container operations such as large objects and versioned writes. - Container Sync requires the end user to ensure that all required - containers are sync'd for these features to work in all data-centers. +#. Global Clusters seamlessly supports features that may rely on + cross-container operations such as large objects and versioned writes. + Container Sync requires the end user to ensure that all required + containers are sync'd for these features to work in all data-centers. - #. 
Global Clusters makes objects available for GET or HEAD requests in both - data-centers even if a replica of the object has not yet been - asynchronously migrated between data-centers, by forwarding requests - between data-centers. Container Sync is unable to serve requests for an - object in a particular data-center until the asynchronous sync process has - copied the object to that data-center. +#. Global Clusters makes objects available for GET or HEAD requests in both + data-centers even if a replica of the object has not yet been + asynchronously migrated between data-centers, by forwarding requests + between data-centers. Container Sync is unable to serve requests for an + object in a particular data-center until the asynchronous sync process has + copied the object to that data-center. - #. Global Clusters may require less storage capacity than Container Sync to - achieve equivalent durability of objects in each data-center. Global - Clusters can restore replicas that are lost or corrupted in one - data-center using replicas from other data-centers. Container Sync - requires each data-center to independently manage the durability of - objects, which may result in each data-center storing more replicas than - with Global Clusters. +#. Global Clusters may require less storage capacity than Container Sync to + achieve equivalent durability of objects in each data-center. Global + Clusters can restore replicas that are lost or corrupted in one + data-center using replicas from other data-centers. Container Sync + requires each data-center to independently manage the durability of + objects, which may result in each data-center storing more replicas than + with Global Clusters. - #. Global Clusters execute all account/container metadata updates - synchronously to account/container replicas in all data-centers, which may - incur delays when making updates across WANs. 
Container Sync only copies - objects between data-centers and all Swift internal traffic is - confined to each data-center. +#. Global Clusters executes all account/container metadata updates + synchronously to account/container replicas in all data-centers, which may + incur delays when making updates across WANs. Container Sync only copies + objects between data-centers and all Swift internal traffic is + confined to each data-center. - #. Global Clusters does not yet guarantee the availability of objects stored - in Erasure Coded policies when one data-center is offline. With Container - Sync the availability of objects in each data-center is independent of the - state of other data-centers once objects have been synced. Container Sync - also allows objects to be stored using different policy types in different - data-centers. +#. Global Clusters does not yet guarantee the availability of objects stored + in Erasure Coded policies when one data-center is offline. With Container + Sync the availability of objects in each data-center is independent of the + state of other data-centers once objects have been synced. Container Sync + also allows objects to be stored using different policy types in different + data-centers. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Checking handoff partition distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -635,6 +645,8 @@ especially :ref:`swift-recon -r ` how to check replication stats. +.. _cluster_telemetry_and_monitoring: + -------------------------------- Cluster Telemetry and Monitoring -------------------------------- @@ -644,6 +656,8 @@ object servers using the recon server middleware and the swift-recon cli. To do so update your account, container, or object servers pipelines to include recon and add the associated filter config. +.. highlight:: cfg + object-server.conf sample:: [pipeline:main] @@ -671,6 +685,8 @@ account-server.conf sample:: [pipeline:main] pipeline = recon account-server [filter:recon] use = egg:swift#recon recon_cache_path = /var/cache/swift +.. 
highlight:: none + The recon_cache_path simply sets the directory where stats for a few items will be stored. Depending on the method of deployment you may need to create this directory manually and ensure that Swift has read/write access. @@ -782,6 +798,8 @@ For example, to obtain container replication info from all hosts in zone "3":: Reporting Metrics to StatsD --------------------------- +.. highlight:: cfg + If you have a StatsD_ server running, Swift may be configured to send it real-time operational metrics. To enable this, set the following configuration entries (see the sample configuration files):: @@ -1324,7 +1342,7 @@ found. Managing Services ----------------- -Swift services are generally managed with `swift-init`. the general usage is +Swift services are generally managed with ``swift-init``. The general usage is ``swift-init ``, where service is the Swift service to manage (for example object, container, account, proxy) and command is one of: @@ -1340,13 +1358,14 @@ reload Attempt to gracefully restart the service A graceful shutdown or reload will finish any current requests before completely stopping the old service. There is also a special case of -`swift-init all `, which will run the command for all swift services. +``swift-init all ``, which will run the command for all swift +services. In cases where there are multiple configs for a service, a specific config can be managed with ``swift-init . ``. For example, when a separate replication network is used, there might be -`/etc/swift/object-server/public.conf` for the object server and -`/etc/swift/object-server/replication.conf` for the replication services. +``/etc/swift/object-server/public.conf`` for the object server and +``/etc/swift/object-server/replication.conf`` for the replication services. In this case, the replication services could be restarted with ``swift-init object-server.replication restart``. 
@@ -1358,13 +1377,16 @@ On system failures, the XFS file system can sometimes truncate files it's trying to write and produce zero-byte files. The object-auditor will catch these problems but in the case of a system crash it would be advisable to run an extra, less rate limited sweep to check for these specific files. You can -run this command as follows: -`swift-object-auditor /path/to/object-server/config/file.conf once -z 1000` -"-z" means to only check for zero-byte files at 1000 files per second. +run this command as follows:: + + swift-object-auditor /path/to/object-server/config/file.conf once -z 1000 + +``-z`` means to only check for zero-byte files at 1000 files per second. At times it is useful to be able to run the object auditor on a specific -device or set of devices. You can run the object-auditor as follows: -swift-object-auditor /path/to/object-server/config/file.conf once --devices=sda,sdb +device or set of devices. You can run the object-auditor as follows:: + + swift-object-auditor /path/to/object-server/config/file.conf once --devices=sda,sdb This will run the object auditor on only the sda and sdb devices. This param accepts a comma separated list of values. @@ -1374,11 +1396,12 @@ Object Replicator ----------------- At times it is useful to be able to run the object replicator on a specific -device or partition. You can run the object-replicator as follows: -swift-object-replicator /path/to/object-server/config/file.conf once --devices=sda,sdb +device or partition. You can run the object-replicator as follows:: + + swift-object-replicator /path/to/object-server/config/file.conf once --devices=sda,sdb This will run the object replicator on only the sda and sdb devices. You can -likewise run that command with --partitions. Both params accept a comma +likewise run that command with ``--partitions``. Both params accept a comma separated list of values. If both are specified they will be ANDed together. These can only be run in "once" mode. 
@@ -1389,8 +1412,8 @@ Swift Orphans Swift Orphans are processes left over after a reload of a Swift server. For example, when upgrading a proxy server you would probably finish -with a `swift-init proxy-server reload` or `/etc/init.d/swift-proxy -reload`. This kills the parent proxy server process and leaves the +with a ``swift-init proxy-server reload`` or ``/etc/init.d/swift-proxy +reload``. This kills the parent proxy server process and leaves the child processes running to finish processing whatever requests they might be handling at the time. It then starts up a new parent proxy server process and its children to handle new incoming requests. This @@ -1400,16 +1423,16 @@ The orphaned child processes may take a while to exit, depending on the length of the requests they were handling. However, sometimes an old process can be hung up due to some bug or hardware issue. In these cases, these orphaned processes will hang around -forever. `swift-orphans` can be used to find and kill these orphans. +forever. ``swift-orphans`` can be used to find and kill these orphans. -`swift-orphans` with no arguments will just list the orphans it finds +``swift-orphans`` with no arguments will just list the orphans it finds that were started more than 24 hours ago. You shouldn't really check for orphans until 24 hours after you perform a reload, as some -requests can take a long time to process. `swift-orphans -k TERM` will -send the SIG_TERM signal to the orphans processes, or you can `kill --TERM` the pids yourself if you prefer. +requests can take a long time to process. ``swift-orphans -k TERM`` will +send the SIGTERM signal to the orphan processes, or you can ``kill +-TERM`` the pids yourself if you prefer. -You can run `swift-orphans --help` for more options. +You can run ``swift-orphans --help`` for more options. ------------ @@ -1420,11 +1443,11 @@ Swift Oldies are processes that have just been around for a long time. 
There's nothing necessarily wrong with this, but it might indicate a hung process if you regularly upgrade and reload/restart services. You might have so many servers that you don't notice when a -reload/restart fails; `swift-oldies` can help with this. +reload/restart fails; ``swift-oldies`` can help with this. For example, if you upgraded and reloaded/restarted everything 2 days -ago, and you've already cleaned up any orphans with `swift-orphans`, -you can run `swift-oldies -a 48` to find any Swift processes still +ago, and you've already cleaned up any orphans with ``swift-orphans``, +you can run ``swift-oldies -a 48`` to find any Swift processes still around that were started more than 2 days ago and then investigate them accordingly. @@ -1436,7 +1459,7 @@ Custom Log Handlers Swift supports setting up custom log handlers for services by specifying a comma-separated list of functions to invoke when logging is setup. It does so -via the `log_custom_handlers` configuration option. Logger hooks invoked are +via the ``log_custom_handlers`` configuration option. Logger hooks invoked are passed the same arguments as Swift's get_logger function (as well as the getLogger and LogAdapter object): @@ -1470,7 +1493,6 @@ See :ref:`custom-logger-hooks-label` for sample use cases. Securing OpenStack Swift ------------------------ -Please refer to the security guides at: - -* http://docs.openstack.org/sec/ -* http://docs.openstack.org/security-guide/content/object-storage.html +Please refer to the security guide at http://docs.openstack.org/security-guide +and in particular the `Object Storage +`__ section. diff --git a/doc/source/associated_projects.rst b/doc/source/associated_projects.rst index 3077fd76b2..c59014d4c1 100644 --- a/doc/source/associated_projects.rst +++ b/doc/source/associated_projects.rst @@ -3,30 +3,31 @@ Associated Projects =================== +.. 
_application-bindings: Application Bindings -------------------- * OpenStack supported binding: - * `Python-SwiftClient `_ + * `Python-SwiftClient `_ * Unofficial libraries and bindings: - * `PHP-opencloud `_ - Official Rackspace PHP bindings that should work for other Swift deployments too. - * `PyRAX `_ - Official Rackspace Python bindings for CloudFiles that should work for other Swift deployments too. - * `openstack.net `_ - Official Rackspace .NET bindings that should work for other Swift deployments too. - * `RSwift `_ - R API bindings. - * `Go language bindings `_ - * `supload `_ - Bash script to upload file to cloud storage based on OpenStack Swift API. - * `libcloud `_ - Apache Libcloud - a unified interface in Python for different clouds with OpenStack Swift support. - * `SwiftBox `_ - C# library using RestSharp - * `jclouds `_ - Java library offering bindings for all OpenStack projects - * `java-openstack-swift `_ - Java bindings for OpenStack Swift - * `swift_client `_ - Small but powerful Ruby client to interact with OpenStack Swift - * `nightcrawler_swift `_ - This Ruby gem teleports your assets to a OpenStack Swift bucket/container - * `swift storage `_ - Simple OpenStack Swift storage client. - * `javaswift `_ - Collection of Java tools for Swift + * `PHP-opencloud `_ - Official Rackspace PHP bindings that should work for other Swift deployments too. + * `PyRAX `_ - Official Rackspace Python bindings for CloudFiles that should work for other Swift deployments too. + * `openstack.net `_ - Official Rackspace .NET bindings that should work for other Swift deployments too. + * `RSwift `_ - R API bindings. + * `Go language bindings `_ + * `supload `_ - Bash script to upload file to cloud storage based on OpenStack Swift API. + * `libcloud `_ - Apache Libcloud - a unified interface in Python for different clouds with OpenStack Swift support. 
+ * `SwiftBox `_ - C# library using RestSharp + * `jclouds `_ - Java library offering bindings for all OpenStack projects + * `java-openstack-swift `_ - Java bindings for OpenStack Swift + * `swift_client `_ - Small but powerful Ruby client to interact with OpenStack Swift + * `nightcrawler_swift `_ - This Ruby gem teleports your assets to an OpenStack Swift bucket/container + * `swift storage `_ - Simple OpenStack Swift storage client. + * `javaswift `_ - Collection of Java tools for Swift Authentication -------------- diff --git a/doc/source/first_contribution_swift.rst b/doc/source/first_contribution_swift.rst index 632e1f3f0b..5eddf62ab5 100644 --- a/doc/source/first_contribution_swift.rst +++ b/doc/source/first_contribution_swift.rst @@ -6,10 +6,12 @@ First Contribution to Swift Getting Swift ------------- -Swift's source code is hosted on github and managed with git. The current -trunk can be checked out like this: +.. highlight:: none - ``git clone https://github.com/openstack/swift.git`` +Swift's source code is hosted on GitHub and managed with git. The current +trunk can be checked out like this:: + + git clone https://github.com/openstack/swift.git This will clone the Swift repository under your account. @@ -25,7 +27,7 @@ Prebuilt packages for Ubuntu and RHEL variants are available. Source Control Setup -------------------- -Swift uses ``git`` for source control. The OpenStack `Developer's Guide `_ describes the steps for setting up Git and all the necessary accounts for contributing code to Swift. @@ -41,13 +43,13 @@ changes to Swift. Testing ------- -The :doc:`Development Guidelines ` describes the testing +The :doc:`Development Guidelines ` describe the testing requirements before submitting Swift code. In summary, you can execute tox from the swift home directory (where you -checked out the source code): +checked out the source code):: - ``tox`` + tox Tox will present test results. 
Notice that in the beginning, it is very common to break many coding style guidelines. @@ -58,65 +60,67 @@ Proposing changes to Swift The OpenStack `Developer's Guide `_ -describes the most common `git` commands that you will need. +describes the most common ``git`` commands that you will need. Following is a list of the commands that you need to know for your first contribution to Swift: -To clone a copy of Swift: +To clone a copy of Swift:: - ``git clone https://github.com/openstack/swift.git`` + git clone https://github.com/openstack/swift.git Under the swift directory, set up the Gerrit repository. The following command -configures the repository to know about Gerrit and makes the Change-Id commit -hook get installed. You only need to do this once: +configures the repository to know about Gerrit and installs the ``Change-Id`` +commit hook. You only need to do this once:: - ``git review -s`` + git review -s -To create your development branch (substitute branch_name for a name of your -choice: +To create your development branch (substitute ``branch_name`` for a name of +your choice):: - ``git checkout -b `` + git checkout -b -To check the files that have been updated in your branch: +To check the files that have been updated in your branch:: - ``git status`` + git status -To check the differences between your branch and the repository: +To check the differences between your branch and the repository:: - ``git diff`` + git diff -Assuming you have not added new files, you commit all your changes using: +Assuming you have not added new files, you commit all your changes using:: - ``git commit -a`` + git commit -a Read the `Summary of Git commit message structure `_ for best practices on writing the commit message. When you are ready to send -your changes for review use: +your changes for review use:: - ``git review`` + git review If successful, the Git response message will contain a URL you can use to track your changes. 
If you need to make further changes to the same review, you can commit them -using: +using:: - ``git commit -a --amend`` + git commit -a --amend This will commit the changes under the same set of changes you issued earlier. Notice that in order to send your latest version for review, you will still -need to call: +need to call:: - ``git review`` + git review --------------------- Tracking your changes --------------------- -After you proposed your changes to Swift, you can track the review in: - -* ``_ +After proposing changes to Swift, you can track them at +https://review.openstack.org. After logging in, you will see a dashboard of +"Outgoing reviews" for changes you have proposed, "Incoming reviews" for +changes you are reviewing, and "Recently closed" changes for which you were +either a reviewer or owner. .. _post-rebase-instructions: @@ -126,23 +130,22 @@ Post rebase instructions After rebasing, the following steps should be performed to rebuild the swift installation. Note that these commands should be performed from the root of the -swift repo directory (e.g. $HOME/swift/): +swift repo directory (e.g. 
``$HOME/swift/``):: - ``sudo python setup.py develop`` - - ``sudo pip install -r test-requirements.txt`` + sudo python setup.py develop + sudo pip install -r test-requirements.txt If using TOX, depending on the changes made during the rebase, you may need to rebuild the TOX environment (generally this will be the case if test-requirements.txt was updated such that a new version of a package is -required), this can be accomplished using the '-r' argument to the TOX cli: +required); this can be accomplished using the ``-r`` argument to the TOX CLI:: - ``tox -r`` + tox -r You can include any of the other TOX arguments as well, for example, to run the -pep8 suite and rebuild the TOX environment the following can be used: +pep8 suite and rebuild the TOX environment the following can be used:: - ``tox -r -e pep8`` + tox -r -e pep8 The rebuild option only needs to be specified once for a particular build (e.g. pep8), that is further invocations of the same build will not require this @@ -153,9 +156,9 @@ Troubleshooting --------------- You may run into the following errors when starting Swift if you rebase -your commit using: +your commit using:: - ``git rebase`` + git rebase .. code-block:: python @@ -171,7 +174,8 @@ your commit using: File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: swift==2.3.1.devXXX - (where XXX represents a dev version of Swift). + +(where XXX represents a dev version of Swift). .. code-block:: python @@ -198,7 +202,7 @@ your commit using: for prot in protocol_options] or '(no entry points)')))) LookupError: Entry point 'versioned_writes' not found in egg 'swift' (dir: /home/swift/swift; protocols: paste.filter_factory, paste.filter_app_factory; entry_points: ) -This happens because `git rebase` will retrieve code for a different version of -Swift in the development stream, but the start scripts under `/usr/local/bin` have -not been updated. 
The solution is to follow the steps described in the -:ref:`post-rebase-instructions` section. +This happens because ``git rebase`` will retrieve code for a different version +of Swift in the development stream, but the start scripts under +``/usr/local/bin`` have not been updated. The solution is to follow the steps +described in the :ref:`post-rebase-instructions` section. diff --git a/doc/source/getting_started.rst b/doc/source/getting_started.rst index 8308c130ae..e5ab687924 100644 --- a/doc/source/getting_started.rst +++ b/doc/source/getting_started.rst @@ -35,7 +35,7 @@ following docs will be useful: CLI client and SDK library -------------------------- -There are many clients in the `ecosystem `_. The official CLI +There are many clients in the :ref:`ecosystem `. The official CLI and SDK is python-swiftclient. * `Source code `_ diff --git a/doc/source/howto_installmultinode.rst b/doc/source/howto_installmultinode.rst index b1b75326c5..4116d8bac3 100644 --- a/doc/source/howto_installmultinode.rst +++ b/doc/source/howto_installmultinode.rst @@ -9,5 +9,7 @@ for the most up-to-date documentation. Current Install Guides ---------------------- - * `Object Storage installation guide for OpenStack Ocata `_ - * `Object Storage installation guide for OpenStack Newton `_ +* `Object Storage installation guide for OpenStack Ocata + `__ +* `Object Storage installation guide for OpenStack Newton + `__
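The mechanical pattern running through this patch, swapping hard-coded ``docs.openstack.org`` URLs for internal ``:doc:`` and ``:ref:`` references, can be surfaced with a small script before review. A minimal sketch in Python; the helper name and the regex are illustrative assumptions, not part of Swift's tooling:

```python
import re

# rST external-link syntax `text <url>`_ pointing at the Swift developer
# docs, i.e. the links this patch converts to internal :doc:/:ref: targets.
SWIFT_DOC_LINK = re.compile(
    r"`(?P<text>[^`<]+?)\s*<https?://docs\.openstack\.org/developer/swift/"
    r"(?P<page>[\w./-]+)\.html[^>]*>`_"
)

def find_external_swift_links(rst_text):
    """Return (link text, target page) pairs that are candidates for
    conversion to internal :doc: references."""
    return [(m.group("text"), m.group("page"))
            for m in SWIFT_DOC_LINK.finditer(rst_text)]
```

Running this over ``doc/source/*.rst`` would list remaining candidates; picking the right internal target for each link still takes a human.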
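The StatsD reporting touched in the admin guide hunks above, enabled via ``log_statsd_host``, amounts to fire-and-forget UDP datagrams in StatsD's plain-text line format. A rough sketch of what one counter emission looks like on the wire; this illustrates the protocol only and is not Swift's actual sender:

```python
import socket

def send_statsd_counter(host, port, name, value=1, sample_rate=1.0):
    """Send one StatsD counter datagram; return the payload string.

    Fire-and-forget: no response is expected, and a dropped datagram is
    simply a lost sample.
    """
    payload = "%s:%d|c" % (name, value)
    if sample_rate < 1.0:
        # Tell the server this counter was sampled so it can rescale.
        payload += "|@%s" % sample_rate
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload.encode("utf-8"), (host, port))
    finally:
        sock.close()
    return payload
```

Because delivery is plain UDP, metric emission stays off the request's critical path, which is the trade-off the integrated StatsD support makes.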