Imported Translations from Zanata

For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I7e149c96ee4ae090e47f775000dc1e3e55a3efc2
doc/source/locale/en_GB/LC_MESSAGES/doc-testing.po (new file, 731 lines)

# Andi Chandler <andi@gowling.com>, 2019. #zanata
# Andi Chandler <andi@gowling.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: openstack-helm\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-01-13 21:14+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-01-15 10:36+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"

msgid ""
"1) Initial Ceph and OpenStack deployment: Install Ceph and OpenStack charts "
"on 3 nodes (mnode1, mnode2 and mnode3). Capture Ceph cluster status as well "
"as K8s PODs status."
msgstr ""
"1) Initial Ceph and OpenStack deployment: Install Ceph and OpenStack charts "
"on 3 nodes (mnode1, mnode2 and mnode3). Capture Ceph cluster status as well "
"as K8s PODs status."

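# A minimal sketch of capturing the statuses referenced above; the "ceph" and
# "openstack" namespace names and the MON pod name are assumptions:
#
#   kubectl -n ceph exec ceph-mon-xxxxx -- ceph -s   # Ceph cluster status
#   kubectl get pods -n ceph                         # K8s POD status (Ceph)
#   kubectl get pods -n openstack                    # K8s POD status (OpenStack)
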
msgid ""
"1) Node reduction: Shut down 1 of 3 nodes to simulate node failure. Capture "
"the effect of node failure on Ceph as well as other OpenStack services that "
"are using Ceph."
msgstr ""
"1) Node reduction: Shut down 1 of 3 nodes to simulate node failure. Capture "
"the effect of node failure on Ceph as well as other OpenStack services that "
"are using Ceph."

msgid "1) Remove out of quorum MON:"
msgstr "1) Remove out of quorum MON:"

msgid ""
"2) Node expansion: Apply Ceph and OpenStack related labels to another "
"unused K8s node. Node expansion should provide more resources for K8s to "
"schedule PODs for Ceph and OpenStack services."
msgstr ""
"2) Node expansion: Apply Ceph and OpenStack related labels to another "
"unused K8s node. Node expansion should provide more resources for K8s to "
"schedule PODs for Ceph and OpenStack services."

msgid ""
"2) Node reduction (failure): Shut down 1 of 3 nodes (mnode3) to test node "
"failure. This should cause the Ceph cluster to go into HEALTH_WARN state as "
"it has lost 1 MON and 1 OSD. Capture Ceph cluster status as well as K8s "
"PODs status."
msgstr ""
"2) Node reduction (failure): Shut down 1 of 3 nodes (mnode3) to test node "
"failure. This should cause the Ceph cluster to go into HEALTH_WARN state as "
"it has lost 1 MON and 1 OSD. Capture Ceph cluster status as well as K8s "
"PODs status."

msgid "2) Remove down OSD from Ceph cluster:"
msgstr "2) Remove down OSD from Ceph cluster:"

msgid ""
"3) Fix Ceph Cluster: After node expansion, perform maintenance on Ceph "
"cluster to ensure quorum is reached and Ceph is HEALTH_OK."
msgstr ""
"3) Fix Ceph Cluster: After node expansion, perform maintenance on Ceph "
"cluster to ensure quorum is reached and Ceph is HEALTH_OK."

msgid ""
"3) Node expansion: Add Ceph and OpenStack related labels to 4th node "
"(mnode4) for expansion. Ceph cluster would show new MON and OSD being added "
"to cluster. However, Ceph cluster would continue to show HEALTH_WARN "
"because 1 MON and 1 OSD are still missing."
msgstr ""
"3) Node expansion: Add Ceph and OpenStack related labels to 4th node "
"(mnode4) for expansion. Ceph cluster would show new MON and OSD being added "
"to cluster. However, Ceph cluster would continue to show HEALTH_WARN "
"because 1 MON and 1 OSD are still missing."

msgid ""
"4) Ceph cluster recovery: Perform Ceph maintenance to make Ceph cluster "
"HEALTH_OK. Remove lost MON and OSD from Ceph cluster."
msgstr ""
"4) Ceph cluster recovery: Perform Ceph maintenance to make Ceph cluster "
"HEALTH_OK. Remove lost MON and OSD from Ceph cluster."

msgid ""
"4. Replace the failed disk with a new one. If you repair (not replace) the "
"failed disk, you may need to run the following:"
msgstr ""
"4. Replace the failed disk with a new one. If you repair (not replace) the "
"failed disk, you may need to run the following:"

msgid "6 Nodes (VM based) env"
msgstr "6 Nodes (VM based) env"

msgid ""
"Above output shows Ceph cluster in HEALTH_OK with all OSDs and MONs up and "
"running."
msgstr ""
"Above output shows Ceph cluster in HEALTH_OK with all OSDs and MONs up and "
"running."

msgid "Above output shows that ``osd.1`` is down."
msgstr "Above output shows that ``osd.1`` is down."

msgid "Adding Tests"
msgstr "Adding Tests"

msgid ""
"Additional information on Helm tests for OpenStack-Helm and how to execute "
"these tests locally via the scripts used in the gate can be found in the "
"gates_ directory."
msgstr ""
"Additional information on Helm tests for OpenStack-Helm and how to execute "
"these tests locally via the scripts used in the gate can be found in the "
"gates_ directory."

msgid ""
"After 10+ minutes, Ceph starts rebalancing with one node lost (i.e., 6 osds "
"down) and the status stabilizes with 18 osds."
msgstr ""
"After 10+ minutes, Ceph starts rebalancing with one node lost (i.e., 6 osds "
"down) and the status stabilises with 18 osds."

msgid "After applying labels, let's check status"
msgstr "After applying labels, let's check status"

msgid "After reboot (node voyager3), the node status changes to ``NotReady``."
msgstr "After reboot (node voyager3), the node status changes to ``NotReady``."

msgid "All PODs are in running state."
msgstr "All PODs are in running state."

msgid ""
"All tests should be added to the gates during development, and are required "
"for any new service charts prior to merging. All Helm tests should be "
"included as part of the deployment script. An example of this can be seen "
"in this script_."
msgstr ""
"All tests should be added to the gates during development, and are required "
"for any new service charts prior to merging. All Helm tests should be "
"included as part of the deployment script. An example of this can be seen "
"in this script_."

msgid "Any Helm tests associated with a chart can be run by executing:"
msgstr "Any Helm tests associated with a chart can be run by executing:"

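# A minimal sketch, assuming Helm v2 (as used by OpenStack-Helm at the time)
# and a hypothetical release named "keystone":
#
#   helm test keystone
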
msgid ""
"Any templates for Helm tests submitted should follow the philosophies "
"applied in the other templates. These include: use of overrides where "
"appropriate, use of endpoint lookups and other common functionality in helm-"
"toolkit, and mounting any required scripting templates via the configmap-bin "
"template for the service chart. If Rally tests are not appropriate or "
"adequate for a service chart, any additional tests should be documented "
"appropriately and adhere to the same expectations."
msgstr ""
"Any templates for Helm tests submitted should follow the philosophies "
"applied in the other templates. These include: use of overrides where "
"appropriate, use of endpoint lookups and other common functionality in helm-"
"toolkit, and mounting any required scripting templates via the configmap-bin "
"template for the service chart. If Rally tests are not appropriate or "
"adequate for a service chart, any additional tests should be documented "
"appropriately and adhere to the same expectations."

msgid ""
"As shown above, Ceph status is now HEALTH_OK and shows 3 MONs available."
msgstr ""
"As shown above, Ceph status is now HEALTH_OK and shows 3 MONs available."

msgid ""
"As shown in Ceph status above, ``osd: 4 osds: 3 up, 3 in``, 1 of 4 OSDs is "
"still down. Let's remove that OSD."
msgstr ""
"As shown in Ceph status above, ``osd: 4 osds: 3 up, 3 in``, 1 of 4 OSDs is "
"still down. Let's remove that OSD."

msgid "Capture Ceph pod statuses."
msgstr "Capture Ceph pod statuses."

msgid "Capture OpenStack pod statuses."
msgstr "Capture OpenStack pod statuses."

msgid "Capture final Ceph pod statuses:"
msgstr "Capture final Ceph pod statuses:"

msgid "Capture final OpenStack pod statuses:"
msgstr "Capture final OpenStack pod statuses:"

msgid "Case: A disk fails"
msgstr "Case: A disk fails"

msgid "Case: One host machine where ceph-mon is running is rebooted"
msgstr "Case: One host machine where ceph-mon is running is rebooted"

msgid "Caveats:"
msgstr "Caveats:"

msgid "Ceph - Node Reduction, Expansion and Ceph Recovery"
msgstr "Ceph - Node Reduction, Expansion and Ceph Recovery"

msgid "Ceph CephFS provisioner Docker images."
msgstr "Ceph CephFS provisioner Docker images."

msgid "Ceph Luminous point release images for Ceph components"
msgstr "Ceph Luminous point release images for Ceph components"

msgid "Ceph MON and OSD PODs got scheduled on mnode4 node."
msgstr "Ceph MON and OSD PODs got scheduled on mnode4 node."

msgid "Ceph RBD provisioner Docker images."
msgstr "Ceph RBD provisioner Docker images."

msgid ""
"Ceph can be upgraded without downtime for OpenStack components in a "
"multinode env."
msgstr ""
"Ceph can be upgraded without downtime for OpenStack components in a "
"multinode env."

msgid "Ceph cluster is in HEALTH_OK state with 3 MONs and 3 OSDs."
msgstr "Ceph cluster is in HEALTH_OK state with 3 MONs and 3 OSDs."

msgid "Ceph status shows 1 Ceph MON and 1 Ceph OSD missing."
msgstr "Ceph status shows 1 Ceph MON and 1 Ceph OSD missing."

msgid "Ceph status shows HEALTH_WARN as expected"
msgstr "Ceph status shows HEALTH_WARN as expected"

msgid "Ceph status shows that MON and OSD count has been increased."
msgstr "Ceph status shows that MON and OSD count has been increased."

msgid ""
"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
"quorum. Also, six osds running on ``voyager3`` are down; i.e., 18 osds are "
"up out of 24 osds."
msgstr ""
"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
"quorum. Also, six osds running on ``voyager3`` are down; i.e., 18 osds are "
"up out of 24 osds."

msgid "Ceph status still shows HEALTH_WARN as one MON and OSD are still down."
msgstr "Ceph status still shows HEALTH_WARN as one MON and OSD are still down."

msgid "Ceph version: 12.2.3"
msgstr "Ceph version: 12.2.3"

msgid "Check Ceph Pods"
msgstr "Check Ceph Pods"

msgid "Check the version of each Ceph component."
msgstr "Check the version of each Ceph component."

msgid "Check which images Provisioners and Mon-Check PODs are using"
msgstr "Check which images Provisioners and Mon-Check PODs are using"

msgid "Cluster size: 4 host machines"
msgstr "Cluster size: 4 host machines"

msgid "Conclusion:"
msgstr "Conclusion:"

msgid "Confirm each Ceph component's version."
msgstr "Confirm each Ceph component's version."

msgid "Continue with OSH multinode guide to install other OpenStack charts."
msgstr "Continue with OSH multinode guide to install other OpenStack charts."

msgid "Deploy and Validate Ceph"
msgstr "Deploy and Validate Ceph"

msgid "Disk Failure"
msgstr "Disk Failure"

msgid "Docker Images:"
msgstr "Docker Images:"

msgid ""
"Every OpenStack-Helm chart should include any required Helm tests necessary "
"to provide a sanity check for the OpenStack service. Information on using "
"the Helm testing framework can be found in the Helm repository_. Currently, "
"the Rally testing framework is used to provide these checks for the core "
"services. The Keystone Helm test template can be used as a reference, and "
"can be found here_."
msgstr ""
"Every OpenStack-Helm chart should include any required Helm tests necessary "
"to provide a sanity check for the OpenStack service. Information on using "
"the Helm testing framework can be found in the Helm repository_. Currently, "
"the Rally testing framework is used to provide these checks for the core "
"services. The Keystone Helm test template can be used as a reference, and "
"can be found here_."

msgid "Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):"
msgstr "Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):"

msgid "First, run ``ceph osd tree`` command to get list of OSDs."
msgstr "First, run ``ceph osd tree`` command to get list of OSDs."

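# A minimal sketch; run wherever a Ceph admin keyring is available:
#
#   ceph osd tree   # lists each OSD with its host, weight and up/down status
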
msgid "Follow all steps from OSH multinode guide with the changes below."
msgstr "Follow all steps from OSH multinode guide with the changes below."

msgid ""
"Followed OSH multinode guide steps to install Ceph and OpenStack charts up "
"to Cinder."
msgstr ""
"Followed OSH multinode guide steps to install Ceph and OpenStack charts up "
"to Cinder."

msgid ""
"Followed OSH multinode guide steps to set up nodes and install K8s cluster"
msgstr ""
"Followed OSH multinode guide steps to set up nodes and install K8s cluster"

msgid "Following is a partial excerpt from the script to show the changes."
msgstr "Following is a partial excerpt from the script to show the changes."

msgid ""
"From the Kubernetes cluster, remove the failed OSD pod, which is running on "
"``voyager4``:"
msgstr ""
"From the Kubernetes cluster, remove the failed OSD pod, which is running on "
"``voyager4``:"

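# A minimal sketch; the pod name "ceph-osd-xxxxx" and the "ceph" namespace
# are assumptions:
#
#   kubectl -n ceph delete pod ceph-osd-xxxxx
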
msgid "Hardware Failure"
msgstr "Hardware Failure"

msgid "Helm Tests"
msgstr "Helm Tests"

msgid "Host Failure"
msgstr "Host Failure"

msgid ""
"In this test env, MariaDB chart is deployed with only 1 replica. In order "
"to test properly, the node with MariaDB server POD (mnode2) should not be "
"shut down."
msgstr ""
"In this test env, MariaDB chart is deployed with only 1 replica. In order "
"to test properly, the node with MariaDB server POD (mnode2) should not be "
"shut down."

msgid "In this test env, ``mnode3`` is out of quorum."
msgstr "In this test env, ``mnode3`` is out of quorum."

msgid ""
"In this test env, each node has Ceph and OpenStack related PODs. Due to "
"this, shutting down a node will cause issues with Ceph as well as OpenStack "
"services. These POD-level failures are captured in the subsequent "
"screenshots."
msgstr ""
"In this test env, each node has Ceph and OpenStack related PODs. Due to "
"this, shutting down a node will cause issues with Ceph as well as OpenStack "
"services. These POD-level failures are captured in the subsequent "
"screenshots."

msgid "In this test env, let's shut down the ``mnode3`` node."
msgstr "In this test env, let's shut down the ``mnode3`` node."

msgid ""
"In this test env, let's use ``mnode4`` and apply Ceph and OpenStack related "
"labels."
msgstr ""
"In this test env, let's use ``mnode4`` and apply Ceph and OpenStack related "
"labels."

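# A minimal sketch; the label keys follow common OpenStack-Helm conventions
# but are assumptions, not taken from this file:
#
#   kubectl label nodes mnode4 ceph-mon=enabled ceph-osd=enabled \
#       openstack-control-plane=enabled openstack-compute-node=enabled
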
msgid ""
"In this test env, since the out of quorum MON is no longer available due to "
"node failure, we can proceed with removing it from the Ceph cluster."
msgstr ""
"In this test env, since the out of quorum MON is no longer available due to "
"node failure, we can proceed with removing it from the Ceph cluster."

msgid "Install Ceph charts (version 12.2.4)"
msgstr "Install Ceph charts (version 12.2.4)"

msgid "Install OSH components as per OSH multinode guide."
msgstr "Install OSH components as per OSH multinode guide."

msgid "Install OpenStack charts"
msgstr "Install OpenStack charts"

msgid "Kubernetes version: 1.10.5"
msgstr "Kubernetes version: 1.10.5"

msgid "Let's add more resources for K8s to schedule PODs on."
msgstr "Let's add more resources for K8s to schedule PODs on."

msgid ""
"Make sure only 3 nodes (mnode1, mnode2, mnode3) have Ceph and OpenStack "
"related labels. K8s would only schedule PODs on these 3 nodes."
msgstr ""
"Make sure only 3 nodes (mnode1, mnode2, mnode3) have Ceph and OpenStack "
"related labels. K8s would only schedule PODs on these 3 nodes."

msgid "Mission"
msgstr "Mission"

msgid ""
"Note: To find the daemonset associated with a failed OSD, check the "
"following:"
msgstr ""
"Note: To find the daemonset associated with a failed OSD, check the "
"following:"

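# A minimal sketch; the "ceph" namespace is an assumption:
#
#   kubectl -n ceph get daemonset -o wide   # match the daemonset to the failed OSD's node
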
msgid ""
"Now that we have added a new node for Ceph and OpenStack PODs, let's "
"perform maintenance on the Ceph cluster."
msgstr ""
"Now that we have added a new node for Ceph and OpenStack PODs, let's "
"perform maintenance on the Ceph cluster."

msgid "Number of disks: 24 (= 6 disks per host * 4 hosts)"
msgstr "Number of disks: 24 (= 6 disks per host * 4 hosts)"

msgid "OSD count is set to 3 based on env setup."
msgstr "OSD count is set to 3 based on env setup."

msgid ""
"Only 3 nodes will have Ceph and OpenStack related labels. Each of these 3 "
"nodes will have one MON and one OSD running on them."
msgstr ""
"Only 3 nodes will have Ceph and OpenStack related labels. Each of these 3 "
"nodes will have one MON and one OSD running on them."

msgid "OpenStack PODs that were scheduled on mnode3 also show NodeLost/Unknown."
msgstr "OpenStack PODs that were scheduled on mnode3 also show NodeLost/Unknown."

msgid "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"
msgstr "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"

msgid ""
"Our focus lies on resiliency for various failure scenarios but not on "
"performance or stress testing."
msgstr ""
"Our focus lies on resiliency for various failure scenarios but not on "
"performance or stress testing."

msgid "PODs that were scheduled on the mnode3 node have a status of NodeLost/Unknown."
msgstr ""
"PODs that were scheduled on the mnode3 node have a status of NodeLost/"
"Unknown."

msgid "Recovery:"
msgstr "Recovery:"

msgid ""
"Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:"
msgstr ""
"Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:"

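# A minimal sketch for Ceph Luminous (12.2.x), using OSD ID 2 from the
# example above:
#
#   ceph osd purge 2 --yes-i-really-mean-it
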
msgid "Resiliency Tests for OpenStack-Helm/Ceph"
msgstr "Resiliency Tests for OpenStack-Helm/Ceph"

msgid "Run ``ceph osd purge`` command to remove OSD from Ceph cluster."
msgstr "Run ``ceph osd purge`` command to remove OSD from Ceph cluster."

msgid "Running Tests"
msgstr "Running Tests"

msgid "Setup:"
msgstr "Setup:"

msgid ""
"Showing partial output from the kubectl describe command to show which "
"image the Docker container is using"
msgstr ""
"Showing partial output from the kubectl describe command to show which "
"image the Docker container is using"

msgid ""
"Shut down 1 of 3 nodes (mnode1, mnode2, mnode3) to simulate node failure/"
"loss."
msgstr ""
"Shut down 1 of 3 nodes (mnode1, mnode2, mnode3) to simulate node failure/"
"loss."

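# A minimal sketch; reaching the node over ssh is an assumption about the
# test env:
#
#   ssh mnode3 sudo shutdown -h now
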
msgid ""
"Since the node that was shut down earlier had both Ceph and OpenStack PODs, "
"mnode4 should get Ceph and OpenStack related labels as well."
msgstr ""
"Since the node that was shut down earlier had both Ceph and OpenStack PODs, "
"mnode4 should get Ceph and OpenStack related labels as well."

msgid "Software Failure"
msgstr "Software Failure"

msgid "Solution:"
msgstr "Solution:"

msgid "Start a new OSD pod on ``voyager4``:"
msgstr "Start a new OSD pod on ``voyager4``:"

msgid "Step 1: Initial Ceph and OpenStack deployment"
msgstr "Step 1: Initial Ceph and OpenStack deployment"

msgid "Step 2: Node reduction (failure):"
msgstr "Step 2: Node reduction (failure):"

msgid "Step 3: Node Expansion"
msgstr "Step 3: Node Expansion"

msgid "Step 4: Ceph cluster recovery"
msgstr "Step 4: Ceph cluster recovery"

msgid "Steps:"
msgstr "Steps:"

msgid "Symptom:"
msgstr "Symptom:"

msgid "Test Environment"
msgstr "Test Environment"

msgid "Test Scenarios:"
msgstr "Test Scenarios:"

msgid "Testing"
msgstr "Testing"

msgid "Testing Expectations"
msgstr "Testing Expectations"

msgid ""
"The goal of our resiliency tests for `OpenStack-Helm/Ceph <https://github."
"com/openstack/openstack-helm/tree/master/ceph>`_ is to show symptoms of "
"software/hardware failure and provide the solutions."
msgstr ""
"The goal of our resiliency tests for `OpenStack-Helm/Ceph <https://github."
"com/openstack/openstack-helm/tree/master/ceph>`_ is to show symptoms of "
"software/hardware failure and provide the solutions."

msgid ""
"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
"again. Also, Ceph pods are restarted automatically. The Ceph status shows "
"that the monitor running on ``voyager3`` is now in quorum and 6 osds get "
"back up (i.e., a total of 24 osds are up)."
msgstr ""
"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
"again. Also, Ceph pods are restarted automatically. The Ceph status shows "
"that the monitor running on ``voyager3`` is now in quorum and 6 osds get "
"back up (i.e., a total of 24 osds are up)."

msgid ""
"The output of the Helm tests can be seen by looking at the logs of the pod "
"created by the Helm tests. These logs can be viewed with:"
msgstr ""
"The output of the Helm tests can be seen by looking at the logs of the pod "
"created by the Helm tests. These logs can be viewed with:"

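# A minimal sketch; the test pod name "keystone-test" and the "openstack"
# namespace are assumptions:
#
#   kubectl -n openstack logs keystone-test
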
msgid "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``."
msgstr "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``."

msgid ""
"This document captures steps and results from node reduction and expansion "
"as well as Ceph recovery."
msgstr ""
"This document captures steps and results from node reduction and expansion "
"as well as Ceph recovery."

msgid ""
"This is to test a scenario when a disk failure happens. We monitor the Ceph "
"status and notice that one OSD (osd.2) on voyager4, which has ``/dev/sdh`` "
"as a backend, is down."
msgstr ""
"This is to test a scenario when a disk failure happens. We monitor the Ceph "
"status and notice that one OSD (osd.2) on voyager4, which has ``/dev/sdh`` "
"as a backend, is down."

msgid "To replace the failed OSD, execute the following procedure:"
msgstr "To replace the failed OSD, execute the following procedure:"

msgid "Update Ceph Client chart with new overrides:"
msgstr "Update Ceph Client chart with new overrides:"

msgid "Update Ceph Mon chart with new overrides"
msgstr "Update Ceph Mon chart with new overrides"

msgid "Update Ceph OSD chart with new overrides:"
msgstr "Update Ceph OSD chart with new overrides:"

msgid "Update Ceph Provisioners chart with new overrides:"
msgstr "Update Ceph Provisioners chart with new overrides:"

msgid ""
"Update Ceph install script ``./tools/deployment/multinode/030-ceph.sh`` to "
"add ``images:`` section in overrides as shown below."
msgstr ""
"Update Ceph install script ``./tools/deployment/multinode/030-ceph.sh`` to "
"add ``images:`` section in overrides as shown below."

msgid ""
"Update the image section in the new overrides file ``ceph-update.yaml`` as "
"shown below"
msgstr ""
"Update the image section in the new overrides file ``ceph-update.yaml`` as "
"shown below"

msgid "Upgrade Ceph charts to update version"
msgstr "Upgrade Ceph charts to update version"

msgid ""
"Upgrade Ceph charts to version 12.2.5 by updating Docker images in "
"overrides."
msgstr ""
"Upgrade Ceph charts to version 12.2.5 by updating Docker images in "
"overrides."

msgid ""
"Use Ceph override file ``ceph.yaml`` that was generated previously and "
"update images section as below"
msgstr ""
"Use Ceph override file ``ceph.yaml`` that was generated previously and "
"update images section as below"

msgid ""
"Using ``ceph mon_status`` and ``ceph -s`` commands, confirm ID of MON that "
"is out of quorum."
msgstr ""
"Using ``ceph mon_status`` and ``ceph -s`` commands, confirm ID of MON that "
"is out of quorum."

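# A minimal sketch; "mnode3" as the out-of-quorum MON matches the example
# above:
#
#   ceph mon_status          # confirm which MON is out of quorum
#   ceph -s                  # overall cluster health
#   ceph mon remove mnode3   # drop the lost MON from the monmap
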
msgid ""
"Validate the Ceph status (i.e., one OSD is added, so the total number of "
"OSDs becomes 24):"
msgstr ""
"Validate the Ceph status (i.e., one OSD is added, so the total number of "
"OSDs becomes 24):"

msgid "`Disk failure <./disk-failure.html>`_"
msgstr "`Disk failure <./disk-failure.html>`_"

msgid "`Host failure <./host-failure.html>`_"
msgstr "`Host failure <./host-failure.html>`_"

msgid "`Monitor failure <./monitor-failure.html>`_"
msgstr "`Monitor failure <./monitor-failure.html>`_"

msgid "`OSD failure <./osd-failure.html>`_"
msgstr "`OSD failure <./osd-failure.html>`_"

msgid "``Ceph MON Status:``"
msgstr "``Ceph MON Status:``"

msgid "``Ceph MON Status``"
msgstr "``Ceph MON Status``"

msgid "``Ceph PODs:``"
msgstr "``Ceph PODs:``"

msgid "``Ceph PODs``"
msgstr "``Ceph PODs``"

msgid "``Ceph Status:``"
msgstr "``Ceph Status:``"

msgid "``Ceph quorum status:``"
msgstr "``Ceph quorum status:``"

msgid "``Ceph quorum status``"
msgstr "``Ceph quorum status``"

msgid "``Ceph status:``"
msgstr "``Ceph status:``"

msgid "``Ceph status``"
msgstr "``Ceph status``"

msgid "``Check node status:``"
msgstr "``Check node status:``"

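# A minimal sketch of checking node status:
#
#   kubectl get nodes   # a lost node reports NotReady
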
msgid "``Following are PODs scheduled on mnode3 before shutdown:``"
msgstr "``Following are PODs scheduled on mnode3 before shutdown:``"

msgid "``OpenStack PODs:``"
msgstr "``OpenStack PODs:``"

msgid "``OpenStack PODs``"
msgstr "``OpenStack PODs``"

msgid "``Remove MON from Ceph cluster``"
msgstr "``Remove MON from Ceph cluster``"

msgid "``Result/Observation:``"
msgstr "``Result/Observation:``"

msgid ""
"``Results:`` All provisioner pods got terminated at once (same time). Other "
"Ceph pods are running. No interruption to OSH pods."
msgstr ""
"``Results:`` All provisioner pods got terminated at once (same time). Other "
"Ceph pods are running. No interruption to OSH pods."

msgid ""
"``Results:`` Mon pods got updated one by one (rolling updates). Each Mon "
"pod got respawned and was in 1/1 running state before the next Mon pod got "
"updated. Each Mon pod got restarted. Other Ceph pods were not affected by "
"this update. No interruption to OSH pods."
msgstr ""
"``Results:`` Mon pods got updated one by one (rolling updates). Each Mon "
"pod got respawned and was in 1/1 running state before the next Mon pod got "
"updated. Each Mon pod got restarted. Other Ceph pods were not affected by "
"this update. No interruption to OSH pods."

msgid ""
"``Results:`` Rolling updates (one pod at a time). Other Ceph pods are "
"running. No interruption to OSH pods."
msgstr ""
"``Results:`` Rolling updates (one pod at a time). Other Ceph pods are "
"running. No interruption to OSH pods."

msgid ""
"``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` images are "
"used for jobs. ``ceph_mon_check`` has one script that is stable so no need "
"to upgrade."
msgstr ""
"``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` images are "
"used for jobs. ``ceph_mon_check`` has one script that is stable so no need "
"to upgrade."

msgid "``cp /tmp/ceph.yaml ceph-update.yaml``"
msgstr "``cp /tmp/ceph.yaml ceph-update.yaml``"

msgid "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``"

msgid "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``"

msgid "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``"

msgid ""
"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update."
"yaml``"
msgstr ""
"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update."
"yaml``"

msgid "``series of console outputs:``"
msgstr "``series of console outputs:``"