Imported Translations from Zanata

For more information about this automatic import, see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I96b7f1e5391fe1cedcf7d23e0245881676ba3e0e
OpenStack Proposal Bot 2020-02-08 08:20:40 +00:00
parent a7fcc03112
commit 549a114917


@@ -4,11 +4,11 @@ msgid ""
 msgstr ""
 "Project-Id-Version: openstack-helm\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2020-01-13 21:14+0000\n"
+"POT-Creation-Date: 2020-02-04 17:43+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"PO-Revision-Date: 2020-01-15 10:36+0000\n"
+"PO-Revision-Date: 2020-02-07 09:39+0000\n"
 "Last-Translator: Andi Chandler <andi@gowling.com>\n"
 "Language-Team: English (United Kingdom)\n"
 "Language: en_GB\n"
@@ -127,6 +127,13 @@ msgstr "After applying labels, let's check status"
 msgid "After reboot (node voyager3), the node status changes to ``NotReady``."
 msgstr "After reboot (node voyager3), the node status changes to ``NotReady``."
+msgid ""
+"After the host is down (node voyager3), the node status changes to "
+"``NotReady``."
+msgstr ""
+"After the host is down (node voyager3), the node status changes to "
+"``NotReady``."
 msgid "All PODs are in running state."
 msgstr "All PODs are in running state."
@@ -188,6 +195,9 @@ msgstr "Capture final Openstack pod statuses:"
 msgid "Case: A disk fails"
 msgstr "Case: A disk fails"
+msgid "Case: A host machine where ceph-mon is running is down"
+msgstr "Case: A host machine where ceph-mon is running is down"
 msgid "Case: One host machine where ceph-mon is running is rebooted"
 msgstr "Case: One host machine where ceph-mon is running is rebooted"
@@ -228,6 +238,15 @@ msgstr "Ceph status shows HEALTH_WARN as expected"
 msgid "Ceph status shows that MON and OSD count has been increased."
 msgstr "Ceph status shows that MON and OSD count has been increased."
+msgid ""
+"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
+"quorum. Also, 6 osds running on ``voyager3`` are down (i.e., 18 out of 24 "
+"osds are up). Some placement groups become degraded and undersized."
+msgstr ""
+"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
+"quorum. Also, 6 osds running on ``voyager3`` are down (i.e., 18 out of 24 "
+"osds are up). Some placement groups become degraded and undersized."
 msgid ""
 "Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
 "quorum. Also, six osds running on ``voyager3`` are down; i.e., 18 osds are "
@@ -524,6 +543,15 @@ msgstr ""
 "com/openstack/openstack-helm/tree/master/ceph>`_ is to show symptoms of "
 "software/hardware failure and provide the solutions."
+msgid ""
+"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
+"again. Also, Ceph pods are restarted automatically. Ceph status shows that "
+"the monitor running on ``voyager3`` is now in quorum."
+msgstr ""
+"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
+"again. Also, Ceph pods are restarted automatically. Ceph status shows that "
+"the monitor running on ``voyager3`` is now in quorum."
 msgid ""
 "The node status of ``voyager3`` changes to ``Ready`` after the node is up "
 "again. Also, Ceph pods are restarted automatically. The Ceph status shows "
@@ -552,6 +580,11 @@ msgstr ""
 "This document captures steps and result from node reduction and expansion as "
 "well as Ceph recovery."
+msgid ""
+"This is for the case when a host machine (where ceph-mon is running) is down."
+msgstr ""
+"This is for the case when a host machine (where ceph-mon is running) is down."
 msgid ""
 "This is to test a scenario when a disk failure happens. We monitor the ceph "
 "status and notice one OSD (osd.2) on voyager4 which has ``/dev/sdh`` as a "