Proposed Service Architecture for Kosmos
Also including the old massive overview .dot file; it's not referenced, but it might be handy in the future.

Change-Id: I68dc4521afe4efb520748fbc323867a045a2eb77
parent 2e72a70820
commit 4ee173564f
@@ -26,6 +26,9 @@ extensions = [
    #'sphinx.ext.intersphinx',
    'oslosphinx',
    'yasfb',
    'sphinx.ext.graphviz',
    'sphinxcontrib.seqdiag',
    'sphinxcontrib.httpdomain',
]

# Feed configuration for yasfb
@@ -14,6 +14,14 @@ Liberty approved specs:

   specs/liberty/*

Mitaka approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/mitaka/*


kosmos-specs Repository Information
===================================================
@@ -1,4 +1,6 @@
pbr>=0.11,<2.0
oslosphinx
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
yasfb>=0.5.1
sphinxcontrib-seqdiag
sphinxcontrib-httpdomain
133 specs/mitaka/sysarch.rst Normal file
@@ -0,0 +1,133 @@
..
 This work is licensed under a Creative Commons Attribution 3.0 Unported License.
 http://creativecommons.org/licenses/by/3.0/legalcode

..
 This template should be in ReSTructured text. The filename in the git
 repository should match the launchpad URL, for example a URL of
 https://blueprints.launchpad.net/kosmos/+spec/awesome-thing should be named
 awesome-thing.rst . Please do not delete any of the sections in this
 template. If you have nothing to say for a whole section, just write: None
 For help with syntax, see http://sphinx-doc.org/rest.html
 To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html

=====================
System Architecture
=====================

Overview
========

The system should be made up of a few small, horizontally scalable services.

The system should be:

- Scalable
- Fault Tolerant
- Use other OpenStack projects as first class citizens

  - For the GSLB appliance drivers the default should be `Designate`_
  - For regional pool members Neutron `LBaaS V2`_ should be the default

.. note:: This is for the MVP. Post-MVP we also need to be able to run as a global service.


Overview Diagram
----------------

.. graphviz:: sysarch/sysarch-diagram-overview.dot


Services
--------

.. list-table::
   :header-rows: 1
   :widths: 20 25 35 45

   * - service name
     - deployment model
     - purpose
     - notes
   * - kosmos-api
     - multiple
     - Configuration of GSLBs
     -
   * - kosmos-conductor
     - multiple - single region
     - Database Access
     -
   * - kosmos-status-checker
     - multiple - global
     - Check status of endpoints
     -
   * - kosmos-engine
     - multiple - single region
     - Business Logic
     -
   * - GSLB Appliance
     - multiple - global (depending on appliance used)
     - Appliance that is used by Kosmos to direct traffic to the correct endpoints.
     - This is Designate in the reference implementation
   * - Endpoints
     - multiple - multiple regions
     - The destination for the traffic that is being load balanced by Kosmos
     - This will be LBaaS V2 Load Balancers by default, but could also be IPs or Hardware Load Balancers

kosmos-api
^^^^^^^^^^

The API would be a WSGI service that implements the previously approved API spec.

It should follow the standard OpenStack patterns (use Keystone, oslo.middleware, and oslo.context).
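
As a rough illustration only (not part of this spec), the entry point could be
as small as a plain WSGI callable wrapped with Keystone token validation; the
option values and response payload below are placeholders:

.. code-block:: python

    import json

    from keystonemiddleware import auth_token
    from oslo_context import context


    def application(environ, start_response):
        # keystonemiddleware validates the token and sets the identity
        # headers that RequestContext.from_environ() reads here.
        ctx = context.RequestContext.from_environ(environ)
        body = json.dumps({'project_id': ctx.project_id}).encode('utf-8')
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [body]


    # Paste-style composition: wrap the bare app with the auth_token filter.
    app = auth_token.filter_factory({})(application)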

kosmos-conductor
^^^^^^^^^^^^^^^^

This will act as a single point of access to the DB, allowing for consistent data validation.

As results are fed in from the status-checkers, the engine will decide if an endpoint is suitable to be included in the pool. As it adds and removes nodes, it will
update both the database and the GSLB appliance via the plugin (e.g. if using the Designate plugin it would remove the endpoint's IP from the record).

.. warning:: The logging component described below may be cut from the MVP.

There is a logging component that could be a separate DB, a time-series DB, a columnar DB, Elasticsearch, or another storage system.
This will store a history for the pools, to allow users to see when and why endpoints were added to or removed from the load balancer.

.. graphviz:: sysarch/sysarch-diagram-conductor.dot
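
A sketch of how the conductor could expose its DB access over RPC, assuming
oslo.messaging is used; the topic name, endpoint method, and storage layer are
hypothetical and only illustrate the "single point of access" idea:

.. code-block:: python

    import oslo_messaging as messaging
    from oslo_config import cfg


    class ConductorEndpoint(object):
        """The only component that talks to the database directly."""

        def __init__(self, storage):
            self.storage = storage  # hypothetical DB-access layer

        def create_pool_member(self, ctx, member):
            # Centralised validation before anything is persisted.
            if not member.get('pool_id'):
                raise ValueError('pool_id is required')
            return self.storage.create_pool_member(member)


    def main(storage):
        transport = messaging.get_rpc_transport(cfg.CONF)
        target = messaging.Target(topic='kosmos-conductor', server='conductor-1')
        server = messaging.get_rpc_server(transport, target,
                                          [ConductorEndpoint(storage)],
                                          executor='threading')
        server.start()
        server.wait()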

kosmos-status-checker
^^^^^^^^^^^^^^^^^^^^^

This is a worker-style service that will run status checks on the defined endpoints.

It will have a plugin interface to allow more checks to be loaded by different plugins, or for custom checks to be written by the deployer (using standard OpenStack plugin patterns); a sketch of loading such a plugin follows the diagram below.

For example, any pool members / endpoints that are LBaaS instances will not be checked via the VIP; instead the checker will call the LBaaS V2 API to get the health of the regional members.

.. graphviz:: sysarch/sysarch-diagram-status-checker.dot
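
A minimal sketch of loading a status-check plugin with stevedore (the usual
OpenStack plugin library); the entry-point namespace, plugin name, and
``check()`` method are assumptions made for illustration, not part of this
spec:

.. code-block:: python

    from stevedore import driver

    # Each check would be registered as a setuptools entry point under a
    # dedicated namespace, so deployers can ship custom checks as packages.
    mgr = driver.DriverManager(
        namespace='kosmos.status_checks',  # hypothetical namespace
        name='http',                       # hypothetical plugin name
        invoke_on_load=True,
    )

    # The worker would then call something like this for each endpoint.
    result = mgr.driver.check('203.0.113.10', port=443)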

kosmos-engine
^^^^^^^^^^^^^

This is where all the business logic resides. This service will consume status results and decide if an endpoint should be added or removed.
It will then use the plugin loaded for the GSLB backend to orchestrate this.

The plugins will allow different types of GSLB appliances / services to be used, and will use standard OpenStack plugin patterns.

.. graphviz:: sysarch/sysarch-diagram-engine.dot

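
One plausible shape for the GSLB backend plugin interface is a small abstract
driver class that each appliance (Designate in the reference implementation,
or a hardware GSLB) implements; the class and method names here are
illustrative assumptions only:

.. code-block:: python

    import abc


    class GSLBDriver(metaclass=abc.ABCMeta):
        """Hypothetical base class for GSLB appliance drivers."""

        @abc.abstractmethod
        def add_member(self, fqdn, address):
            """Start directing traffic for ``fqdn`` to ``address``."""

        @abc.abstractmethod
        def remove_member(self, fqdn, address):
            """Stop directing traffic for ``fqdn`` to ``address``."""


    class DesignateDriver(GSLBDriver):
        """Sketch only: the real driver would call the Designate API."""

        def add_member(self, fqdn, address):
            pass  # add the address to the recordset for fqdn

        def remove_member(self, fqdn, address):
            pass  # remove the address from the recordset for fqdn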

Entity Relationship Diagram
---------------------------

.. graphviz:: sysarch/erd-diagram.dot

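
For illustration, two of the tables from the diagram above written as
SQLAlchemy models; the column types, lengths, and enum values are assumptions,
not decisions made by this spec:

.. code-block:: python

    from sqlalchemy import Column, Enum, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class Pool(Base):
        __tablename__ = 'pools'

        id = Column(String(36), primary_key=True)
        project_id = Column(String(36))
        domain_id = Column(String(36))
        name = Column(String(255))
        description = Column(String(255))


    class PoolMember(Base):
        __tablename__ = 'pool_members'

        id = Column(String(36), primary_key=True)
        project_id = Column(String(36))
        domain_id = Column(String(36))
        pool_id = Column(String(36), ForeignKey('pools.id'))
        name = Column(String(255))
        description = Column(String(255))
        type = Column(Enum('lbaas', 'ip', name='pool_member_type'))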

Example
-------

This is an example flow of information when using Designate and checking the health of a Neutron LBaaS load balancer.
The plugin components are excluded for clarity, but would sit between the "Status Check" and "Engine" components.

.. seqdiag:: sysarch/example-flow.diag

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  kosmos-drivers

Milestones
----------

Target Milestone for completion:
  M-2

.. _Designate: http://wiki.openstack.org/wiki/Designate
.. _LBaaS V2: https://wiki.openstack.org/wiki/Neutron/LBaaS
91 specs/mitaka/sysarch/erd-diagram.dot Normal file
@@ -0,0 +1,91 @@
digraph "models_diagram" {
graph [overlap=false, splines=true, fontname="sans-serif", fontsize=10]
node [fontname="sans-serif", fontsize=10];
edge [fontname="sans-serif", fontsize=10];


"LB" [shape=record, label="{\
loadbalancers|
id :uuid\l\
project_id :string\l\
domain_id :string\l\
name :string\l\
description :string\l\
fqdn :string\l\
zone_name :string\l\
flavor :enum\l\
appliance_id :string\l\
pool_ids :uuid\l\
}"]

"Pools" [shape=record, label="{\
pools|\l\
id :uuid\l\
project_id :string\l\
domain_id :string\l\
name :string\l\
description :string\l\
}"]

"PoolMembers" [shape=record, label="{\
pool_members|\l\
id :uuid\l\
project_id :string\l\
domain_id :string\l\
pool_id :string\l\
name :string\l\
description :string\l\
type :enum\l\
}"]

"PoolMemberParameters" [shape=record, label="{\
pool_member_parameters|\l\
id :uuid\l\
project_id :string\l\
domain_id :string\l\
pool_member_id :string\l\
key :enum\l\
value :string\l\
}"]

"Monitors" [shape=record, label="{\
monitors|\l\
id :uuid\l\
project_id :string\l\
domain_id :string\l\
name :string\l\
description :string\l\
type :string\l\
target :string\l\
auth :bool\l\
}"]

"MonitorParameters" [shape=record, label="{\
monitor_parameters|\l\
id :uuid\l\
project_id :string\l\
domain_id :string\l\
monitor_id :uuid\l\
key :enum\l\
value :string\l\
}"]

"PoolsMonitor" [shape=record, label="{\
pools_monitors|\l\
pool_id :uuid\l\
monitor_id :uuid\l\
}"]


{ rank=same; "LB" "Pools" }
{ rank=same; "PoolsMonitor" "PoolMembers" }
{ rank=same; "Monitors" "PoolMemberParameters" "MonitorParameters" }

"LB" -> "Pools" [arrowtail=odot, arrowhead=crow, dir=both]

"Pools" -> "PoolsMonitor" [arrowtail=odot, arrowhead=crow, dir=both]
"PoolsMonitor" -> "Monitors" [arrowtail=crow, arrowhead=odot, dir=both]
"Monitors" -> "MonitorParameters" [arrowtail=odot, arrowhead=crow, dir=both]
"Pools" -> "PoolMembers" [arrowtail=odot, arrowhead=crow, dir=both]
"PoolMembers" -> "PoolMemberParameters" [arrowtail=odot, arrowhead=crow, dir=both]
}
19 specs/mitaka/sysarch/example-flow.diag Normal file
@@ -0,0 +1,19 @@
seqdiag {

  "End User";
  Engine -> Conductor [label = "Get Load Balancer Information"];
  Engine <-- Conductor;

  Engine -> "Status Check" [label = "Tell Status Check to do a healthcheck"];
  "Status Check" -> "Neutron LBaaS API" [label = "Get the Regional information from LBaaS V2 Status API"];
  "Status Check" <-- "Neutron LBaaS API";
  Engine <-- "Status Check";

  Engine -> "Designate API" [label = "Tell Designate about what IPs to send traffic to"];
  "Designate API" -> "DNS Servers" [label = "Designate updates DNS Server"];
  "Designate API" <-- "DNS Servers";
  Engine <-- "Designate API";

  "End User" -> "DNS Servers" [label = "User asks for IPs of Service"];
  "End User" <-- "DNS Servers" [label = "DNS Returns IPs for the LBaaS VIPs of currently available regions"];
}
34 specs/mitaka/sysarch/sysarch-diagram-conductor.dot Normal file
@@ -0,0 +1,34 @@
digraph "Kosmos" {
  rankdir=TB
  node [fontname="sans-serif", fontsize=10];
  edge [fontname="sans-serif", fontsize=10];
  label="kosmos-conductor";
  overlap="ortho";
  fontname="sans-serif"
  newrank=true

  subgraph cluster_conductor_service {
    fontname="sans-serif"
    label="kosmos-conductor";
    fontsize=12

    Conductor[label="Conductor"];
    Database[label="Database", shape="folder"];
    Logger[label="Logger", shape="folder"];
  }

  { rank=same; "Conductor" "Engine" "WSGI" }

  WSGI [style="invisible"]
  Engine [style="invisible"]

  WSGI -> Conductor [label="kosmos-api"];

  Conductor -> Database [dir="both"];
  Conductor -> Logger [dir="both"];

  Conductor -> Engine [label="kosmos-engine"];
}
34 specs/mitaka/sysarch/sysarch-diagram-engine.dot Normal file
@@ -0,0 +1,34 @@
digraph "Kosmos" {
  rankdir=TB
  node [fontname="sans-serif", fontsize=10];
  edge [fontname="sans-serif", fontsize=10];
  label="kosmos-engine";
  overlap="ortho";
  fontname="sans-serif"
  newrank=true

  Conductor [style="invisible"]

  subgraph cluster_engine_service {
    fontname="sans-serif"
    label="kosmos-engine";
    fontsize=12

    node[shape=record];
    Engine[label="<f0> Engine|<f1> GSLB Plugin Interface |<f2> Status Check Consumer"];
    PluginDriver[label="GSLB Plugin Driver", shape="component"]
  }

  ApplianceAPI [style="invisible"]
  Worker [style="invisible"]

  Engine:f0 -> Conductor [label="kosmos-conductor"];

  Engine:f1 -> PluginDriver [dir="both"];
  PluginDriver -> ApplianceAPI [dir="both", label="GSLB Appliance API"];

  Worker -> Engine:f2 [label="kosmos-status-checker"];
}
67 specs/mitaka/sysarch/sysarch-diagram-overview.dot Normal file
@@ -0,0 +1,67 @@
digraph "Kosmos" {
  node [fontname="sans-serif", fontsize=10, shape=record];
  edge [fontname="sans-serif", fontsize=10];
  label="Kosmos System Overview";
  overlap="ortho";
  fontname="sans-serif"
  newrank=true

  API_user [
    label="API User"
    style="dashed"
  ]

  cluster_keystone [
    label="Keystone";
    style="dashed"
  ]

  cluster_api_service [
    label="kosmos-api";
  ]

  cluster_conductor_service [
    label="kosmos-conductor";
  ]

  cluster_engine_service [
    label="kosmos-engine";
  ]

  cluster_gslb_appliance [
    label="GSLB Appliance";
    style="dotted"
  ]

  cluster_status_checks [
    label="kosmos-status-check";
  ]

  cluster_endpoints [
    label="Endpoints";
    style="dashed"
  ]

  end_user [
    label="End User"
    style="dashed"
  ]

  API_user -> cluster_api_service
  cluster_api_service -> cluster_keystone
  cluster_api_service -> cluster_conductor_service

  { rank=same; "API_user" "cluster_api_service" "cluster_keystone" }

  cluster_conductor_service -> cluster_engine_service
  cluster_status_checks -> cluster_engine_service
  cluster_status_checks -> cluster_endpoints
  cluster_engine_service -> cluster_gslb_appliance

  { rank=same; "cluster_engine_service" "cluster_gslb_appliance" "end_user" }
  { rank=same; "cluster_endpoints" }

  end_user -> cluster_gslb_appliance
  end_user -> cluster_endpoints
}
35 specs/mitaka/sysarch/sysarch-diagram-status-checker.dot Normal file
@@ -0,0 +1,35 @@
digraph "Kosmos" {
  rankdir=TB
  node [fontname="sans-serif", fontsize=10];
  edge [fontname="sans-serif", fontsize=10];
  label="kosmos-status-checker";
  overlap="ortho";
  fontname="sans-serif"
  newrank=true

  subgraph cluster_status_checks {
    fontname="sans-serif"
    fontsize=12
    label="kosmos-status-checker";

    Worker[label="Status Checking Worker"];

    node[shape=record];
    Checks[label="<f0> Built In Checks Interface |<f1> GSLB Plugin Checks Interface"];

    BuiltInChecks[label="Built In Status Checks"]
    PluginChecks[label="Plugin Status Checks", shape="component"]
  }

  Endpoints [style="invisible"]
  Engine [style="invisible"]

  Worker -> Engine [label="kosmos-engine"];
  Worker -> Checks [dir="both"];
  Checks:f0 -> BuiltInChecks [dir="both"];
  Checks:f1 -> PluginChecks [dir="both"];
  BuiltInChecks -> Endpoints [dir="both", label="Endpoints"];
  PluginChecks -> Endpoints [dir="both", label="Endpoints"];
}
114 specs/mitaka/sysarch/sysarch-diagram.dot Normal file
@@ -0,0 +1,114 @@
digraph "Kosmos" {
  rankdir=TB
  node [fontname="sans-serif", fontsize=10];
  edge [fontname="sans-serif", fontsize=10];
  label="Kosmos System Overview";
  overlap="ortho";
  fontname="sans-serif"
  newrank=true

  subgraph cluster_keystone {
    fontname="sans-serif"
    label="Keystone";
    fontsize=12
    style="dashed"

    Keystone[label="Keystone API", style="dotted"];
  }

  subgraph cluster_api_service {
    fontname="sans-serif"
    label="API Service";
    fontsize=12

    WSGI[label="WSGI API"];
  }

  subgraph cluster_conductor_service {
    fontname="sans-serif"
    label="Conductor Service";
    fontsize=12

    Conductor[label="Conductor"];
    Database[label="Database", shape="folder"];
    Logger[label="Logger", shape="folder"];
  }

  subgraph cluster_engine_service {
    fontname="sans-serif"
    label="Engine Service";
    fontsize=12

    node[shape=record];
    Engine[label="<f0> Engine|<f1> GSLB Plugin Interface |<f2> Status Check Consumer"];
    PluginDriver[label="GSLB Plugin Driver", shape="component"]
  }

  subgraph cluster_gslb_appliance {
    fontname="sans-serif"
    fontsize=12
    label="GSLB Appliance";
    style="dashed"

    Appliance[label="GSLB Traffic Director", style="dotted"];
    ApplianceAPI[label="GSLB Appliance API", style="dotted"];
  }

  subgraph cluster_status_checks {
    fontname="sans-serif"
    fontsize=12
    label="Status Check Service";

    Worker[label="Status Checking Worker"];

    node[shape=record];
    Checks[label="<f0> Built In Checks Interface |<f1> GSLB Plugin Checks Interface"];

    BuiltInChecks[label="Built In Status Checks"]
    PluginChecks[label="Plugin Status Checks", shape="component"]
  }

  subgraph cluster_endpoints {
    fontname="sans-serif"
    fontsize=12
    label="Endpoints";
    style="dashed"

    Endpoint1[label="Endpoint", style="dotted"];
    Endpoint2[label="Endpoint", style="dotted"];
    Endpoint3[label="Endpoint", style="dotted"];
    LBaaSAPI[label="LBaaS Status API", style="dotted"];
  }

  AdminUser[label="GSLB User", style="dashed"];
  AdminUser -> WSGI [dir="both"];

  Keystone -> WSGI [dir="both"];

  WSGI -> Conductor [dir="both"];

  Conductor -> Database [dir="both"];

  Engine:f0 -> Conductor [dir="both"];

  Engine:f1 -> PluginDriver [dir="both"];
  PluginDriver -> ApplianceAPI [dir="both"];

  Appliance -> ApplianceAPI [dir="both"];

  Worker -> Engine:f2;
  Worker -> Checks [dir="both"];
  Checks:f0 -> BuiltInChecks [dir="both"];
  Checks:f1 -> PluginChecks [dir="both"];
  BuiltInChecks -> {Endpoint1; Endpoint2; Endpoint3; LBaaSAPI} [dir="both"];
  LBaaSAPI -> {Endpoint1; Endpoint2; Endpoint3}
  PluginChecks -> {Endpoint1; Endpoint2; Endpoint3} [dir="both"];

  EndUser[label="End User", style="dashed"];

  EndUser -> Appliance
  EndUser -> {Endpoint1; Endpoint2; Endpoint3}
}