WARNING

  • It is imperative that you review, plan, and define the details below before installing Workbench; failure to do so could result in a Workbench re-installation
  • Review and complete each sub-section below before moving on to the next
  • Workbench can be deployed as a single node/host or as a multi-node/host cluster
  • The Workbench multi-node cluster deployment is available to support high availability and/or environments with a high volume of events/metrics
  • Multiple Data-Centers are supported; Workbench can be deployed as a single node/host or as a cluster per Data-Center
  • Workbench deployments across Data-Centers can then be connected and synced in real time to provide holistic visibility of the Alarms, Changes, Channel Monitoring and Auditing features
  • To determine the number of Workbench nodes/hosts, and the resource requirements for each, follow the steps below

WARNING: The Workbench 9.x Sizing steps below should be followed for each Data-Center where Workbench will be deployed.

Calculate Workbench node/host disk space

Based on the number of Hosts (e.g. Engage SIP, URS, FWK) that Workbench will ingest Metric data from, use the table below to determine the disk space required for each Workbench Host at a given Data-Center:

Number of Hosts to ingest Metric data from | Total Disk Space (assuming a 30 day Workbench data Retention Period and a 60 second Metric collection frequency)
1-50 | 250 GB
51-100 | 500 GB
101-150 | 750 GB
151+ | 1 TB, plus 250 GB for every 50 Hosts above 200

Note the Total Disk Space = ___________ (used for next steps)
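
If you script your capacity planning, the baseline lookup above can be captured in a short Python helper. This is a minimal sketch, assuming the "151+" row means a 1 TB base plus 250 GB per 50 Hosts above 200; the function name is ours:

  import math

  def baseline_disk_gb(host_count: int) -> int:
      """Baseline Total Disk Space (GB) per the sizing table above.

      Assumes the defaults: a 30 day Retention Period and a
      60 second Metric collection frequency.
      """
      if host_count < 1:
          raise ValueError("host_count must be at least 1")
      if host_count <= 50:
          return 250
      if host_count <= 100:
          return 500
      if host_count <= 150:
          return 750
      # 151+ Hosts: 1 TB base, plus 250 GB for every 50 Hosts above 200
      extra_blocks = math.ceil(max(0, host_count - 200) / 50)
      return 1000 + 250 * extra_blocks

  print(baseline_disk_gb(10), baseline_disk_gb(90))  # 250 500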

WARNING

  • Currently, Workbench 9.x is limited to a maximum of 100 Hosts (the combined global total of Workbench and Engage Hosts); the table above details sizing beyond the 100 Host limit for future Workbench sizing context

Only if/when the default Retention Period and Metric Frequency settings are changed

The table above assumes the Workbench default data Retention Period of 30 days and a Workbench Agent/Remote Metric collection frequency of every 60 seconds.

If these default Retention Period and Metric Frequency values require modification, re-calculate the Total Disk Space using the scale factors below (a scripted version of the calculation follows the list):

  • Retention Scale Factor = [New Retention Period Days] / 30
  • Metric Frequency Scale Factor = 60 / [New Collection Frequency Seconds]
  • Re-calculated Total Disk Space = Total Disk Space (from the Step 1 table above) * Retention Scale Factor * Metric Frequency Scale Factor
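
The re-calculation above can be expressed as a small Python function. This is an illustrative sketch only; the function name and defaults are our own:

  def recalculated_disk_gb(baseline_gb: float,
                           retention_days: float = 30,
                           frequency_seconds: float = 60) -> float:
      """Apply the Retention and Metric Frequency scale factors above."""
      retention_scale = retention_days / 30
      frequency_scale = 60 / frequency_seconds
      return baseline_gb * retention_scale * frequency_scale

  # Example 2 below: 250 GB baseline, 7 day retention, 10 second frequency
  print(round(recalculated_disk_gb(250, retention_days=7, frequency_seconds=10)))
  # => 350 (the worked Example 2 below rounds 7/30 to 0.23 and arrives at 345 GB)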

NOTE:

  • The global Workbench Retention Period is editable via Workbench Configuration\General\Retention Period\Workbench Data Retention Period (Days)
  • The Metric Frequency collection setting can be changed on each Workbench Agent and Workbench Agent Remote application via:
    • Workbench Configuration\Applications\Application Name (e.g. WB_Agent_Primary)\MetricBeat Host Metrics\Host Metric Collection Frequency (seconds)
    • Workbench Configuration\Applications\Application Name (e.g. WB_Agent_Primary)\MetricBeat Associated Application Metrics\Application/Process Metric Collection Frequency (seconds)

Determine the node/host count

Using the Total Disk Space calculation from the previous step, next determine the required number of Workbench Nodes/Hosts:

Total Disk Space from Step 1 or 2 above | Number of Workbench Nodes/Hosts required
Less than 2.5 TB | A single (1) Workbench Node/Host can be used
Greater than 2.5 TB, or if Workbench High Availability is required | A 3 Node/Host Workbench Cluster is required
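
The decision rule above is simple enough to automate; a minimal sketch, treating 2.5 TB as 2,500 GB (the function name is ours):

  def workbench_node_count(total_disk_gb: float, ha_required: bool = False) -> int:
      """Per the table above: a single node below 2.5 TB, otherwise
      (or whenever High Availability is required) a 3 node Cluster."""
      return 3 if ha_required or total_disk_gb > 2500 else 1

  print(workbench_node_count(345))                     # 1
  print(workbench_node_count(3000, ha_required=True))  # 3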

NOTE:

  • Workbench High Availability (HA) provides resiliency of event data (via Workbench Elasticsearch) and configuration data (via Workbench ZooKeeper)

Node/host resources

This section details the recommended resources per Workbench Node/Host, based on the previous steps:

Workbench Primary Node/Host (be it a single Node or part of a 3 Node Cluster):
  • CPU: 10 Cores/Threads
  • Memory: 24 GB
  • NIC: 100 Mbps
  • Disk:
    • if a single Workbench Node/Host: the Total Disk Space from Step 1 or 2 above
    • if part of a Workbench 3 Node Cluster: the Total Disk Space from Step 1 or 2 above, divided by 3
      • The Total Disk Space is divided by 3 due to the Workbench Cluster deployment architecture

Non-Primary Workbench Nodes/Hosts (that are part of a Workbench Cluster):
  • CPU: 10 Cores/Threads
  • Memory: 16 GB
  • NIC: 100 Mbps
  • Disk: the Total Disk Space from Step 1 or 2 above, divided by 3
    • The Total Disk Space is divided by 3 due to the Workbench Cluster deployment architecture
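
The per-node disk figure above reduces to a simple division; a sketch for completeness (names are ours):

  def disk_per_node_gb(total_disk_gb: float, node_count: int) -> float:
      """Split the Total Disk Space evenly across the Cluster nodes;
      a single node carries the full amount."""
      return total_disk_gb / node_count

  print(disk_per_node_gb(3000, 3))  # 1000.0 (matches Example 3 below)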

NOTE:

  • The following Memory allocation is needed for each Workbench Elasticsearch Node/Host in the deployment.
  • Please review ES Heap Settings for details on configuring the RAM for each Workbench Elasticsearch instance.
Total Disk Space per Node/Host | Dedicated Workbench Elasticsearch Memory required
< 100 GB | 2 GB RAM
100 - 750 GB | 4 GB RAM
750 GB - 1.5 TB | 6 GB RAM
1.5 - 2.5 TB | 8 GB RAM

NOTE:

  • If/when Total Disk Space is greater than 2.5 TB per Node/Host, please raise a Genesys Customer Care Case for consultation/guidance.
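
The memory bands above can likewise be scripted. A sketch under our own assumptions; in particular, boundary values fall into the lower band here, which the table leaves unspecified:

  def es_memory_gb(disk_per_node_gb: float) -> int:
      """Dedicated Workbench Elasticsearch RAM per the table above."""
      if disk_per_node_gb > 2500:
          raise ValueError("over 2.5 TB per node: raise a Genesys Customer Care Case")
      if disk_per_node_gb < 100:
          return 2
      if disk_per_node_gb <= 750:
          return 4
      if disk_per_node_gb <= 1500:
          return 6
      return 8

  print(es_memory_gb(250), es_memory_gb(1000))  # 4 6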

Required number of additional node(s)/host(s) at each Data-Center

Workbench currently supports ingesting Metric data from a maximum of 100 Hosts.

Required number of additional Workbench Nodes/Hosts | Number of Hosts sending Metric data to Workbench | Frequency of Metrics sent from each Host to Workbench (seconds)
0 (Workbench on the Primary host) | 100 | 60 (default)
1 (Workbench on the Primary host and Logstash on the additional node) | 100 | 30
1 (Workbench on the Primary host and Logstash on the additional node) | 100 | 10
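
Read as a rule, the table above suggests the following check; this is our interpretation, not an official formula:

  def additional_workbench_nodes(host_count: int, frequency_seconds: int) -> int:
      """At the 100 Host ceiling, a Metric frequency faster than the
      60 second default calls for one additional node running Logstash
      alongside the Workbench Primary host (our reading of the table)."""
      return 1 if host_count >= 100 and frequency_seconds < 60 else 0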

Example 1: Ingest from 10 hosts - 30 day retention period - 60 second metric frequency

A production Workbench deployment ingesting Metric data from 10 Engage Hosts:

  • Number of Hosts to ingest Metric data from = 10
  • Retention Period = 30 days (default)
  • Metric Frequency Collection = 60 seconds (default)
  • Total Disk Space = 250 GB
  • 1 x Workbench Node/Host
    • CPU: 10 Cores
    • RAM: 24 GB
    • NIC: 100 Mbps
    • DISK: 250 GB
    • DEDICATED Elasticsearch RAM: 4 GB

Example 2: Ingest from 30 hosts - 7 day retention period - 10 second metric frequency

A production Workbench deployment ingesting Metric data from 30 Engage Hosts:

  • Number of Hosts to ingest Metric data from = 30
  • Retention Period = 7 days
    • therefore the re-calculated Retention Scale Factor is 7 (days) / 30 => 0.23 (rounded)
  • Metric Frequency Collection = 10 seconds
    • therefore re-calculated Metric Frequency Scale Factor is 60 / 10 => 6
  • Re-calculated Total Disk Space is 250 GB * 0.23 * 6 => 345 GB (≈ 350 GB if 7/30 is left unrounded)
  • 1 x Workbench Node/Host
    • CPU: 10 Cores
    • RAM: 24 GB
    • NIC: 100 Mbps
    • DISK: 345 GB
    • DEDICATED Elasticsearch RAM: 4 GB

Example 3: Ingest from 90 hosts - 90 day retention period - 30 second metric frequency

A production Workbench HA deployment ingesting Metric data from 90 Engage Hosts:

  • Number of Hosts to ingest Metric data from = 90
  • Retention Period = 90 days
    • therefore re-calculated Retention Scale Factor is 90 (days) / 30 => 3
  • Metric Frequency Collection = 30 seconds
    • therefore re-calculated Metric Frequency Scale Factor is 60 / 30 => 2
  • Re-calculated Total Disk Space is 500 GB * 3 * 2 => 3000 GB (3 TB)
  • 3 x Workbench Nodes/Hosts required given Total Disk Space is greater than 2.5 TB
  • Workbench Primary
    • CPU: 10 Cores
    • RAM: 24 GB
    • NIC: 100 Mbps
    • DISK: 1000 GB (1 TB on each Node/Host given the Cluster architecture)
    • DEDICATED Elasticsearch RAM: 6 GB (per the 750 GB - 1.5 TB band in the memory table above)
  • Workbench Nodes 2 and 3
    • CPU: 10 Cores
    • RAM: 16 GB
    • NIC: 100 Mbps
    • DISK: 1000 GB (1 TB on each Node/Host given the Cluster architecture)
    • DEDICATED Elasticsearch RAM: 6 GB (per the 750 GB - 1.5 TB band in the memory table above)
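
As a quick sanity check, Example 3's arithmetic can be reproduced with a few lines of plain Python:

  baseline_gb = 500    # 90 Hosts falls in the 51-100 band of the Step 1 table
  total_gb = baseline_gb * (90 / 30) * (60 / 30)
  print(total_gb)      # 3000.0 GB (3 TB), which exceeds 2.5 TB, so a 3 node Cluster
  print(total_gb / 3)  # 1000.0 GB of disk on each of the 3 nodes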