Administer the Workload Balancing virtual appliance

This article provides information about the following subjects:

  • Using Workload Balancing to start VMs on the best possible host

  • Accepting the recommendations Workload Balancing issues to move VMs to different hosts

Note:

Workload Balancing is available for XenServer Enterprise edition customers or those customers who have access to XenServer through their Citrix Virtual Apps and Desktops entitlement. For more information about XenServer licensing, see Licensing. To upgrade, or to buy a XenServer license, visit the Citrix website.

Introduction to basic tasks

Workload Balancing is a powerful XenServer component that includes many features designed to optimize the workloads in your environment. These features include:

  • Host power management
  • Scheduling optimization-mode changes
  • Running reports

In addition, you can fine-tune the criteria Workload Balancing uses to make optimization recommendations.

However, when you first begin using Workload Balancing, there are two main tasks you use Workload Balancing for on a daily (or regular) basis:

  • Determining the best host on which to run a VM

  • Accepting Workload Balancing optimization recommendations

Running reports about the workloads in your environment, which is described in Generate workload reports, is another frequently used task.

Determine the best host on which to run a VM

VM placement enables you to determine the host on which to start and run a VM. This feature is useful when you want to restart a powered off VM and when you want to migrate a VM to a different host. Placement recommendations may also be useful in Citrix Virtual Desktops environments.

Accept Workload Balancing recommendations

After Workload Balancing is running for a while, it begins to make recommendations about ways in which you can improve your environment. For example, if your goal is to improve VM density on hosts, at some point, Workload Balancing might recommend that you consolidate VMs on a host. If you aren’t running in automated mode, you can choose either to accept this recommendation and apply it or to ignore it.

Both of these tasks, and how you perform them in XenCenter, are explained in more depth in the sections that follow.

Important:

After Workload Balancing runs for a time, if you don’t receive optimal placement recommendations, evaluate your performance thresholds. This evaluation is described in Understand when Workload Balancing makes recommendations. It is critical to set Workload Balancing to the correct thresholds for your environment or its recommendations might not be appropriate.

Choose the best host for a VM

When you have enabled Workload Balancing and you restart an offline VM, XenCenter recommends the optimal pool members to start the VM on. The recommendations are also known as star ratings since stars are used to indicate the best host.

 This illustration shows a screen capture of the Start On Server feature. More stars appear beside host17 since this host is the optimal host on which to start the VM. host16 does not have any stars beside it, which indicates that host is not recommended. However, since host16 is enabled the user can select it. host18 is grayed out due to insufficient memory, so the user cannot select it.

The term optimal indicates the physical server best suited to hosting your workload. There are several factors Workload Balancing uses when determining which host is optimal for a workload:

  • The amount of resources available on each host in the pool. When a pool runs in Maximum Performance mode, Workload Balancing tries to balance the VMs across the hosts so that all VMs have good performance. When a pool runs in Maximum Density mode, Workload Balancing places VMs onto hosts as densely as possible while ensuring the VMs have sufficient resources.

  • The optimization mode in which the pool is running (Maximum Performance or Maximum Density). When a pool runs in Maximum Performance mode, Workload Balancing places VMs on hosts with the most resources available of the type the VM requires. When a pool runs in Maximum Density mode, Workload Balancing places VMs on hosts that already have VMs running. This approach ensures that VMs run on as few hosts as possible.

  • The amount and type of resources the VM requires. After WLB monitors a VM for a while, it uses the VM metrics to make placement recommendations according to the type of resources the VM requires. For example, Workload Balancing might select a host with less available CPU but more available memory if it is what the VM requires.

When Workload Balancing is enabled, XenCenter provides ratings to indicate the optimal hosts for starting a VM. These ratings are also provided in the following cases:

  • When you start a VM that is powered off
  • When you resume a VM that is suspended
  • When you migrate a VM to a different host (Migrate and Maintenance Mode)

When you use these features with Workload Balancing enabled, host recommendations appear as star ratings beside the name of the physical host. Five empty stars indicate the lowest-rated (least optimal) server. If you can’t start or migrate a VM to a host, the host name is grayed out in the menu command for a placement feature. The reason it cannot accept the VM appears beside it.

In general, Workload Balancing functions more effectively and makes better, less frequent optimization recommendations if you start VMs on the hosts it recommends. To follow the host recommendations, use one of the placement features to select the host with the most stars beside it.

To start a virtual machine on the optimal server

  1. In the Resources pane of XenCenter, select the VM you want to start.

  2. From the VM menu, select Start on Server and then select one of the following:

    • Optimal Server. The optimal server is the physical host that is best suited to the resource demands of the VM you are starting. Workload Balancing determines the optimal server based on its historical records of performance metrics and your placement strategy. The optimal server is the server with the most stars.

    • One of the servers with star ratings listed under the Optimal Server command. Five stars indicate the most-recommended (optimal) server and five empty stars indicate the least-recommended server.

Tip:

You can also select Start on Server by right-clicking the VM you want to start in the Resources pane.
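
If you script VM placement instead of using XenCenter, the same ratings are available through the XenServer Management API. The following sketch is illustrative only: it assumes the XenAPI Python bindings are installed, and the URL, credentials, and VM name are placeholders. The exact contents of each recommendation entry returned by VM.retrieve_wlb_recommendations depend on your Workload Balancing version.

    import XenAPI

    # Placeholders: replace the URL, credentials, and VM name with your own values.
    session = XenAPI.Session("https://pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("my-vm")[0]

        # Ask Workload Balancing to rate each host as a start location for this VM.
        # The call returns a map of host reference -> recommendation details; the
        # exact contents of each entry depend on the Workload Balancing version.
        recommendations = session.xenapi.VM.retrieve_wlb_recommendations(vm)
        for host_ref, details in recommendations.items():
            print(session.xenapi.host.get_name_label(host_ref), details)

        # Start the VM on the host you chose (start_paused=False, force=False).
        chosen_host = next(iter(recommendations))  # Example only: picks an arbitrary entry.
        session.xenapi.VM.start_on(vm, chosen_host, False, False)
    finally:
        session.xenapi.session.logout()

For a suspended VM, VM.resume_on takes the same arguments as VM.start_on, so the resume procedure that follows can be scripted in the same way.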

To resume a virtual machine on the optimal server

  1. In the Resources pane of XenCenter, select the suspended VM you want to resume.

  2. From the VM menu, select Resume on Server and then select one of the following:

    • Optimal Server. The optimal server is the physical host that is best suited to the resource demands of the VM you are resuming. Workload Balancing determines the optimal server based on its historical records of performance metrics and your placement strategy. The optimal server is the server with the most stars.

    • One of the servers with star ratings listed under the Optimal Server command. Five stars indicate the most-recommended (optimal) server and five empty stars indicate the least-recommended server.

Tip:

You can also select Resume on Server by right-clicking the suspended VM in the Resources pane.

Accept optimization recommendations

Workload Balancing provides recommendations about ways you can migrate VMs to optimize your environment. Optimization recommendations appear on the WLB tab in XenCenter.

 This illustration shows a screen capture of the Optimization Recommendations list, which appears on the WLB tab. The Operation column displays the behavior change suggested for that optimization recommendation. The Reason column displays the purpose of the recommendation. This screen capture shows an optimization recommendation for a VM, HA-prot-VM-7, and a host, host17.domain4.bedford4.ctx4.

Optimization recommendations are based on the following factors:

  • Placement strategy you select (that is, the optimization mode).

  • Performance metrics for resources such as a physical host’s CPU, memory, network, and disk utilization.

  • The role of the host in the resource pool. When making placement recommendations, Workload Balancing considers the pool master for VM placement only if no other host can accept the workload. Likewise, when a pool operates in Maximum Density mode, Workload Balancing considers the pool master last when determining the order to fill hosts with VMs.

Optimization recommendations display the following information:

  • The name of the VM that Workload Balancing recommends relocating
  • The host that the VM currently resides on
  • The host Workload Balancing recommends as the new location.

The optimization recommendations also display the reason Workload Balancing recommends moving the VM. For example, the recommendation displays “CPU” to improve CPU utilization. When Workload Balancing power management is enabled, Workload Balancing also displays optimization recommendations for hosts it recommends powering on or off. Specifically, these recommendations are for consolidations.

After you click Apply Recommendations, XenServer performs all operations listed in the Optimization Recommendations list.

Tip:

Find out the optimization mode for a pool by using XenCenter to select the pool. Look in the Configuration section of the WLB tab for the information.

To accept an optimization recommendation

  1. In the Resources pane of XenCenter, select the resource pool for which you want to display recommendations.

  2. Click the WLB tab. If there are any recommended optimizations for any VMs on the selected resource pool, they display in the Optimization Recommendations section of the WLB tab.

  3. To accept the recommendations, click Apply Recommendations. XenServer begins performing all the operations listed in the Operations column of the Optimization Recommendations section.

    After you click Apply Recommendations, XenCenter automatically displays the Logs tab so you can see the progress of the VM migration.
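
You can also retrieve the pool-wide recommendations programmatically. The following is a minimal sketch, assuming the XenAPI Python bindings and placeholder credentials; the format of each recommendation entry varies between Workload Balancing versions, so inspect the output before acting on it.

    import XenAPI

    # Placeholders: replace the URL and credentials with your own values.
    session = XenAPI.Session("https://pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        # Retrieves the recommendations for the connected pool as a map of
        # VM reference -> recommendation details, equivalent to the rows in the
        # Optimization Recommendations list. The entry format varies by WLB version.
        recommendations = session.xenapi.pool.retrieve_wlb_recommendations()
        for vm_ref, details in recommendations.items():
            print(session.xenapi.VM.get_name_label(vm_ref), details)
            # To apply a recommendation yourself, live-migrate the VM to the
            # suggested host, for example:
            #   session.xenapi.VM.pool_migrate(vm_ref, host_ref, {"live": "true"})
    finally:
        session.xenapi.session.logout()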

Understand WLB recommendations under high availability

If you have Workload Balancing and XenServer High Availability enabled in the same pool, it is helpful to understand how the two features interact. Workload Balancing is designed not to interfere with High Availability. When there is a conflict between a Workload Balancing recommendation and a High Availability setting, the High Availability setting always takes precedence. In practice, this precedence means that:

  • If attempting to start a VM on a host violates the High Availability plan, Workload Balancing doesn’t give you star ratings.

  • Workload Balancing does not automatically power off any hosts beyond the number specified in the Failures allowed box in the Configure HA dialog.

    • However, Workload Balancing might still make recommendations to power off more hosts than the number of host failures to tolerate. (For example, Workload Balancing still recommends that you power off two hosts when High Availability is only configured to tolerate one host failure.) However, when you attempt to apply the recommendation, XenCenter might display an error message stating that High Availability is no longer guaranteed.

    • When Workload Balancing runs in automated mode and has power management enabled, recommendations that exceed the number of tolerated host failures are ignored. In this situation, the Workload Balancing log shows a message that power-management recommendation wasn’t applied because High Availability is enabled.

Generate workload reports

This section provides information about using the Workload Balancing component to generate reports about your environment, including reports about hosts and VMs. Specifically, this section provides information about the following:

  • How to generate reports

  • What workload reports are available

Note:

Workload Balancing is available for XenServer Enterprise edition customers or those customers who have access to XenServer through their Citrix Virtual Apps and Desktops entitlement. For more information about XenServer licensing, see Licensing. To upgrade, or to buy a XenServer license, visit the Citrix website.

Overview of workload reports

The Workload Balancing reports can help you perform capacity planning, determine virtual server health, and evaluate how effective your configured threshold levels are.

Workload Balancing lets you generate reports on three types of objects: physical hosts, resource pools, and VMs. At a high level, Workload Balancing provides two types of reports:

  • Historical reports that display information by date

  • “Roll up” style reports, which provide a summarizing overview of an area

Workload Balancing provides some reports for auditing purposes, so you can determine, for example, the number of times a VM moved.

You can use the Pool Health report to evaluate how effective your optimization thresholds are. While Workload Balancing provides default threshold settings, you might need to adjust these defaults for them to provide value in your environment. If you do not have the optimization thresholds adjusted to the correct level for your environment, Workload Balancing recommendations might not be appropriate for your environment.

To generate a Workload Balancing report, the pool must be running Workload Balancing. Ideally, the pool has been running Workload Balancing for a couple of hours or long enough to generate the data to display in the reports.

Generate a Workload Balancing report

  1. In XenCenter, from the Pool menu, select View Workload Reports.

    Tip:

    You can also display the Workload Reports screen from the WLB tab by clicking the Reports button.

  2. From the Workload Reports screen, select a report from the Reports pane.

  3. Select the Start Date and the End Date for the reporting period. Depending on the report you select, you might need to specify a host in the Host list box.

  4. Click Run Report. The report displays in the report window. For information about the meaning of the reports, see Workload Balancing report glossary.

After generating a report, you can use the toolbar buttons in the report to navigate and perform certain tasks. To display the name of a toolbar button, pause your pointer over the toolbar icon.

  • Document Map. Enables you to display a document map that helps you navigate through long reports.
  • Page Forward/Back. Enables you to move one page ahead or back in the report.
  • Back to Parent Report. Enables you to return to the parent report when working with drill-through reports. Note: This button is only available in drill-through reports, such as the Pool Health report.
  • Stop Rendering. Cancels the report generation.
  • Print. Enables you to print a report and specify general printing options. These options include: the printer, the number of pages, and the number of copies.
  • Print Layout. Enables you to display a preview of the report before you print it. To exit Print Layout, click the Print Layout button again.
  • Page Setup. Enables you to specify printing options such as the paper size, page orientation, and margins.
  • Export. Enables you to export the report as an Acrobat (.PDF) file or as an Excel file with a .XLS extension.
  • Find. Enables you to search for a word in a report, such as the name of a VM.

Print a Workload Balancing report

Before you can print a report, you must first generate it.

  1. (Optional) Preview the printed document by clicking the Print Layout button.

  2. (Optional) Change the paper size and source, page orientation, or margins by clicking the Page Setup button.

  3. Click the Print button.

Export a Workload Balancing report

You can export a report in either Microsoft Excel or Adobe Acrobat (PDF) formats.

  1. After generating the report, click the Export button.

  2. Select one of the following items from the Export button menu:

    • Excel

    • Acrobat (PDF) file

Note:

The amount of data included in the report differs depending on the export format you select. Reports exported to Excel include all the data available for the report, including “drilldown” data. Reports exported to PDF, like reports displayed in XenCenter, contain only the data that you selected when you generated the report.

Workload Balancing report glossary

This section provides information about the following Workload Balancing reports.

Chargeback Utilization Analysis

You can use the Chargeback Utilization Analysis report (“chargeback report”) to determine how much of a resource a specific department in your organization used. Specifically, the report shows information about all the VMs in your pool, including their availability and resource utilization. Since this report shows VM up time, it can help you demonstrate Service Level Agreements compliance and availability.

The chargeback report can help you implement a simple chargeback solution and facilitate billing. To bill customers for a specific resource, generate the report, save it in Excel format, and edit the spreadsheet to include your price per unit. Alternatively, you can import the Excel data into your billing system.

If you want to bill internal or external customers for VM usage, consider incorporating department or customer names in your VM naming conventions. This practice makes reading chargeback reports easier.

In some cases, the resource reporting in the chargeback report is based on the allocation of physical resources to individual VMs.

The average memory data in this report is based on the amount of memory currently allocated to the VM. XenServer enables you to have a fixed memory allocation or an automatically adjusting memory allocation (Dynamic Memory Control).

The chargeback report contains the following columns of data:

  • VM Name. The name of the VM to which the data in the columns in that row applies.

  • VM Uptime. The number of minutes the VM was powered on (or, more specifically, appears with a green icon beside it in XenCenter).

  • vCPU Allocation. The number of virtual CPUs configured on the VM. Each virtual CPU receives an equal share of the physical CPUs on the host. For example, consider the case where you configured eight virtual CPUs on a host that contains two physical CPUs. If the vCPU Allocation column has “1” in it, this value is equal to 2/16 of the total processing power on the host.

  • Minimum CPU Usage (%). The lowest recorded value for virtual CPU utilization in the reporting period. This value is expressed as a percentage of the VM’s vCPU capacity. The capacity is based on the number of vCPUs allocated to the VM. For example, if you allocated one vCPU to a VM, Minimum CPU Usage represents the lowest percentage of vCPU usage that is recorded. If you allocated two vCPUs to the VM, the value is the lowest usage of the combined capacity of both vCPUs as a percentage.

    Ultimately, the percentage of CPU usage represents the lowest recorded workload that the virtual CPU handled. For example, if you allocate one vCPU to a VM and the pCPU on the host is 2.4 GHz, 0.3 GHz is allocated to the VM. If the Minimum CPU Usage for the VM was 20%, the VM’s lowest usage of the physical host’s CPU during the reporting period was 60 MHz. (The sketch after this list shows this arithmetic.)

  • Maximum CPU Usage (%). The highest percentage of the VM’s virtual CPU capacity that the VM consumed during the reporting period. The CPU capacity consumed is a percentage of the virtual CPU capacity you allocated to the VM. For example, if you allocated one vCPU to the VM, the Maximum CPU Usage represents the highest recorded percentage of vCPU usage during the time reported. If you allocated two virtual CPUs to the VM, the value in this column represents the highest utilization from the combined capacity of both virtual CPUs.

  • Average CPU Usage (%). The average amount, expressed as a percentage, of the VM’s virtual CPU capacity that was in use during the reporting period. The CPU capacity is the virtual CPU capacity you allocated to the VM. If you allocated two virtual CPUs to the VM, the value in this column represents the average utilization from the combined capacity of both virtual CPUs.

  • Total Storage Allocation (GB). The amount of disk space that is currently allocated to the VM at the time the report was run. Frequently, unless you modified it, this disk space is the amount of disk space you allocated to the VM when you created it.

  • Virtual NIC Allocation. The number of virtual interfaces (VIFs) allocated to the VM.

  • Current Minimum Dynamic Memory (MB).

    • Fixed memory allocation. If you assigned a VM a fixed amount of memory (for example, 1,024 MB), the same amount of memory appears in the following columns: Current Minimum Dynamic Memory (MB), Current Maximum Dynamic Memory (MB), Current Assigned Memory (MB), and Average Assigned Memory (MB).

    • Dynamic memory allocation. If you configured XenServer to use Dynamic Memory Control, the minimum amount of memory specified in the range appears in this column. If the range has 1,024 MB as minimum memory and 2,048 MB as maximum memory, the Current Minimum Dynamic Memory (MB) column displays 1,024 MB.

  • Current Maximum Dynamic Memory (MB).

    • Dynamic memory allocation. If XenServer adjusts a VM’s memory automatically based on a range, the maximum amount of memory specified in the range appears in this column. For example, if the memory range values are 1,024 MB minimum and 2,048 MB maximum, 2,048 MB appears in the Current Maximum Dynamic Memory (MB) column.

    • Fixed memory allocation. If you assign a VM a fixed amount of memory (for example, 1,024 MB), the same amount of memory appears in the following columns: Current Minimum Dynamic Memory (MB), Current Maximum Dynamic Memory (MB), Current Assigned Memory (MB), and Average Assigned Memory (MB).

  • Current Assigned Memory (MB).

    • Dynamic memory allocation. When Dynamic Memory Control is configured, this value indicates the amount of memory XenServer allocates to the VM when the report runs.

    • Fixed memory allocation. If you assign a VM a fixed amount of memory (for example, 1,024 MB), the same amount of memory appears in the following columns: Current Minimum Dynamic Memory (MB), Current Maximum Dynamic Memory (MB), Current Assigned Memory (MB), and Average Assigned Memory (MB).

    Note:

    If you change the VM’s memory allocation immediately before running this report, the value in this column reflects the new memory allocation you configured.

  • Average Assigned Memory (MB).

    • Dynamic memory allocation. When Dynamic Memory Control is configured, this value indicates the average amount of memory XenServer allocated to the VM over the reporting period.

    • Fixed memory allocation. If you assign a VM a fixed amount of memory (for example, 1,024 MB), the same amount of memory appears in the following columns: Current Minimum Dynamic Memory (MB), Current Maximum Dynamic Memory (MB), Current Assigned Memory (MB), and Average Assigned Memory (MB).

    Note:

    If you change the VM’s memory allocation immediately before running this report, the value in this column might not change from what was previously displayed. The value in this column reflects the average over the time period.

  • Average Network Reads (BPS). The average amount of data (in bits per second) the VM received during the reporting period.

  • Average Network Writes (BPS). The average amount of data (in bits per second) the VM sent during the reporting period.

  • Average Network Usage (BPS). The combined total (in bits per second) of the Average Network Reads and Average Network Writes. If a VM sends, on average, 1,027 bps and receives, on average, 23,831 bps during the reporting period, the Average Network Usage is the combined total of these values: 24,858 bps.

  • Total Network Usage (BPS). The total of all network read and write transactions in bits per second over the reporting period.
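
The CPU and network figures in this report can be converted into absolute numbers as the column descriptions above illustrate. The following is a short sketch of that arithmetic, using the same example values as the text:

    # Convert a chargeback-report CPU percentage into an absolute clock rate,
    # following the example in the column descriptions: a 2.4 GHz physical CPU
    # shared so that the VM's single vCPU receives 0.3 GHz of it.
    vcpu_share_ghz = 0.3          # portion of the physical CPU allocated to the vCPU
    min_cpu_usage_pct = 20.0      # Minimum CPU Usage (%) from the report

    lowest_usage_mhz = vcpu_share_ghz * 1000 * (min_cpu_usage_pct / 100.0)
    print(f"Lowest recorded CPU usage: {lowest_usage_mhz:.0f} MHz")   # 60 MHz

    # Average Network Usage is the sum of Average Network Reads and Writes (bps).
    avg_network_writes_bps = 1027    # data the VM sent, on average
    avg_network_reads_bps = 23831    # data the VM received, on average
    print(f"Average Network Usage: {avg_network_writes_bps + avg_network_reads_bps} bps")   # 24858 bps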

Host Health History

This report displays the performance of resources (CPU, memory, network reads, and network writes) on a specific host in relation to threshold values.

The colored lines (red, green, yellow) represent your threshold values. You can use this report with the Pool Health report for a host to determine how the host’s performance might affect overall pool health. When you are editing the performance thresholds, you can use this report for insight into host performance.

You can display resource utilization as a daily or hourly average. The hourly average lets you see the busiest hours of the day, averaged, for the time period.

To view report data which is grouped by hour, under Host Health History expand Click to view report data grouped by hour for the time period.

Workload Balancing displays the average for each hour for the time period you set. The data point is based on a utilization average for that hour across all days in the time period. For example, in a report for May 1, 2009, to May 15, 2009, the Average CPU Usage data point represents the resource utilization of all 15 days at 12:00 hours, combined as an average. If CPU utilization was 82% at 12 PM on May 1, 88% at 12 PM on May 2, and 75% at 12 PM on all other days, the average displayed for 12 PM is 76.3%.

Note:

Workload Balancing smooths spikes and peaks so data does not appear artificially high.
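
The following is a worked version of the hourly averaging described in the example above:

    # Hourly averaging as described above: one data point per hour of the day,
    # averaged across every day in the reporting period (May 1-15 in the example).
    noon_cpu_pct = [82, 88] + [75] * 13   # 82% on May 1, 88% on May 2, 75% on the other 13 days

    average_noon_cpu = sum(noon_cpu_pct) / len(noon_cpu_pct)
    print(f"Average CPU Usage shown for 12:00: {average_noon_cpu:.1f}%")   # 76.3%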

Pool Optimization Performance History

The optimization performance report displays optimization events against that pool’s average resource usage. These events are instances when you optimized a resource pool. Specifically, it displays resource usage for CPU, memory, network reads, and network writes.

The dotted line represents the average usage across the pool over the period of days you select. A blue bar indicates the day on which you optimized the pool.

This report can help you determine if Workload Balancing is working successfully in your environment. You can use this report to see what led up to optimization events (that is, the resource usage before Workload Balancing recommended optimizing).

This report displays average resource usage for the day. It does not display the peak utilization, such as when the system is stressed. You can also use this report to see how a resource pool is performing when Workload Balancing is not making optimization recommendations.

In general, resource usage declines or stays steady after an optimization event. If you do not see improved resource usage after optimization, consider readjusting threshold values. Also, consider whether the resource pool has too many VMs and whether or not you added or removed new VMs during the period that you specified.

Pool Audit Trail

This report displays the contents of the XenServer Audit Log. The Audit Log is a XenServer feature designed to log attempts to perform unauthorized actions and select authorized actions. These actions include:

  • Import and export
  • Host and pool backups
  • Guest and host console access.

The report gives more meaningful information when you give XenServer administrators their own user accounts with distinct roles assigned to them by using the RBAC feature.

Important:

To run the audit log report, the Audit Logging feature must be enabled. Audit Log is enabled by default in the Workload Balancing virtual appliance.

The enhanced Pool Audit Trail feature allows you to specify the granularity of the audit log report. You can also search and filter the audit trail logs by specific users, objects, and by time. The Pool Audit Trail Granularity is set to Minimum by default. This option captures a limited amount of data for specific users and object types. You can modify the setting at any time based on the level of detail you require in your report. For example, set the granularity to Medium for a user-friendly report of the audit log. If you require a detailed report, set the option to Maximum.

Report contents

The Pool Audit Trail report contains the following:

  • Time. The time XenServer recorded the user’s action.

  • User Name. The name of the person who created the session in which the action was performed. Sometimes, this value can be the User ID.

  • Event Object. The object that was the subject of the action (for example, a VM).

  • Event Action. The action that occurred. For definitions of these actions, see Audit Log Event Names.

  • Access. Whether or not the user had permission to perform the action.

  • Object Name. The name of the object (for example, the name of the VM).

  • Object UUID. The UUID of the object (for example, the UUID of the VM).

  • Succeeded. This information provides the status of the action (that is, whether or not it was successful).

Audit Log event names

The Audit Log report logs XenServer events, event objects and actions, including import/export, host and pool backups, and guest and host console access. The following table defines some of the typical events that appear frequently in the XenServer Audit Log and Pool Audit Trail report. The table also specifies the granularity of these events.

In the Pool Audit Trail report, the events listed in the Event Action column apply to a pool, VM, or host. To determine what the events apply to, see the Event Object and Object Name columns in the report. For more event definitions, see the events section of the XenServer Management API.

Pool Audit Trail Granularity Event Action User Action
Minimum pool.join Instructed the host to join a new pool
Minimum pool.join_force Instructed (forced) the host to join a pool
Medium SR.destroy Destroyed the storage repository
Medium SR.create Created a storage repository
Medium VDI.snapshot Took a read-only snapshot of the VDI, returning a reference to the snapshot
Medium VDI.clone Took an exact copy of the VDI, returning a reference to the new disk
Medium VIF.plug Hot-plugged the specified VIF, dynamically attaching it to the running VM
Medium VIF.unplug Hot-unplugged the specified VIF, dynamically detaching it from the running VM
Maximum auth.get_subject_identifier Queried the external directory service to obtain the subject identifier as a string from the human-readable subject name
Maximum task.cancel Requested that a task is canceled
Maximum VBD.insert Inserted new media into the device
Maximum VIF.get_by_uuid Obtained a reference to the VIF instance with the specified UUID
Maximum VIF.get_shareable Obtained the shareable field of the given VDI
Maximum SR.get_all Returned a list of all the SRs known to the system
Maximum pool.create_new_blob Created a placeholder for a named binary piece of data that is associated with this pool
Maximum host.send_debug_keys Injected the given string as debugging keys into Xen
Maximum VM.get_boot_record Returned a record describing the VM’s dynamic state, initialized when the VM boots and updated to reflect runtime configuration changes, for example, CPU hotplug

Pool Health

The Pool Health report displays the percentage of time a resource pool and its hosts spent in four different threshold ranges: Critical, High, Medium, and Low. You can use the Pool Health report to evaluate the effectiveness of your performance thresholds.

A few points about interpreting this report:

  • Resource utilization in the Average Medium Threshold (blue) is the optimum resource utilization regardless of the placement strategy you selected. Likewise, the blue section on the pie chart indicates the amount of time that host used resources optimally.

  • Resource utilization in the Average Low Threshold Percent (green) is not necessarily positive. Whether Low resource utilization is positive depends on your placement strategy. If your placement strategy is Maximum Density and your resource usage is green, WLB might not be fitting the maximum number of VMs possible on that host or pool. If so, adjust your performance threshold values until most of your resource utilization falls into the Average Medium (blue) threshold range.

  • Resource utilization in the Average Critical Threshold Percent (red) indicates the amount of time average resource utilization met or exceeded the Critical threshold value.

If you double-click a pie chart for a host’s resource usage, XenCenter displays the Host Health History report for that resource on that host. Clicking Back to Parent Report on the toolbar returns you to the Pool Health report.

If you find that most of your report results are not in the Average Medium Threshold range, adjust the Critical threshold for this pool. While Workload Balancing provides default threshold settings, these defaults are not effective in all environments. If you do not have the thresholds adjusted to the correct level for your environment, the Workload Balancing optimization and placement recommendations might not be appropriate. For more information, see Change the critical thresholds.

Pool Health History

This report provides a line graph of resource utilization on all physical hosts in a pool over time. It lets you see the trend of resource utilization—if it tends to be increasing in relation to your thresholds (Critical, High, Medium, and Low). You can evaluate the effectiveness of your performance thresholds by monitoring trends of the data points in this report.

Workload Balancing extrapolates the threshold ranges from the values you set for the Critical thresholds when you connected the pool to Workload Balancing. Although similar to the Pool Health report, the Pool Health History report displays the average utilization for a resource on a specific date, rather than the overall amount of time the resource spent in a threshold range.

Except for the Average Free Memory graph, the data points never average above the Critical threshold line (red). For the Average Free Memory graph, the data points never average below the Critical threshold line (which is at the bottom of the graph). Because this graph displays free memory, the Critical threshold is a low value, unlike the other resources.

A few points about interpreting this report:

  • When the Average Usage line in the chart approaches the Average Medium Threshold (blue) line, it indicates the pool’s resource utilization is optimum. This indication is regardless of the placement strategy configured.

  • Resource utilization approaching the Average Low Threshold (green) is not necessarily positive. Whether Low resource utilization is positive depends on your placement strategy. If your placement strategy is Maximum Density and on most days the Average Usage line is at or below the green line, Workload Balancing might not be placing VMs as densely as possible on that pool. If so, adjust the pool’s Critical threshold values until most of its resource utilization falls into the Average Medium (blue) threshold range.
  • When the Average Usage line intersects with the Average Critical Threshold Percent (red), this indicates the days when the average resource utilization met or exceeded the Critical threshold value for that resource.

If data points in your graphs aren’t in the Average Medium Threshold range, but you are satisfied with performance, you can adjust the Critical threshold for this pool. For more information, see Change the critical thresholds.

Pool Optimization History

The Pool Optimization History report provides chronological visibility into Workload Balancing optimization activity.

Optimization activity is summarized graphically and in a table. Drilling into a date field within the table displays detailed information for each pool optimization performed for that day.

This report lets you see the following information:

  • VM Name. The name of the VM that Workload Balancing optimized.

  • Reason. The reason for the optimization.

  • Method. Whether the optimization was successful.

  • From Host. The physical server where the VM was originally hosted.

  • To Host. The physical server where the VM was migrated.

  • Time. The time when the optimization occurred.

Tip:

You can also generate a Pool Optimization History report from the WLB tab by clicking the View History link.

Virtual Machine Motion History

This line graph displays the number of times VMs migrated on a resource pool over a period. It indicates if a migration resulted from an optimization recommendation and to which host the VM moved. This report also indicates the reason for the optimization. You can use this report to audit the number of migrations on a pool.

Some points about interpreting this report:

  • The numbers on the left side of the chart correspond with the number of migrations possible. This value is based on how many VMs are in a resource pool.

  • You can look at details of the migrations on a specific date by expanding the + sign in the Date section of the report.

Virtual Machine Performance History

This report displays performance data for each VM on a specific host for a time period you specify. Workload Balancing bases the performance data on the amount of virtual resources allocated for the VM. For example, if the Average CPU Usage for your VM is 67%, your VM used, on average, 67% of its vCPU capacity for the specified period.

The initial view of the report displays an average value for resource utilization over the period you specified.

Expanding the + sign displays line graphs for individual resources. You can use these graphs to see trends in resource utilization over time.

This report displays data for CPU Usage, Free Memory, Network Reads/Writes, and Disk Reads/Writes.

Manage Workload Balancing features and settings

This section provides information about how to perform optional changes to Workload Balancing settings, including how to:

  • Adjust the optimization mode

  • Optimize and manage power automatically

  • Change the critical thresholds

  • Tune metric weightings

  • Exclude hosts from recommendations

  • Configure advanced automation settings and data storage

  • Adjust the Pool Audit Trail granularity settings

This section assumes that you already connected your pool to a Workload Balancing virtual appliance. For information about downloading, importing, and configuring a Workload Balancing virtual appliance, see Get started. To connect to the virtual appliance, see Connect to the Workload Balancing Virtual Appliance.

Change Workload Balancing settings

After connecting to the Workload Balancing virtual appliance, you can, if desired, edit the settings Workload Balancing uses to calculate placement and recommendations.

Placement and optimization settings that you can modify include the following:

  • Changing the placement strategy
  • Configuring automatic optimizations and power management
  • Editing performance thresholds and metric weightings
  • Excluding hosts.

Workload Balancing settings apply collectively to all VMs and hosts in the pool.

Provided the network and disk thresholds align with the hardware in your environment, consider using most of the defaults in Workload Balancing initially.

After Workload Balancing is enabled for a while, Citrix recommends evaluating your performance thresholds and determining whether to edit them. For example, consider if you are:

  • Getting recommendations when they are not yet required. If so, try adjusting the thresholds until Workload Balancing begins providing suitable recommendations.

  • Not getting recommendations when you expect to receive them. For example, if your network has insufficient bandwidth and you do not receive recommendations, you might need to tweak your settings. If so, try lowering the network critical thresholds until Workload Balancing begins providing recommendations.

Before you edit your thresholds, you can generate a Pool Health report and the Pool Health History report for each physical host in the pool.

You can use the Workload Balancing Configuration properties in XenCenter to modify the configuration settings.

To update the credentials XenServer and the Workload Balancing server use to communicate, see Edit the Workload Balancing configuration file.

To modify the Workload Balancing settings:

  1. In the Infrastructure pane of XenCenter, select XenCenter > your-pool.

  2. In the Properties pane, click the WLB tab.

  3. On the WLB tab, click Settings.

Adjust the optimization mode

Workload Balancing makes recommendations to rebalance, or optimize, the VM workload in your environment based on a strategy for placement you select. The placement strategy is known as the optimization mode.

Workload Balancing lets you choose from two optimization modes:

  • Maximize Performance. (Default.) Workload Balancing attempts to spread workload evenly across all physical hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for all hosts. When Maximize Performance is your placement strategy, Workload Balancing recommends optimization when a host reaches the High threshold.

  • Maximize Density. Workload Balancing attempts to minimize the number of physical hosts that must be online by consolidating the active VMs.

    When you select Maximize Density as your placement strategy, you can specify parameters similar to the ones in Maximize Performance. However, Workload Balancing uses these parameters to determine how it can pack VMs onto a host. If Maximize Density is your placement strategy, Workload Balancing recommends consolidation optimizations when a VM reaches the Low threshold.

Workload Balancing also lets you apply these optimization modes all of the time (Fixed) or switch between modes for specified time periods (Scheduled):

  • Fixed optimization modes set Workload Balancing to have a specific optimization behavior always. This behavior can be either to try to create the best performance or to create the highest density.

  • Scheduled optimization modes let you schedule for Workload Balancing to apply different optimization modes depending on the time of day. For example, you might want to configure Workload Balancing to optimize for performance during the day when you have users connected. To save energy, you can then specify for Workload Balancing to optimize for Maximum Density at night.

    When you configure Scheduled optimization modes, Workload Balancing automatically changes to the optimization mode at the beginning of the time period you specified. You can configure Everyday, Weekdays, Weekends, or individual days. For the hour, you select a time of day.
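
The optimization mode is normally changed through the XenCenter procedures that follow. If you prefer to inspect the current Workload Balancing configuration from a script, the Management API exposes it as a key-value map. The following sketch assumes the XenAPI Python bindings and placeholder credentials; the key names in the returned map depend on your Workload Balancing version, so review the output before writing anything back with pool.send_wlb_configuration.

    import XenAPI

    # Placeholders: replace the URL and credentials with your own values.
    session = XenAPI.Session("https://pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        # Returns the Workload Balancing configuration for the connected pool as a
        # map of setting name -> value. Key names vary by Workload Balancing version.
        config = session.xenapi.pool.retrieve_wlb_configuration()
        for key, value in sorted(config.items()):
            print(key, "=", value)
        # Modified settings can be written back with:
        #   session.xenapi.pool.send_wlb_configuration(config)
    finally:
        session.xenapi.session.logout()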

To set a fixed optimization mode:

  1. In the Resources pane of XenCenter, select XenCenter > your-pool.

  2. In the Properties pane, click the WLB tab.

  3. On the WLB tab, click Settings.

  4. In the left pane, click Optimization Mode.

  5. In the Fixed section of the Optimization Mode page, select one of these optimization modes:

  • Maximize Performance. (Default.) Attempts to spread workload evenly across all physical hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for all hosts.

  • Maximize Density. Attempts to fit as many VMs as possible onto a physical host. The goal is to minimize the number of physical hosts that must be online. (Workload Balancing considers the performance of consolidated VMs and issues a recommendation to improve performance if a resource on a host reaches a Critical threshold.)

To schedule optimization mode changes:

  1. In the Infrastructure pane of XenCenter, select XenCenter > your-pool.

  2. In the Properties pane, click the WLB tab.

  3. On the WLB tab, click Settings.

  4. In the left pane, click Optimization Mode.

  5. In the Optimization Mode pane, select Scheduled. The Scheduled section becomes available.

  6. Click Add New.

  7. In the Change to box, select one of the following modes:

  • Maximize Performance. Attempts to spread workload evenly across all physical hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for all hosts.

  • Maximize Density. Attempts to fit as many VMs as possible onto a physical host. The goal is to minimize the number of physical hosts that must be online.

  8. Select the day of the week and the time when you want Workload Balancing to begin operating in this mode.

  9. Create more scheduled mode changes (that is, “tasks”) until you have the number you need. If you schedule only one task, Workload Balancing switches to that mode as scheduled, but it never switches back.

  10. Click OK.

To delete or pause a scheduled optimization mode task:

  1. Display the Optimization Mode dialog box by following steps 1–4 in the previous procedure.

  2. Select the task you want to delete or disable from the Scheduled Mode Changes list.

  3. Do one of the following:

    • Delete the task permanently. Click the Delete button.

    • Stop the task from running temporarily. Right-click the task and click Disable.

    Tips:

    • You can also disable or enable tasks by selecting the task, clicking Edit, and selecting the Enable Task check box in the Optimization Mode Scheduler dialog.
    • To re-enable a task, right-click the task in the Scheduled Mode Changes list and click Enable.

To edit a scheduled optimization mode task:

  1. Do one of the following:

    • Double-click the task you want to edit.

    • Select the task you want to edit, and click Edit.

  2. In the Change to box, select a different mode or make other changes as desired.

Note:

Clicking Cancel, before clicking OK, undoes any changes you made in the Optimization tab, including deleting a task.

Optimize and manage power automatically

You can configure Workload Balancing to apply recommendations automatically (Automation) and turn hosts on or off automatically. To power down hosts automatically (for example, during low-usage periods), you must configure Workload Balancing to apply recommendations automatically and enable power management. Both power management and automation are described in the sections that follow.

Apply recommendations automatically

You can configure Workload Balancing to apply recommendations on your behalf and perform the optimization actions it recommends automatically. You can use this feature, which is known as Automatic optimization acceptance, to apply any recommendations automatically, including ones to improve performance or power down hosts. However, to power down hosts as VM usage drops, you must configure automation, power management, and Maximum Density mode.

By default, Workload Balancing does not apply recommendations automatically. If you want Workload Balancing to apply recommendations automatically, enable Automation. If you do not, you must apply recommendations manually by clicking Apply Recommendations.

Workload Balancing does not automatically apply recommendations to hosts or VMs when the recommendations conflict with High Availability settings. If a pool becomes overcommitted by applying Workload Balancing optimization recommendations, XenCenter prompts you whether or not you want to continue applying the recommendation. When Automation is enabled, Workload Balancing does not apply any power-management recommendations that exceed the number of host failures to tolerate in the High Availability plan.

When Workload Balancing is running with the Automation feature enabled, this behavior is sometimes called running in automated mode.

It is possible to tune how Workload Balancing applies recommendations in automated mode. For information, see Set conservative or aggressive automated recommendations.

Enable Workload Balancing power management

The term power management means the ability to turn the power on or off for physical hosts. In a Workload Balancing context, this term means powering hosts in a pool on or off based on the pool’s total workload.

Configuring Workload Balancing power management on a host requires that:

  • The hardware for the host has remote power on/off capabilities

  • The Host Power On feature is configured for the host

  • The host has been explicitly selected as a host to participate in (Workload Balancing) Power Management

In addition, if you want Workload Balancing to power off hosts automatically, configure Workload Balancing to do the following actions:

  • Apply recommendations automatically

  • Apply Power Management recommendations automatically

If WLB detects unused resources in a pool in Maximum Density mode, it recommends powering off hosts until it eliminates all excess capacity. If there isn’t enough capacity in the pool to shut down hosts, WLB recommends leaving the hosts on until the pool workload decreases enough. When you configure Workload Balancing to power off extra hosts automatically, it applies these recommendations automatically and therefore behaves in the same way.

When a host is set to participate in Power Management, Workload Balancing makes power-on and power-off recommendations as needed. If you run in Maximum Performance mode:

  • If you configure WLB to power on hosts automatically, WLB powers on hosts when resource utilization on a host exceeds the High threshold.
  • Workload Balancing never powers off hosts after it has powered them on.

If you turn on the option to apply Power Management recommendations automatically, you do so at the pool level. However, you can specify which hosts from the pool you want to participate in Power Management.

Understand power management behavior

Before Workload Balancing recommends powering hosts on or off, it selects the hosts to transfer VMs to (that is, to “fill”). It does so in the following order:

  1. Filling the pool master since it is the host that cannot be powered off.

  2. Filling the host with the most VMs.

  3. Filling subsequent hosts according to which hosts have the most VMs running.

When Workload Balancing fills the pool master, it does so assuming artificially low (internal) thresholds for the master. Workload Balancing uses these low thresholds as a buffer to prevent the pool master from being overloaded.

Workload Balancing fills hosts in this order to encourage density.

 When consolidating VMs on hosts in Maximum Density mode, XenServer fills the pool master first, the most loaded host second, and the least loaded host third.
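
The fill order described above can be expressed as a simple sort. The following sketch is purely illustrative; the host names and VM counts are placeholders.

    # Illustrative only: the fill order described above - pool master first, then
    # hosts with the most running VMs. Host names and VM counts are placeholders.
    hosts = [
        {"name": "host16", "is_pool_master": False, "running_vms": 3},
        {"name": "host17", "is_pool_master": True,  "running_vms": 1},
        {"name": "host18", "is_pool_master": False, "running_vms": 5},
    ]

    fill_order = sorted(hosts, key=lambda h: (not h["is_pool_master"], -h["running_vms"]))
    print([h["name"] for h in fill_order])   # ['host17', 'host18', 'host16']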

When WLB detects a performance issue while the pool is in Maximum Density mode, it recommends migrating workloads among the powered-on hosts. If Workload Balancing cannot resolve the issue using this method, it attempts to power on a host. (Workload Balancing determines which hosts to power on by applying the same criteria it would if the optimization mode was set to Maximum Performance.)

When WLB runs in Maximum Performance mode, WLB recommends powering on hosts until the resource utilization on all pool members falls below the High threshold.

If, while migrating VMs, Workload Balancing determines that increasing capacity benefits the pool’s overall performance, it powers on hosts automatically or recommends doing so.

Important:

Workload Balancing only recommends powering on a host that Workload Balancing powered off.

Design environments for power management and VM consolidation

When you are planning a XenServer implementation and you intend to configure automatic VM consolidation and power management, consider your workload design. For example, you may want to:

  • Place Different Types of Workloads in Separate Pools. If you have an environment with distinct types of workloads, consider whether to locate the VMs hosting these workloads in different pools. Also consider splitting VMs that host types of applications that perform better with certain types of hardware into different pools.

    Because power management and VM consolidation are managed at the pool level, design pools so they contain workloads that you want consolidated at the same rate. Ensure that you factor in considerations such as those discussed in Control automated recommendations.

  • Exclude Hosts from Workload Balancing. Some hosts might need to be always on. For more information, see Exclude hosts from recommendations.

To apply optimization recommendations automatically
  1. In the Infrastructure pane of XenCenter, select XenCenter > your-pool.

  2. In the Properties pane, click the WLB tab.

  3. In the WLB tab, click Settings.

  4. In the left pane, click Automation.

  5. Select one or more of the following check boxes:

    • Automatically apply Optimization recommendations. When you select this option, you do not need to accept optimization recommendations manually. Workload Balancing automatically accepts optimization and placement recommendations it makes.

    • Automatically apply Power Management recommendations. The behavior of this option varies according to the pool’s optimization mode:

      • Maximum Performance Mode. When Automatically apply Power Management recommendations is enabled, Workload Balancing automatically powers on hosts when doing so improves host performance.

      • Maximum Density Mode. When Automatically apply Power Management recommendations is enabled, Workload Balancing automatically powers off hosts when resource utilization drops below the Low threshold. That is, Workload Balancing powers off hosts automatically during low usage periods.

  6. (Optional.) Fine-tune optimization recommendations by clicking Advanced in the left pane of the Settings dialog and doing one or more of the following:

    • Specifying the number of times Workload Balancing must make an optimization recommendation before the recommendation is applied automatically. The default is three times, which means the recommendation is applied on the third time it is made.

    • Selecting the lowest level of optimization recommendation that you want Workload Balancing to apply automatically. The default is High.

    • Changing the aggressiveness with which Workload Balancing applies its optimization recommendations.

      You may also want to specify the number of minutes Workload Balancing has to wait before applying an optimization recommendation to a recently moved VM.

      All of these settings are explained in more depth in Set conservative or aggressive automated recommendations.

  7. Do one of the following:

    • If you want to configure power management, click Automation/Power Management and proceed to To select hosts for power management.

    • If you do not want to configure power management and you are finished configuring automation, click OK.

To select hosts for power management
  1. In the Power Management section, select the hosts that you want Workload Balancing to recommend powering on and off.

    Note:

    Selecting hosts for power management recommendations without selecting Automatically apply Power Management recommendations causes WLB to suggest power-management recommendations but not apply them automatically for you.

  2. Click OK. If none of the hosts in the resource pool support remote power management, Workload Balancing displays the message, “No hosts support Power Management.”

Understand when Workload Balancing makes recommendations

Workload Balancing continuously evaluates the resource metrics of physical hosts and VMs across the pools it manages against thresholds. Thresholds are preset values that function like boundaries that a host must exceed before Workload Balancing can make an optimization recommendation. At a very high level, the Workload Balancing process is as follows:

  1. Workload Balancing detects that the threshold for a resource was violated.

  2. Workload Balancing evaluates if it makes an optimization recommendation.

  3. Workload Balancing determines which hosts it recommends function as the destination hosts. A destination host is the host where Workload Balancing recommends relocating one or more VMs.

  4. Workload Balancing makes the recommendation.

After WLB determines that a host can benefit from optimization, before it makes the recommendation, it evaluates other hosts on the pool to decide the following:

  1. The order to make the optimization (what hosts, what VMs)
  2. Where to recommend placing a VM when it does make a recommendation

To accomplish these two tasks, Workload Balancing uses thresholds and weightings as follows:

  • Thresholds are the boundary values that Workload Balancing compares your pool’s resource metrics against. The thresholds are used to determine whether to make a recommendation and which hosts are suitable candidates for hosting relocated VMs.

  • Weightings, a way of ranking resources according to how much you want them to be considered, are used to determine the processing order. After Workload Balancing decides to make a recommendation, it uses your specifications of which resources are important to determine the following:

    • Which hosts’ performance to address first
    • Which VMs to recommend migrating first

For each resource Workload Balancing monitors, it has four levels of thresholds (Critical, High, Medium, and Low), which are discussed in the sections that follow. Workload Balancing evaluates whether to make a recommendation when a resource metric on a host:

  • Exceeds the High threshold when the pool is running in Maximum Performance mode (improve performance)

  • Drops below the Low threshold when the pool is running in Maximum Density mode (consolidate VMs on hosts)

  • Exceeds the Critical threshold when the pool is running in Maximum Density mode (improve performance)

If the High threshold for a pool running in Maximum Performance mode is 80%, when CPU utilization on a host reaches 80.1%, WLB evaluates whether to issue a recommendation.

When a resource violates its threshold, WLB evaluates the resource’s metric against historical performance to prevent making an optimization recommendation based on a temporary spike. To do so, Workload Balancing creates a historically averaged utilization metric by evaluating the data for resource utilization captured at the following times:

Data captured Weight
Immediately, at the time threshold was exceeded (that is, real-time data) 70%
30 minutes before the threshold was exceeded 25%
24 hours before the threshold was exceeded 5%

If CPU utilization on the host exceeds the threshold at 12:02 PM, WLB checks the utilization at 11:32 AM that day and at 12:02 PM on the previous day. For example, if CPU utilization is at the following values, WLB doesn’t make a recommendation:

  • 80.1% at 12:02 PM that day
  • 50% at 11:32 AM that day
  • 78% at 12:02 PM the previous day

This behavior is because the historically averaged utilization is 72.47%, so Workload Balancing assumes that the utilization is a temporary spike. However, if the CPU utilization at 11:32 AM had been 82%, Workload Balancing makes a recommendation because the historically averaged utilization is then 80.47%, which exceeds the 80% threshold.
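
The arithmetic behind this example can be sketched with a quick command. This sketch is purely illustrative (it is not part of the Workload Balancing appliance) and uses the weighting factors from the preceding table:

    # Weighted average: 70% real-time, 25% from 30 minutes earlier, 5% from 24 hours earlier
    awk 'BEGIN { now = 80.1; thirty_min = 50; day_ago = 78;
                 printf "Averaged utilization: %.2f%%\n", 0.70*now + 0.25*thirty_min + 0.05*day_ago }'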

Optimization and consolidation process

The Workload Balancing process for determining potential optimizations varies according to the optimization mode (Maximum Performance or Maximum Density). However, regardless of the optimization mode, optimization and placement recommendations are made using a two-stage process:

  1. Determine potential optimizations. (That is, what VMs to migrate off hosts.)

  2. Determine placement recommendations. (That is, what hosts would be suitable candidates for new hosts.)

Note:

Workload Balancing only recommends migrating VMs that meet the XenServer criteria for live migration. These criteria include the requirement that the destination host has the storage the VM requires. The destination host must also have sufficient resources to accommodate adding the VM without exceeding the thresholds of the optimization mode configured on the pool: for example, the High threshold in Maximum Performance mode and the Critical threshold in Maximum Density mode.

When Workload Balancing is running in automated mode, you can tune the way it applies recommendations. For more information, see Set conservative or aggressive automated recommendations.

Optimization recommendation process in Maximum Performance mode

When running in Maximum Performance mode, Workload Balancing uses the following process to determine potential optimizations:

  1. Every two minutes, Workload Balancing evaluates the resource utilization for each host in the pool. It does so by monitoring each host and determining whether each resource’s utilization exceeds its High threshold. See Change the critical thresholds for more information about the High threshold.

    If, in Maximum Performance mode, a resource’s utilization exceeds its High threshold, WLB starts the process to determine whether to make an optimization recommendation. Workload Balancing determines whether to make an optimization recommendation based on whether doing so can ease performance constraints, such as ones revealed by the High threshold.

    For example, consider the case where Workload Balancing sees that insufficient CPU resources negatively affect the performance of the VMs on Host A. If Workload Balancing can find another host with less CPU utilization, it recommends moving one or more VMs to another host.

  2. If a resource’s utilization on a host exceeds the relevant threshold, Workload Balancing combines the following data to form the historically averaged utilization:
    • The resource’s current utilization
    • Historical data from 30 minutes ago
    • Historical data from 24 hours ago

    If the historically averaged utilization exceeds the resource’s threshold, Workload Balancing determines that it must make an optimization recommendation.
  3. Workload Balancing uses metric weightings to determine what hosts to optimize first. The resource to which you have assigned the most weight is the one that Workload Balancing attempts to address first. See Tune metric weightings for information about metric weightings.

  4. Workload Balancing determines which hosts can support the VMs it wants to migrate off hosts.

    Workload Balancing makes this determination by calculating the projected effect on resource utilization of placing different combinations of VMs on hosts. (Workload Balancing uses a method of performing these calculations that in mathematics is known as permutation.)

    To do so, Workload Balancing creates a single metric or score to forecast the impact of migrating a VM to the host. The score indicates the suitability of a host as a home for more VMs.

    To score the host’s performance, Workload Balancing combines the following metrics:

    • The host’s current metrics
    • The host’s metrics from the last 30 minutes
    • The host’s metrics from 24 hours ago
    • The VM’s metrics.
  5. After scoring hosts and VMs, WLB attempts to build virtual models of what the hosts look like with different combinations of VMs. WLB uses these models to determine the best host to place the VM.

    In Maximum Performance mode, Workload Balancing uses metric weightings to determine what hosts to optimize first and what VMs on those hosts to migrate first. Workload Balancing bases its models on the metric weightings. For example, if CPU utilization is assigned the highest importance, Workload Balancing sorts hosts and VMs to optimize according to the following criteria:

    • First, which hosts CPU utilization affects the most (that is, which hosts are running closest to the High threshold for CPU utilization)
    • Which VMs have the highest CPU utilization (or are running closest to their High threshold)
  6. Workload Balancing continues calculating optimizations. It views hosts as candidates for optimization and VMs as candidates for migration until predicted resource utilization on the VM’s host drops below the High threshold. Predicted resource utilization is the resource utilization that Workload Balancing forecasts a host to have after a VM is added to or removed from that host.

Consolidation process in Maximum Density mode

WLB determines whether to make a recommendation based on whether it can migrate a VM onto a host and still run that host below the Critical threshold.

  1. When a resource’s utilization drops below its Low threshold, Workload Balancing begins calculating potential consolidation scenarios.

  2. When WLB discovers a way that it can consolidate VMs on a host, it evaluates whether the destination host is a suitable home for the VM.

  3. Like in Maximum Performance mode, Workload Balancing scores the host to determine the suitability of a host as a home for new VMs.

    Before WLB makes recommendations to consolidate VMs on fewer hosts, it checks that resource utilization on those hosts after VMs are relocated to them is below Critical thresholds.

    Note:

    Workload Balancing does not consider metric weightings when it makes a consolidation recommendation. It only considers metric weightings to ensure performance on hosts.

  4. After scoring hosts and VMs, WLB attempts to build virtual models of what the hosts look like with different combinations of VMs. It uses these models to determine the best host to place the VM.

  5. WLB calculates the effect of adding VMs to a host until it forecasts that adding another VM causes a host resource to exceed the Critical threshold.

  6. Workload Balancing recommendations always suggest filling the pool master first since it is the host that cannot be powered off. However, Workload Balancing applies a buffer to the pool master so that it cannot be over-allocated.

  7. WLB continues to recommend migrating VMs onto hosts until no host remains that can accept another VM without exceeding a Critical threshold.

Change the critical thresholds

You might want to change critical thresholds as a way of controlling when optimization recommendations are triggered. This section provides guidance about:

  • How to modify the default Critical thresholds on hosts in the pool
  • How the values set for the Critical threshold alter the High, Medium, and Low thresholds.

Workload Balancing determines whether to produce recommendations based on whether the averaged historical utilization for a resource on a host violates its threshold. Workload Balancing recommendations are triggered when the High threshold in Maximum Performance mode or Low and Critical thresholds for Maximum Density mode are violated. For more information, see Optimization and consolidation process. After you specify a new Critical threshold for a resource, Workload Balancing resets the resource’s other thresholds relative to the new Critical threshold. (To simplify the user interface, the Critical threshold is the only threshold you can change through XenCenter.)

The following table shows the default values for the Workload Balancing thresholds:

Metric Critical High Medium Low
CPU Utilization 90% 76.5% 45% 22.5%
Free Memory 51 MB 63.75 MB 510 MB 1020 MB
Network Reads 25 MB/sec 21.25 MB/sec 12.5 MB/sec 6.25 MB/sec
Network Writes 25 MB/sec 21.25 MB/sec 12.5 MB/sec 6.25 MB/sec
Disk Reads 25 MB/sec 21.25 MB/sec 12.5 MB/sec 6.25 MB/sec
Disk Writes 25 MB/sec 21.25 MB/sec 12.5 MB/sec 6.25 MB/sec

To calculate the values for all metrics except memory, Workload Balancing multiplies the new value for the Critical threshold by the following factors:

  • High Threshold Factor: 0.85

  • Medium Threshold Factor: 0.50

  • Low Threshold Factor: 0.25

For example, if you increase the Critical threshold for CPU Utilization to 95%, WLB resets the High, Medium, and Low thresholds to 80.75%, 47.5%, and 23.75%.

To calculate the threshold values for free memory, Workload Balancing multiplies the Critical threshold by these factors:

  • High Threshold Factor: 1.25

  • Medium Threshold Factor: 10.0

  • Low Threshold Factor: 20.0

To perform this calculation for a specific threshold, multiply the factor for the threshold by the value you entered for the Critical threshold for that resource:

High, Medium, or Low Threshold = Critical Threshold * Threshold Factor

For example, if you change the Critical threshold for Network Reads to 40 MB/sec, its Low threshold is 40 multiplied by 0.25, which equals 10 MB/sec. To obtain the Medium threshold, multiply 40 by 0.50, and so on.
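
As an illustration only (this command is not part of Workload Balancing), you can check the same derivation for a non-memory metric from a shell using the factors listed above:

    # Derive High, Medium, and Low thresholds from a new Critical value for a non-memory metric
    critical=40   # example: new Critical threshold for Network Reads, in MB/sec
    awk -v c="$critical" 'BEGIN { printf "High=%.2f  Medium=%.2f  Low=%.2f MB/sec\n", c*0.85, c*0.50, c*0.25 }'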

While the Critical threshold triggers many optimization recommendations, other thresholds can also trigger optimization recommendations, as follows:

  • High threshold.

    • Maximum Performance. Exceeding the High threshold triggers optimization recommendations to relocate a VM to a host with lower resource utilization.

    • Maximum Density. Workload Balancing doesn’t recommend placing a VM on a host when moving that VM to the host causes the host’s resource utilization to exceed a High threshold.

  • Low threshold.

    • Maximum Performance. Workload Balancing does not trigger recommendations from the Low threshold.

    • Maximum Density. When a metric value drops below the Low threshold, WLB determines that hosts are underutilized and makes an optimization recommendation to consolidate VMs on fewer hosts. Workload Balancing continues to recommend moving VMs onto a host until the metric values for one of the host’s resources reaches its High threshold.

      However, after a VM is relocated, utilization of a resource on the VM’s new host can exceed a Critical threshold. In this case, WLB temporarily uses an algorithm similar to the Maximum Performance load-balancing algorithm to find a new host for the VMs. Workload Balancing continues to use this algorithm to recommend moving VMs until resource utilization on hosts across the pool falls below the High threshold.

To change the critical thresholds
  1. In the Infrastructure pane of XenCenter, select XenCenter > your-resource-pool.

  2. In the Properties pane, click the WLB tab.

  3. In the WLB tab, click Settings.

  4. In the left pane, select Critical Thresholds. These critical thresholds are used to evaluate host resource utilization.

  5. On the Critical Thresholds page, type one or more new values in the Critical Thresholds boxes. The values represent resource utilization on the host.

    Workload Balancing uses these thresholds when making VM placement and pool-optimization recommendations. Workload Balancing strives to keep resource utilization on a host below the critical values set.

Tune metric weightings

How Workload Balancing uses metric weightings when determining which hosts and VMs to process first varies according to the optimization mode: Maximum Density or Maximum Performance.

When Workload Balancing is processing optimization recommendations, it creates an optimization order. To determine this order, Workload Balancing ranks the hosts, addressing first the hosts that have the highest metric values for whichever resource is ranked as most important on the Metric Weighting page.

In general, metric weightings are used when a pool is in Maximum Performance mode. However, when Workload Balancing is in Maximum Density mode, it does use metric weightings when a resource exceeds its Critical threshold.

Maximum Performance mode

In Maximum Performance mode, Workload Balancing uses metric weightings to determine (a) which hosts’ performance to address first and (b) which VMs to recommend migrating first.

For example, if Network Writes is the most important resource for WLB, WLB first makes optimization recommendations for the host with the highest number of Network Writes per second. To make Network Writes the most important resource, move its Metric Weighting slider to the right and all the other sliders to the middle.

If you configure all resources to be equally important, Workload Balancing addresses CPU utilization first and memory second, as these resources are typically the most constrained. To make all resources equally important, set the Metric Weighting slider to the same place for all resources.

Maximum Density mode

In Maximum Density mode, Workload Balancing only uses metric weightings when a host reaches the Critical threshold. At that point, Workload Balancing applies an algorithm similar to that for Maximum Performance until no hosts exceed the Critical thresholds. When using this algorithm, Workload Balancing uses metric weightings to determine the optimization order in the same way as it does for Maximum Performance mode.

If two or more hosts have resources exceeding their Critical thresholds, Workload Balancing verifies the importance you set for each resource. It uses this importance to determine which host to optimize first and which VMs on that host to relocate first.

For example, your pool contains Host A and Host B, which are in the following state:

  • The CPU utilization on Host A exceeds its Critical threshold and the metric weighting for CPU utilization is set to the far right: More Important.

  • The memory utilization on Host B exceeds its Critical threshold and the metric weighting for memory utilization is set to the far left: Less Important.

Workload Balancing recommends optimizing Host A first because the resource on it that reached the Critical threshold is the resource assigned the highest weight. After Workload Balancing determines that it must address the performance on Host A, Workload Balancing then begins recommending placements for VMs on that host. It begins with the VM that has the highest CPU utilization, since that CPU utilization is the resource with the highest weight.

After Workload Balancing has recommended optimizing Host A, it makes optimization recommendations for Host B. When it recommends placements for the VMs on Host B, it does so by addressing CPU utilization first, since CPU utilization was assigned the highest weight.

If more hosts need optimization, Workload Balancing addresses their performance according to which host has the next highest CPU utilization.

By default, all metric weightings are set to the farthest point on the slider (More Important).

Note:

The weighting of metrics is relative. If all of the metrics are set to the same level, even if that level is Less Important, they are all weighted the same. The relation of the metrics to each other is more important than the actual weight at which you set each metric.

To edit metric weighting factors
  1. In the Infrastructure pane of XenCenter, select XenCenter > your-resource-pool.

  2. Click the WLB tab, and then click Settings.

  3. In the left pane, select Metric Weighting.

  4. On the Metric Weighting page, adjust the sliders beside the individual resources as desired.

    Move the slider towards Less Important to indicate that ensuring VMs always have the highest available amount of this resource is not as vital for this pool.

Excluding hosts from recommendations

When configuring Workload Balancing, you can exclude specific physical hosts from Workload Balancing optimization and placement recommendations, including Start On placement recommendations.

Situations when you might want to exclude hosts from recommendations include when:

  • You want to run the pool in Maximum Density mode and consolidate and shut down hosts, but you want to exclude specific hosts from this behavior.

  • You have two VM workloads that must always run on the same host. For example, if the VMs have complementary applications or workloads.

  • You have workloads that you do not want moved (for example, a domain controller or database server).

  • You want to perform maintenance on a host and you do not want VMs placed on the host.

  • The performance of the workload is so critical that the cost of dedicated hardware is irrelevant.

  • Specific hosts are running high-priority workloads (VMs), and you do not want to use the High Availability feature to prioritize these VMs.

  • The hardware in the host is not the optimum for the other workloads in the pool.

Regardless of whether you specify a fixed or scheduled optimization mode, excluded hosts remain excluded even when the optimization mode changes. Therefore, if you only want to prevent Workload Balancing from shutting off a host automatically, consider not enabling (or deselecting) Power Management for that host instead. For more information, see Optimize and manage power automatically.

When you exclude a host from recommendations, you are specifying for Workload Balancing not to manage that host at all. This configuration means that Workload Balancing doesn’t make any optimization recommendations for an excluded host. In contrast, when you don’t select a host to participate in Power Management, WLB still manages the host, but doesn’t make power management recommendations for it.

To exclude hosts from Workload Balancing

Use this procedure to exclude a host in a pool that Workload Balancing is managing from power management, host evacuation, placement, and optimization recommendations.

  1. In the Resources pane of XenCenter, select XenCenter > your-resource-pool.

  2. In the Properties pane, click the WLB tab.

  3. In the WLB tab, click Settings.

  4. In the left pane, select Excluded Hosts.

  5. On the Excluded Hosts page, select the hosts for which you do not want WLB to recommend alternate placements and optimizations.

Control automated recommendations

Workload Balancing supplies some advanced settings that let you control how Workload Balancing applies automated recommendations. These settings appear on the Advanced page of the Workload Balancing Configuration dialog.

To display these settings:

  1. In the Resources pane of XenCenter, select XenCenter > your-resource-pool.

  2. In the Properties pane, click the WLB tab.

  3. In the WLB tab, click Settings.

  4. In the left pane, select Advanced.

Set conservative or aggressive automated recommendations

When running in automated mode, the frequency of optimization and consolidation recommendations and how soon they are automatically applied is a product of multiple factors, including:

  • How long you specify Workload Balancing waits after moving a VM before making another recommendation

  • The number of recommendations Workload Balancing must make before applying a recommendation automatically

  • The severity level a recommendation must achieve before the optimization is applied automatically

  • The level of consistency in recommendations (recommended VMs to move, destination hosts) Workload Balancing requires before applying recommendations automatically

Important:

In general, only adjust these settings in the following cases:

  • You have guidance from Citrix Technical Support
  • You have made significant observations and tests of your pool’s behavior with Workload Balancing enabled

Incorrectly configuring these settings can result in Workload Balancing not making any recommendations.

VM migration interval

You can specify the number of minutes WLB waits after the last time a VM was moved, before WLB can make another recommendation for that VM.

The recommendation interval is designed to prevent Workload Balancing from generating recommendations for artificial reasons (for example, if there was a temporary utilization spike).

When Automation is configured, it is especially important to be careful when modifying the recommendation interval. If an issue occurs that leads to continuous, recurring spikes, decreasing the interval (that is, setting a lower number of minutes) can generate many recommendations and, therefore, many relocations.

Note:

Setting a recommendation interval does not affect how long Workload Balancing waits to factor recently rebalanced hosts into recommendations for Start-On Placement, Resume, and Maintenance Mode.

Recommendation count

Every two minutes, Workload Balancing checks to see if it can generate recommendations for the pool it is monitoring. When you enable Automation, you can specify the number of times a consistent recommendation must be made before Workload Balancing automatically applies the recommendation. To do so, you configure a setting known as the Recommendation Count. The Recommendation Count and the Optimization Aggressiveness setting let you fine-tune the automated application of recommendations in your environment.

As described in the aggressiveness section, Workload Balancing uses the similarity of recommendations to make the following checks:

  1. Whether the recommendation is truly needed
  2. Whether the destination host has stable enough performance over a prolonged period to accept a relocated VM (without needing to move it off the host again shortly)

Workload Balancing uses the Recommendation Count value to determine how many times a recommendation must be repeated before Workload Balancing automatically applies the recommendation.

Workload Balancing uses this setting as follows:

  1. Every time Workload Balancing generates a recommendation that meets its consistency requirements, as indicated by the Optimization Aggressiveness setting, Workload Balancing increments the Recommendation Count. If the recommendation does not meet the consistency requirements, Workload Balancing may reset the Recommendation Count to zero, depending on the factors described in Optimization aggressiveness.

  2. When WLB generates enough consistent recommendations to meet the value for the Recommendation Count, as specified in the Recommendations text box, it automatically applies the recommendation.

If you choose to modify this setting, the value to set varies according to your environment. Consider these scenarios:

  • If host loads and activity increase rapidly in your environment, you may want to increase the value for the Recommendation Count. Workload Balancing generates recommendations every two minutes. For example, if you set the Recommendation Count to 3, then six minutes later Workload Balancing applies the recommendation automatically.

  • If host loads and activity increase gradually in your environment, you may want to decrease the value for the Recommendation Count.

Accepting recommendations uses system resources and affects performance when Workload Balancing is relocating the VMs. Increasing the Recommendation Count increases the number of matching recommendations that must occur before Workload Balancing applies the recommendation. This setting encourages Workload Balancing to apply more conservative, stable recommendations and can decrease the potential for spurious VM moves. The Recommendation Count is set to a conservative value by default.

Because of the potential impact adjusting this setting can have on your environment, only change it with extreme caution. Preferably, make these adjustments by testing and iteratively changing the value or under the guidance of Citrix Technical Support.

Recommendation severity

All optimization recommendations include a severity rating (Critical, High, Medium, Low) that indicates the importance of the recommendation. Workload Balancing bases this rating on a combination of factors including the following:

  • Configuration options you set, such as thresholds and metric tunings
  • Resources available for the workload
  • Resource-usage history.

The severity rating for a recommendation appears in the Optimization Recommendations pane on the WLB tab.

When you configure WLB to apply recommendations automatically, you can set the minimum severity level to associate with a recommendation before Workload Balancing automatically applies it.

Optimization aggressiveness

To provide extra assurance when running in automated mode, Workload Balancing has consistency criteria for accepting optimizations automatically. This can help to prevent moving VMs due to spikes and anomalies. In automated mode, Workload Balancing does not accept the first recommendation it produces. Instead, Workload Balancing waits to apply a recommendation automatically until a host or VM exhibits consistent behavior over time. Consistent behavior over time includes factors like whether a host continues to trigger recommendations and whether the same VMs on that host continue to trigger recommendations.

Workload Balancing determines if behavior is consistent by using criteria for consistency and by having criteria for the number of times the same recommendation is made. You can configure how strictly you want Workload Balancing to apply the consistency criteria using the Optimization Aggressiveness setting.

Citrix primarily designed the Optimization Aggressiveness setting for demonstration purposes. However, you can use this setting to control the amount of stability you want in your environment before Workload Balancing applies an optimization recommendation. The most stable setting (Low aggressiveness) is configured by default. In this context, the term stable means the similarity of the recommended changes over time, as explained throughout this section. Aggressiveness is not desirable in most environments. Therefore, Low is the default setting.

Workload Balancing uses up to four criteria to ascertain consistency. The number of criteria that must be met varies according to the level you set in the Optimization Aggressiveness setting. The lower the level (for example, Low or Medium), the less aggressive Workload Balancing is in accepting a recommendation. In other words, Workload Balancing is stricter about requiring the consistency criteria to match when aggressiveness is set to Low.

For example, if the aggressiveness level is set to Low, each criterion for Low must be met the number of times specified by the Recommendation Count value before automatically applying the recommendation.

If you set the Recommendation Count to 3, Workload Balancing waits until all the criteria listed for Low are met and repeated in three consecutive recommendations. This setting helps ensure that the VM actually needs to be moved and that the recommended destination host has stable resource utilization over a longer period. It reduces the potential for a recently moved VM to be moved off a host due to host performance changes after the move. By default, this setting is set to a conservative setting (Low) to encourage stability.

Citrix does not recommend increasing the Optimization Aggressiveness setting to increase the frequency with which your hosts are being optimized. If you think that your hosts aren’t being optimized quickly or frequently enough, try adjusting the Critical thresholds. Compare the thresholds against the Pool Health report.

The consistency criteria associated with the different levels of aggressiveness are as follows:

Low:

  • All VMs in subsequent recommendations must be the same (as demonstrated by matching UUIDs in each recommendation).

  • All destination hosts must be the same in subsequent recommendations

  • The recommendation that immediately follows the initial recommendation must match or else the Recommendation Count reverts to 1

Medium:

  • All VMs in subsequent recommendations must be from the same host; however, they can be different VMs from the ones in the first recommendation.

  • All destination hosts must be the same in subsequent recommendations

  • One of the next two recommendations that immediately follow the first recommendation must match, or else the Recommendation Count reverts to 1

High:

  • All VMs in the recommendations must be from the same host. However, the recommendations do not have to follow each other immediately.

  • The host from which Workload Balancing recommended that the VM move must be the same in each recommendation

  • The Recommendation Count does not revert to 1 when the two recommendations that follow the first recommendation do not match

Example

The following example illustrates how Workload Balancing uses the Optimization Aggressiveness setting and the Recommendation Count to determine whether or not to accept a recommendation automatically.

The first column represents the recommendation number. The “Placement Recommendations” list for each recommendation shows the placements (moves) that Workload Balancing proposed when it issued the optimization recommendation: each recommendation proposes three VM placements. The counts that follow show the effect of the Optimization Aggressiveness setting: each count is the number of consecutive consistent recommendations at that Optimization Aggressiveness setting. For example, the count of 1 at the Medium setting for Recommendation #2 indicates that the recommendation was not consistent enough at that setting, so the counter was reset to 1.

In the following example, when the Optimization Aggressiveness setting is High, the Recommendation Count continues to increase through Recommendations #1, #2, and #3. This increase happens even though the same VMs are not recommended for new placements in each recommendation. Workload Balancing applies the placement recommendation with Recommendation #3 because it has seen the same behavior from that host in three consecutive recommendations.

In contrast, when set to Low aggressiveness, the consecutive recommendations count does not increase for the first four recommendations. The Recommendation Count resets to 1 with each recommendation because the same VMs were not recommended for placements. The Recommendation Count does not start to increase until the same recommendation is made in Recommendation #5. Finally, Workload Balancing automatically applies the recommendation made in Recommendation #6 after the third time it issues the same placement recommendations.

Recommendation #1:

  • Move VM1 from Host A to Host B
  • Move VM3 from Host A to Host B
  • Move VM5 from Host A to Host C

High Aggressiveness Recommendation Count: 1

Medium Aggressiveness Recommendation Count: 1

Low Aggressiveness Recommendation Count: 1

Recommendation #2:

  • Move VM1 from Host A to Host B
  • Move VM3 from Host A to Host C
  • Move VM7 from Host A to Host C

High Aggressiveness Recommendation Count: 2

Medium Aggressiveness Recommendation Count: 1

Low Aggressiveness Recommendation Count: 1

Recommendation #3:

  • Move VM1 from Host A to Host B
  • Move VM3 from Host A to Host C
  • Move VM5 from Host A to Host C

High Aggressiveness Recommendation Count: 3 (Apply)

Medium Aggressiveness Recommendation Count: 1

Low Aggressiveness Recommendation Count: 1

Recommendation #4:

  • Move VM1 from Host A to Host B
  • Move VM3 from Host A to Host B
  • Move VM5 from Host A to Host C

Medium Aggressiveness Recommendation Count: 2

Low Aggressiveness Recommendation Count: 1

Recommendation #5:

  • Move VM1 from Host A to Host B
  • Move VM3 from Host A to Host B
  • Move VM5 from Host A to Host C

Medium Aggressiveness Recommendation Count: 3 (Apply)

Low Aggressiveness Recommendation Count: 2

Recommendation #6:

  • Move VM1 from Host A to Host B
  • Move VM3 from Host A to Host B
  • Move VM5 from Host A to Host C

Low Aggressiveness Recommendation Count: 3 (Apply)

To configure VM recommendation intervals

  1. In the Resources pane of XenCenter, select XenCenter > your-pool.

  2. In the Properties pane, click the WLB tab.

  3. In the WLB tab, click Settings.

  4. In the left pane, click Advanced.

  5. In the VM Recommendation Interval section, do one or more of the following:

    • In the Minutes box, type a value for the number of minutes Workload Balancing waits before making another optimization recommendation on a newly rebalanced host.

    • In the Recommendations box, type a value for the number of recommendations you want Workload Balancing to make before it applies a recommendation automatically.

    • Select a minimum severity level before optimizations are applied automatically.

    • Modify how aggressively Workload Balancing applies optimization recommendations when it is running in automated mode. Increasing the aggressiveness level reduces constraints on the consistency of recommendations before automatically applying them. The Optimization Aggressiveness setting directly complements the Recommendation Count setting (that is, the Recommendations box).

      Note:

      If you type “1” for the value in the Recommendations setting, the Optimization Aggressiveness setting is not relevant.

To modify the Pool Audit Trail granularity settings

Follow this procedure to modify the granularity settings:

  1. Select the pool in the Infrastructure view, click the WLB tab, and then click Settings.

  2. In the left pane, click Advanced.

  3. On the Advanced page, click the Pool Audit Trail Report Granularity list, and select an option from the list.

    Important:

    Select the granularity based on your audit log requirements. For example, if you set your audit log report granularity to Minimum, the report captures only a limited amount of data for specific users and object types. If you set the granularity to Medium, the report provides a user-friendly report of the audit log. If you set the granularity to Maximum, the report contains detailed information about the audit log. Setting the audit log report to Maximum can cause the Workload Balancing server to use more disk space and memory.

  4. To confirm your changes, click OK.

To view Pool Audit Trail reports based on objects in XenCenter

Follow this procedure to run and view reports of Pool Audit Trail based on the selected object:

  1. After you have set the Pool Audit Trail Granularity setting, click Reports. The Workload Reports page appears.

  2. Select Pool Audit Trail on the left pane.

  3. You can run and view the reports based on a specific Object by choosing it from the Object list. For example, choose Host from the list to get the reports based on Host alone.

Administer Workload Balancing

This section provides information about the following subjects:

  • How to reconfigure a pool to use a different Workload Balancing virtual appliance

  • How to disconnect a pool from Workload Balancing or temporarily stop Workload Balancing

  • Database grooming

  • How to change configuration options

Note:

Workload Balancing is available for XenServer Enterprise edition customers or those who have access to XenServer through their Citrix Virtual Apps and Desktops entitlement. For more information about XenServer licensing, see Licensing. To upgrade, or to buy a XenServer license, visit the Citrix website.

Administer and Maintain Workload Balancing

After Workload Balancing has been running for a while, there are routine tasks that you might need to perform to keep Workload Balancing running optimally. These tasks may arise as the result of changes to your environment (such as different IP addresses or credentials), hardware upgrades, or routine maintenance.

Some administrative tasks you may want to perform on Workload Balancing include:

  • Connecting or reconnecting a pool to a Workload Balancing virtual appliance

  • Reconfiguring a pool to use another Workload Balancing virtual appliance

  • Renaming the Workload Balancing user account

  • Disconnecting the Workload Balancing virtual appliance from a pool

  • Removing the Workload Balancing virtual appliance

  • Understanding the Role Based Access Control permissions Workload Balancing requires

Workload Balancing lets you fine-tune some aspects of its behavior through a configuration file known as the wlb.conf file.

This section also discusses some database administration tasks for those users interested in extra ways to manage the Workload Balancing database.

Connect to the Workload Balancing Virtual Appliance

After Workload Balancing Configuration, connect the pool that you want Workload Balancing to manage to the Workload Balancing virtual appliance by using either the CLI or XenCenter. Likewise, you might need to reconnect to the same virtual appliance at some point.

To complete the XenCenter procedure that follows, you need the:

  • Host name (or IP address) and port of the Workload Balancing virtual appliance.

  • Credentials for the resource pool you want Workload Balancing to monitor.

  • Credentials for the account you created on the Workload Balancing virtual appliance. This account is often known as the Workload Balancing user account. XenServer uses this account to communicate with Workload Balancing. You created this account on the Workload Balancing virtual appliance during Workload Balancing Configuration.

 This illustration shows: (1) XenServer communicates with Workload Balancing using an account you created during Workload Balancing Configuration. (2) the Workload Balancing virtual appliance authenticates to XenServer using the credentials for the pool.

To specify the Workload Balancing virtual appliance’s host name for use when connecting to the Workload Balancing virtual appliance, first add its host name and IP address to your DNS server.

If you want to configure certificates from a certificate authority, Citrix recommends specifying an FQDN or an IP address that does not expire.

When you first connect to Workload Balancing, it uses the default thresholds and settings for balancing workloads. Automatic features, such as Automated Optimization Mode, Power Management, and Automation, are disabled by default.

Note:

Workload Balancing is available for XenServer Enterprise edition customers or those customers who have access to XenServer through their Citrix Virtual Apps and Desktops entitlement. For more information about XenServer licensing, see Licensing. To upgrade, or to buy a XenServer license, visit the Citrix website.

To connect your pool to the Workload Balancing virtual appliance
  1. In the Resources pane of XenCenter, select XenCenter > your-resource-pool.

  2. In the Properties pane, click the WLB tab.

    The WLB tab displays the Connect button.

    XenCenter GUI with the WLB panel open.

  3. In the WLB tab, click Connect.

    The Connect to WLB Server dialog box appears.

    The Connect to WLB Server wizard.

  4. In the Server Address section, enter the following:

    1. In the Address box, type the IP address or FQDN of the Workload Balancing virtual appliance (for example, your-WLB-appliance-computername.yourdomain.net).

      Tip:

      For more information, see To obtain the IP address for the WLB virtual appliance.

    2. Enter the port number in the Port box. XenServer uses this port to communicate with Workload Balancing.

      By default, XenServer connects to Workload Balancing (specifically the Web Service Host service) on port 8012. If you changed the port number during Workload Balancing Configuration, you must enter that port number here.

      Note:

      Use the default port number unless you changed it during Workload Balancing Configuration. The port number specified during Workload Balancing Configuration, in any firewall rules, and in the Connect to WLB Server dialog must match.

  5. In the WLB Server Credentials section, enter the user name (for example, wlbuser) and password that the pool uses to connect to the Workload Balancing virtual appliance.

    The Update Credentials dialog. The fields are user name and Password.

    These credentials must be for the account you created during Workload Balancing Configuration. By default, the user name for this account is wlbuser.

  6. In the XenServer Credentials section, enter the user name and password for the pool you are configuring. Workload Balancing uses these credentials to connect to the XenServer hosts in that pool.

    The XenServer Credentials dialog. The fields are User name and Password.

    To use the credentials with which you are currently logged into XenServer, select the Use the current XenCenter credentials check box. If you have assigned a role to this account using the Access Control feature (RBAC), ensure that the role has sufficient permissions to configure Workload Balancing. For more information, see Workload Balancing Access Control Permissions.

  7. After connecting the pool to the Workload Balancing virtual appliance, Workload Balancing automatically begins monitoring the pool with the default optimization settings. If you want to modify these settings or change the priority given to resources, wait until the XenCenter Log shows that discovery is finished before proceeding. For more information, see Change Workload Balancing Settings.

To obtain the IP address for the WLB virtual appliance
  1. Select the WLB virtual appliance in the Resource pane in XenCenter, and select the Console tab.

  2. Log in to the appliance. Enter the VM user name (typically “root”) and the root password you created when you imported the appliance.

  3. Enter the following command at the prompt:

    ifconfig
    
Workload Balancing access control permissions

When Role Based Access Control (RBAC) is implemented in your environment, all user roles can display the WLB tab. However, not all roles can perform all operations. The following table lists the minimum role administrators require to use Workload Balancing features:

Task Minimum Required Role
Configure, Initialize, Enable, Disable WLB Pool Operator
Apply WLB Optimization Recommendations (in WLB tab) Pool Operator
Modify WLB report subscriptions Pool Operator
Accept WLB Placement Recommendations (“star” recommendations) VM Power Admin
Generate WLB Reports, including the Pool Audit Trail report Read Only
Display WLB Configuration Read Only
Definition of permissions

The following list provides more detail about what each permission allows:

  • Configure, Initialize, Enable, Disable WLB: configure WLB, initialize WLB and change WLB servers, enable WLB, and disable WLB

  • Apply WLB Optimization Recommendations (in WLB tab): apply any optimization recommendations that appear in the WLB tab

  • Modify WLB report subscriptions: change the WLB report generated or its recipient

  • Accept WLB Placement Recommendations (“star” recommendations): select one of the servers Workload Balancing recommends for placement (“star” recommendations)

  • Generate WLB Reports, including the Pool Audit Trail report: view and run WLB reports, including the Pool Audit Trail report

  • Display WLB Configuration: view WLB settings for a pool as shown on the WLB tab

If a user tries to use Workload Balancing and that user doesn’t have sufficient permissions, a role elevation dialog appears. For more information about RBAC, see Role-based access control.

Determine the status of the Workload Balancing Virtual Appliance

Run the service workloadbalancing status command, as described in the Workload Balancing commands.
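
For example, from the Workload Balancing virtual appliance console:

    # Reports whether the Workload Balancing services are running on the appliance
    service workloadbalancing status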

Reconfigure a pool to use another WLB Appliance

You can reconfigure a resource pool to use a different Workload Balancing virtual appliance.

However, to prevent the old Workload Balancing virtual appliance from running against a pool, ensure that you first disconnect the pool from the old Workload Balancing virtual appliance.

After disconnecting a pool from the old Workload Balancing virtual appliance, you can connect the pool by specifying the name of the new Workload Balancing virtual appliance. Perform the steps in the procedure that follows on the pool you want to connect to a different Workload Balancing virtual appliance.

To use a different Workload Balancing virtual appliance:

  1. From the Pool menu, select Disconnect Workload Balancing Server and click Disconnect when prompted.

  2. In the WLB tab, click Connect. The Connect to WLB Server dialog appears.

  3. In the Address box, type the IP address or FQDN of the new Workload Balancing server.

  4. In the WLB Server Credentials section, enter the user name and password that the XenServer pool uses to connect to the Workload Balancing virtual appliance.

    These credentials must be for the account you created during Workload Balancing Configuration for the new virtual appliance. By default, the user name for this account is wlbuser.

  5. In the XenServer Credentials section, enter the user name and password for the pool you are configuring (typically the password for the pool master). Workload Balancing uses these credentials to connect to the hosts in the pool.

    The XenServer Credentials dialog. The fields are User name and Password.

    To use the credentials with which you are currently logged into XenServer, select the Use the current XenCenter credentials check box. If you have assigned a role to this account using the Access Control feature (RBAC), be sure that the role has sufficient permissions to configure Workload Balancing. For more information, see Workload Balancing Access Control Permissions.

Update Workload Balancing credentials

After initial configuration, if you want to update the credentials XenServer and the Workload Balancing appliance use to communicate, use the following process:

  1. Pause Workload Balancing by clicking Pause in the WLB tab.

  2. Change the WLB credentials by running the wlbconfig command (see the sketch after this procedure). For more information, see Workload Balancing Commands.

  3. Re-enable Workload Balancing and specify the new credentials.

  4. After the progress bar completes, click Connect.

    The Connect to WLB Server dialog box appears.

  5. Click Update Credentials.

  6. In the Server Address section, modify the following as desired:

    • In the Address box, type the IP address or FQDN of the Workload Balancing appliance.

    • (Optional.) If you changed the port number during Workload Balancing Configuration, enter that port number. The port number you specify in this box and during Workload Balancing Configuration is the port number XenServer uses to connect to Workload Balancing.

    By default, XenServer connects to Workload Balancing on port 8012.

    Note:

    Only edit this port number if you changed it when you ran the Workload Balancing Configuration wizard. The port number value specified when you ran the Workload Balancing Configuration wizard and the Connect to WLB Server dialog must match.

  7. In the WLB Server Credentials section, enter the user name (for example, wlbuser) and password that the computers running XenServer use to connect to the Workload Balancing server.

  8. In the XenServer Credentials section, enter the user name and password for the pool you are configuring (typically the password for the pool master). Workload Balancing uses these credentials to connect to the computers running XenServer in that pool.

To use the credentials with which you are currently logged into XenServer, select the Use the current XenCenter credentials check box.
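
For step 2 of this procedure, the command is run from the Workload Balancing virtual appliance console. A minimal sketch (see Workload Balancing Commands for the full options):

    # Run on the Workload Balancing virtual appliance to change the Workload Balancing credentials
    wlbconfig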

Change the Workload Balancing IP address

In some situations, you might need to update the IP address of the Workload Balancing virtual appliance.

To change the Workload Balancing IP address, do the following:

  1. Stop the Workload Balancing services by running the service workloadbalancing stop command on the virtual appliance.

  2. Change the Workload Balancing IP address by running the ifconfig command on the virtual appliance.

  3. Re-enable Workload Balancing and specify the new IP address.

  4. Start the Workload Balancing services by running the service workloadbalancing start command on the virtual appliance.
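
The console part of this procedure (steps 1, 2, and 4) might look like the following sketch. The interface name, address, and netmask are placeholders; substitute your own values:

    service workloadbalancing stop
    # Assign the new address; eth0, the address, and the netmask shown here are examples only
    ifconfig eth0 192.0.2.50 netmask 255.255.255.0
    service workloadbalancing start

A change made with ifconfig alone does not normally persist across a reboot of the appliance, so also update the appliance's network configuration if the new address must be permanent.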

Stop Workload Balancing

Because Workload Balancing is configured at the pool level, when you want it to stop managing a pool, you must do one of the following:

  • Pause Workload Balancing. Pausing Workload Balancing stops XenCenter from displaying recommendations for the specified resource pool and managing the pool. Pausing is designed for a short period and lets you resume monitoring without having to reconfigure. When you pause Workload Balancing, data collection stops for that resource pool until you enable Workload Balancing again.

  • Disconnect the pool from Workload Balancing. Disconnecting from the Workload Balancing virtual appliance breaks the connection between the pool and the appliance and, if possible, deletes the pool data from the Workload Balancing database. When you disconnect from Workload Balancing, Workload Balancing stops collecting data on the pool.

To pause Workload Balancing
  1. In the Resource pane of XenCenter, select the resource pool for which you want to pause Workload Balancing.

  2. In the WLB tab, click Pause. A message appears on the WLB tab indicating that Workload Balancing is paused.

    Tip:

    To resume monitoring, click the Resume button in the WLB tab.

To disconnect the pool from Workload Balancing
  1. In the Infrastructure pane of XenCenter, select the resource pool on which you want to stop Workload Balancing.

  2. From the Infrastructure menu, select Disconnect Workload Balancing Server. The Disconnect Workload Balancing server dialog box appears.

  3. Click Disconnect to stop Workload Balancing from monitoring the pool permanently.

Tip:

If you disconnected the pool from the Workload Balancing virtual appliance, to re-enable Workload Balancing on that pool, you must reconnect to a Workload Balancing appliance. For information, see Connect to the Workload Balancing Virtual Appliance.

Enter Maintenance Mode with Workload Balancing enabled

With Workload Balancing enabled, if you put a host in Maintenance Mode, XenServer migrates the VMs running on that host to their optimal hosts when available. XenServer migrates them based on Workload Balancing recommendations (performance data, your placement strategy, and performance thresholds).

If an optimal host is not available, the words Click here to suspend the VM appear in the Enter Maintenance Mode dialog box. In this case, because there is not a host with sufficient resources to run the VM, Workload Balancing does not recommend a placement. You can either suspend this VM or exit Maintenance Mode and suspend a VM on another host in the same pool. Then, if you reenter the Enter Maintenance Mode dialog box, Workload Balancing might be able to list a host that is a suitable candidate for migration.

Note:

When you take a host off-line for maintenance and Workload Balancing is enabled, the words “Workload Balancing” appear in the Enter Maintenance Mode wizard.

To enter maintenance mode with Workload Balancing enabled:

  1. In the Resources pane of XenCenter, select the physical host that you want to take off-line. From the Server menu, select Enter Maintenance Mode.

  2. In the Enter Maintenance Mode dialog box, click Enter maintenance mode. The VMs running on the host are automatically migrated to the optimal host based on the Workload Balancing performance data, your placement strategy, and performance thresholds.

To take the host out of maintenance mode, right-click the host and select Exit Maintenance Mode. When you remove a host from maintenance mode, XenServer automatically restores that host’s original VMs to that host.

Increase the Workload Balancing disk size

This procedure explains how to resize the virtual disk of the Workload Balancing virtual appliance. Shut down the virtual appliance before performing these steps. Workload Balancing is unavailable for approximately five minutes.

Warning:

Citrix recommends taking a snapshot of your data before performing this procedure. Incorrectly performing these steps can result in corrupting the Workload Balancing virtual appliance.

  1. Shut down the Workload Balancing virtual appliance.

    In the XenCenter resource pane, select the Workload Balancing virtual appliance (typically “Citrix WLB Virtual Appliance”).

  2. Click the Storage tab.

  3. Select the “vdi_xvda” disk, and click the Properties button.

  4. In the “vdi_xvda” Properties dialog, select Size and Location.

  5. Increase the disk size as needed, and click OK.

  6. Start the Workload Balancing virtual appliance and log in to it.

  7. Run the following command on the Workload Balancing virtual appliance:

     resize2fs /dev/xvda
    

    Note:

    If the resize2fs tool is not installed, ensure that you are connected to the internet and install it using the following command:

    yum install -y --enablerepo=base,updates --disablerepo=citrix-* e2fsprogs
    

If there is no internet access:

  1. Download the following from http://mirror.centos.org/centos-7/7.2.1511/os/x86_64/Packages/.

    • libss-1.42.9-7.el7.i686.rpm

    • e2fsprogs-libs-1.42.9-7.el7.x86_64.rpm

    • e2fsprogs-1.42.9-7.el7.x86_64.rpm

  2. Upload them to WLB VM using SCP or any other suitable tool.

  3. Run the following command from WLB VM:

    rpm -ivh libss-*.rpm e2fsprogs-*.rpm
    

    The tool resize2fs is now installed.

  4. Run the df -h command to confirm the new disk size.

Remove the Workload Balancing Virtual Appliance

Citrix recommends removing the Workload Balancing virtual appliance by using the standard procedure to delete VMs from XenCenter.

When you delete the Workload Balancing virtual appliance, the PostgreSQL database containing the Workload Balancing data is also deleted. To save this data, you must migrate it from the database before deleting the Workload Balancing virtual appliance.
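
One hedged way to preserve this data is to dump it with standard PostgreSQL tooling before you delete the appliance. The example below assumes that remote access to port 5432 has been enabled (see Access the database later in this article), that the default postgres account is used, and that the database name is a placeholder you replace with the actual Workload Balancing database name:

    # Run from a machine with the PostgreSQL client tools installed; replace the placeholders
    pg_dump -h <WLB-appliance-IP> -U postgres <wlb-database-name> > wlb-data-backup.sql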

Manage the Workload Balancing database

The Workload Balancing database is a PostgreSQL database. PostgreSQL is an open-source relational database. You can find documentation for PostgreSQL by searching the web.

The following information is intended for database administrators and advanced users of PostgreSQL who are comfortable with database administration tasks. If you are not experienced with PostgreSQL, Citrix recommends that you become familiar with it before you attempt the database tasks in the sections that follow.

By default, the PostgreSQL user name is postgres. You set the password for this account during Workload Balancing Configuration.

The amount of historical data you can store is based on the size of the virtual disk allocated to WLB and the minimum required space. By default, the size of the virtual disk allocated to WLB is 20 GB. For more information, see Database grooming parameters.

To store a lot of historical data, for example if you want to enable the Pool Audit trail Report, you can do either of the following:

  • Make the virtual disk size assigned to the Workload Balancing virtual appliance larger. To do so, import the virtual appliance, and increase the size of the virtual disk by following the steps in Increase the Workload Balancing Disk Size.

  • Create periodic duplicate backup copies of the data by enabling remote client access to the database and using a third-party database administration tool.

In terms of managing the database, you can control the space that database data consumes by configuring database grooming.

Access the database

The Workload Balancing virtual appliance has a firewall configured in it. Before you can access the database, you must add the PostgreSQL server port to the iptables configuration.

From the Workload Balancing virtual appliance console, run the following command:

iptables -A INPUT -i eth0 -p tcp -m tcp --dport 5432 -m \
state --state NEW,ESTABLISHED -j ACCEPT

(Optional.) To make this configuration persist after the virtual appliance is rebooted, run the following command:

iptables-save > /etc/sysconfig/iptables
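
To confirm that the rule is in place, you can optionally list the INPUT chain and check for the PostgreSQL port. For example:

iptables -L INPUT -n | grep 5432
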
Control database grooming

The Workload Balancing database automatically deletes the oldest data whenever the VPX reaches the minimum amount of disk space Workload Balancing requires to run. By default, the minimum amount of required disk space is set to 1,024 MB.

The Workload Balancing database grooming options are controlled through the wlb.conf file.

When there is not enough disk space left on the Workload Balancing virtual appliance, Workload Balancing automatically starts grooming historical data. The process is as follows:

  1. At a predefined grooming interval, the Workload Balancing data collector checks if grooming is required. Grooming is required if the database data has grown to the point where the only space that remains unused is the minimum required disk space. Use GroomingRequiredMinimumDiskSizeInMB to set the minimum required disk space.

    You can change the grooming interval, if desired, by using GroomingIntervalInHour. By default, Workload Balancing checks whether grooming is required once per hour.

  2. If grooming is required, Workload Balancing begins by grooming the data from the oldest day. Workload Balancing then checks to see if there is now enough disk space for it to meet the minimum disk-space requirement.

  3. If the first grooming did not free enough disk space, Workload Balancing repeats grooming up to GroomingRetryCounter times without waiting for the GroomingIntervalInHour interval.

  4. If the first or a repeated grooming freed enough disk space, Workload Balancing waits for the GroomingIntervalInHour interval and returns to Step 1.

  5. If none of the retries allowed by GroomingRetryCounter freed enough disk space, Workload Balancing waits for the GroomingIntervalInHour interval and returns to Step 1.

Database grooming parameters

There are five parameters in the wlb.conf file that control various aspects of database grooming. They are as follows:

  • GroomingIntervalInHour. Controls how many hours elapse before the next grooming check is done. For example, if you enter 1, Workload Balancing checks the disk space hourly. If you enter 2, Workload Balancing checks disk space every two hours to determine if grooming must occur.

  • GroomingRetryCounter. Controls the number of times Workload Balancing retries the grooming database query.

  • GroomingDBDataTrimDays. Controls the number of days' worth of data Workload Balancing deletes from the database each time it grooms data. The default value is one day.

  • GroomingDBTimeoutInMinute. Controls the number of minutes that the database grooming takes before it times out and is canceled. If the grooming query takes longer than is expected and does not finish running within the timeout period, the grooming task is canceled. The default value is 0 minutes, which means that database grooming never times out.

  • GroomingRequiredMinimumDiskSizeInMB. Controls the minimum amount of free space left in the virtual disk assigned to the Workload Balancing virtual appliance. When the data in the virtual disk grows until only the minimum disk size remains, Workload Balancing triggers database grooming. The default value is 2,048 MB.
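
For example, to review the current grooming settings without opening an editor, you can run a command like the following on the virtual appliance (a minimal sketch; it assumes the parameter names above appear verbatim in wlb.conf):

grep -i grooming /opt/vpx/wlb/wlb.conf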

To edit these values, see Edit the Workload Balancing configuration file.

Change the database password

While it is possible to change the database password by using the wlb.conf file, Citrix recommends running the wlbconfig command instead. For more information, see Modify the Workload Balancing configuration options.

Archive database data

To avoid having older historical data deleted, you can, optionally, copy data from the database for archiving. To do so, you must perform the following tasks:

  1. Enable client authentication on the database.

  2. Set up archiving using the PostgreSQL database administration tool of your choice.
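
For example, once client access is enabled (see the following sections), you can export a copy of the data with the standard pg_dump utility from a machine that has the PostgreSQL client tools installed. The following is only a sketch: the IP address, database name, and output file are placeholders that you must replace with values from your own deployment.

pg_dump -h <wlb_appliance_ip> -U postgres <database_name> > wlb_archive.sql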

Enable client authentication to the database

While you can connect directly to the database through the Workload Balancing console, you can also use a PostgreSQL database management tool. After downloading a database management tool, install it on the system from which you want to connect to the database. For example, you can install the tool on the same laptop where you run XenCenter.

Before you can enable remote client authentication to the database, you must:

  1. Modify the database configuration files, including the pg_hba.conf file and the postgresql.conf file, to allow connections.

  2. Stop the Workload Balancing services, restart the database, and then restart the Workload Balancing services.

  3. In the database-management tool, configure the IP address of the database (that is, the IP address of the Workload Balancing VPX) and the database password.

Modify the database configuration files

To enable client authentication on the database, you must modify two files on the Workload Balancing virtual appliance: the pg_hba.conf file and the postgresql.conf file.

To edit the pg_hba.conf file:

  1. Modify the pg_hba.conf file. From the Workload Balancing virtual appliance console, open the pg_hba.conf file with an editor, such as VI. For example:

    vi /var/lib/pgsql/9.0/data/pg_hba.conf
    
  2. If your network uses IPv4, add the IP address from the connecting computer to this file. For example:

    In the configuration section, enter the following under #IPv4 local connections (an example of the resulting line appears after this procedure):

    • TYPE: host
    • DATABASE: all
    • USER: all
    • CIDR-ADDRESS: 0.0.0.0/0
    • METHOD: trust
  3. Enter your IP address in the CIDR-ADDRESS field.

    Note:

    Instead of entering 0.0.0.0/0, you can enter your IP address with the final octet replaced by 0, followed by /24 (for example, 192.168.1.0/24). The /24 defines the subnet mask and only allows connections from IP addresses within that subnet.

    When you enter trust for the Method field, it enables the connection to authenticate without requiring a password. If you enter password for the Method field, you must supply a password when connecting to the database.

  4. If your network uses IPv6, add the IP address from the connecting computer to this file. For example:

    Enter the following under #IPv6 local connections:

    • TYPE: host
    • DATABASE: all
    • USER: all
    • CIDR-ADDRESS: ::0/0
    • METHOD: trust

    Enter the IPv6 address in the CIDR-ADDRESS field. In this example, ::0/0 opens the database up to connections from any IPv6 address.

  5. Save the file and quit the editor.

  6. After changing any database configurations, you must restart the database to apply the changes. Run the following command:

    service postgresql-9.0 restart
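
For reference, the IPv4 entry described in step 2 corresponds to a single line in pg_hba.conf similar to the following (the address shown is an example of the /24 form mentioned in the note; adjust the address and method for your network):

host    all    all    192.168.1.0/24    trust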
    

To edit the postgresql.conf file:

  1. Modify the postgresql.conf file. From the Workload Balancing virtual appliance console, open the postgresql.conf file with an editor, such as VI. For example:

    vi /var/lib/pgsql/9.0/data/postgresql.conf
    
  2. Edit the file so that PostgreSQL listens on all addresses, not just the localhost. For example:

    1. Find the following line:

      # listen_addresses='localhost'
      
    2. Remove the comment symbol (#) and edit the line to read as follows:

      listen_addresses='*'
      
  3. Save the file and quit the editor.

  4. After changing any database configurations, you must restart the database to apply the changes. Run the following command:

    service postgresql-9.0 restart
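
After both files are edited and the database is restarted, you can test remote access from the machine where your database-management tool is installed. For example, a minimal check with the psql command-line client, assuming that the client is installed, that the firewall port was opened as described in Access the database, and that the placeholder is replaced with the IP address of the Workload Balancing virtual appliance:

psql -h <wlb_appliance_ip> -p 5432 -U postgres

If the METHOD in pg_hba.conf is set to password, psql prompts for the PostgreSQL password that you set during Workload Balancing Configuration.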
    
Change the database maintenance window

Workload Balancing automatically performs routine database maintenance daily at 12:05 AM GMT (00:05), by default. During this maintenance window, data collection occurs but the recording of data may be delayed. However, the Workload Balancing user interface controls are available during this period and Workload Balancing still makes optimization recommendations.

Database maintenance includes releasing allocated unused disk space and reindexing the database. Maintenance lasts for approximately 6 to 8 minutes. In larger pools, maintenance may last longer, depending on how long Workload Balancing takes to perform discovery.

Depending on your time zone, you may want to change the time when maintenance occurs. For example, in the Japan Standard Time (JST) time zone, Workload Balancing maintenance occurs at 9:05 AM (09:05), which can conflict with peak usage in some organizations. If you want to account for a seasonal time change, such as Daylight Saving Time or summer time, you must build the change into the value you enter.

To change the maintenance time:

  1. In the Workload Balancing console, run the following command from any directory:

    crontab -e
    

    Workload Balancing displays the following:

    05 0 * * * /opt/vpx/wlb/wlbmaintenance.sh
    

    The value 05 0 represents the default time for Workload Balancing to perform maintenance, expressed as minutes (05) followed by the hour (0). (The asterisks represent the remaining crontab fields, that is, the day of the month, the month, and the day of the week. Do not edit these fields.) The entry 05 0 indicates that database maintenance occurs at 12:05 AM, or 00:05, Greenwich Mean Time (GMT) every night. This setting means that if you live in New York, the maintenance runs at 7:05 PM (19:05) during winter months and 8:05 PM (20:05) in summer months.

    Important:

    Do not edit the remaining crontab fields (represented by asterisks). Database maintenance must run daily.

  2. Enter the time at which you want maintenance to occur in GMT. For example, assuming that you want the maintenance to run at midnight:

| If your time zone is… | UTC offset | Value for maintenance to run at 12:05 AM local time | Value in Daylight Saving Time |
| --- | --- | --- | --- |
| Pacific Time Zone (PST) in the United States (for example, California) | UTC-08 | 05 8 | 05 7 |
| Japan Standard Time (JST) | UTC+09 | 05 15 | N/A |
| Chinese Standard Time | UTC+08 | 05 16 | N/A |
  3. Save the file and quit the editor.
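
For example, to run the maintenance at 12:05 AM Japan Standard Time, the crontab entry would read as follows:

05 15 * * * /opt/vpx/wlb/wlbmaintenance.sh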

Customize Workload Balancing

Workload Balancing provides several methods of customization:

  • Command lines for scripting. See the commands in Workload Balancing commands.

  • Host Power On scripting support. You can also customize Workload Balancing (indirectly) through the Host Power On scripting.

Upgrade Workload Balancing

Online upgrading of Workload Balancing has been deprecated for security reasons. Customers can no longer upgrade by using the yum repository. Customers can upgrade WLB to the latest version by importing the latest WLB VPX, which is available to download at https://www.citrix.com/downloads/xenserver/product-software/.

Troubleshoot Workload Balancing

While Workload Balancing usually runs smoothly, this series of sections provides guidance in case you encounter issues.

General troubleshooting tips

  • Start troubleshooting by reviewing the Workload Balancing log files (LogFile.log and wlb_install_log.log). You can find these logs in the Workload Balancing virtual appliance at the following location (by default):

    • /var/log/wlb
  • Check the logs in the XenCenter Logs tab for more (different) information.

  • To check the Workload Balancing virtual appliance build number, run the following command on a host in a pool that the VPX monitors:

     xe pool-retrieve-wlb-diagnostics | more
    

    The Workload Balancing version number appears at the top of the output.

Error messages

Workload Balancing displays errors on screen as dialog boxes and as error messages in the Logs tab in XenCenter.

If an error message appears, review the XenCenter event log for additional information. For information about the location of this log, see the XenCenter Help.

Issues entering Workload Balancing credentials

If you cannot successfully enter the virtual appliance user account and password while configuring the Connect to WLB Server dialog, try the following:

  • Ensure that the Workload Balancing virtual appliance was imported and configured correctly and that all of its services are running. For more information, see wlb-start.

  • Check to ensure that you are entering the correct credentials. The default credentials appear in the Workload Balancing Quick Start.

  • You can enter a host name in the Address box, but it must be the fully qualified domain name (FQDN) of the Workload Balancing virtual appliance. Do not enter the host name of the physical server hosting the appliance. For example, yourcomputername.yourdomain.net. If you are having trouble entering a computer name, try using the Workload Balancing appliance’s IP address instead.

  • Verify that the host is using the correct DNS server and that the XenServer host can contact the Workload Balancing server by its FQDN. To do this check, ping the Workload Balancing appliance using its FQDN from the XenServer host. For example, enter the following in the XenServer host console:

     ping wlb-vpx-1.mydomain.net
    

Issues with firewalls

The following error appears if the Workload Balancing virtual appliance is behind a (hardware) firewall, and you did not configure the appropriate firewall settings: “There was an error connecting to the Workload Balancing server: <pool name> Click Initialize WLB to reinitialize the connection settings.” This error may also appear if the Workload Balancing appliance is otherwise unreachable.

Resolution:

If the Workload Balancing virtual appliance is behind a firewall, open port 8012.

Likewise, the port XenServer uses to contact Workload Balancing (8012 by default) must match the port number that you specified when you ran the Workload Balancing Configuration wizard.
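
To check whether the appliance is reachable on this port from a machine on the same network, you can use a utility such as nc, if it is available (this is a sketch; nc may not be installed on every host, and the host name shown is an example):

nc -zv wlb-vpx-1.mydomain.net 8012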

Lose the connection to Workload Balancing

If, after configuring and connecting to Workload Balancing, you receive a connection error, the credentials may no longer be valid. To isolate this issue, try:

  • Verifying the credentials you entered in the Connect to WLB Server dialog box match the credentials:

    • You created during Workload Balancing Configuration

    • On XenServer (that is, the pool master credentials)

  • Verifying the IP address or FQDN for the Workload Balancing virtual appliance that you entered in the Connect to WLB Server dialog box is correct.

  • Verifying the user name you created during Workload Balancing Configuration matches the credentials you entered in the Connect to WLB Server dialog box.

Workload Balancing connection errors

If you receive a connection error in the Workload Balancing Status line on the WLB tab, you might need to reconfigure Workload Balancing on that pool.

Click the Connect button on the WLB tab and reenter the server credentials.

Workload Balancing stops working

If Workload Balancing doesn’t work (for example, it doesn’t let you save changes to settings), check the Workload Balancing log file for the following error message:

dwmdatacolsvc.exe: Don't have a valid pool. Trying again in 10 minutes.

Cause:

This error typically occurs in pools that have one or more problematic VMs. When VMs are problematic, you might see the following behavior:

  • Windows. The Windows VM crashes due to a stop error (“blue screen”).
  • Linux. The Linux VM may be unresponsive in the console and typically does not shut down.

Workaround:

  1. Force the VM to shut down. To do so, you can do one of the following on the host with the problematic VM:

    • In XenCenter, select the VM, and then from the VM menu, click Force Shutdown.

    • Run the vm-shutdown xe command with the force parameter set to true as described in the XenServer Administrator’s Guide. For example:

       xe vm-shutdown \
       force=true \
       uuid=vm_uuid
      

      You can find the host UUID on the General tab for that host (in XenCenter) or by running the host-list xe command. You can find the VM UUID on the General tab for the VM or by running the vm-list xe command. For more information, see Command line interface.

  2. In the xsconsole of the XenServer host that contains the crashed VM, or in XenCenter, migrate all of the VMs to another host, and then run the xe-toolstack-restart command.
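
For example, to look up the UUID of the problematic VM mentioned in step 1 from the host console, you can run the following command, where the name label shown is a placeholder for your VM's name:

xe vm-list name-label="problem-vm" params=uuid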

Issues changing Workload Balancing servers

If you connect a pool to a different Workload Balancing server without disconnecting from Workload Balancing, both old and new Workload Balancing servers monitor the pool.

To solve this problem, you can take one of the following actions:

  • Shut down and delete the old Workload Balancing virtual appliance.
  • Manually stop the Workload Balancing services. These services are the data analysis, data collection, and web services.

Note:

Use the pool-deconfigure-wlb xe command to disconnect a pool from the Workload Balancing virtual appliance, or use the pool-initialize-wlb xe command to specify a different appliance.
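
For example, to disconnect the pool and then point it at a different Workload Balancing appliance, you can run commands similar to the following on the pool master. This is a sketch: the parameter values are placeholders, and you can confirm the exact parameters for your release with xe help pool-initialize-wlb.

xe pool-deconfigure-wlb
xe pool-initialize-wlb wlb_url="<wlb_appliance_ip>:8012" wlb_username=<wlb_user> wlb_password=<wlb_password> xenserver_username=<pool_master_user> xenserver_password=<pool_master_password>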

Workload Balancing commands

This section provides a reference for the Workload Balancing commands. You can run these commands from the XenServer host or console to control Workload Balancing or to configure Workload Balancing settings on the XenServer host. This section includes both xe commands and service commands.

Run the following service commands on the Workload Balancing appliance. To do so, you must log in to the Workload Balancing virtual appliance.

Log in to the Workload Balancing Virtual Appliance

Before you can run any service commands or edit the wlb.conf file, you must log in to the Workload Balancing virtual appliance. To do so, you must enter a user name and password. Unless you created extra user accounts on the virtual appliance, log in using the root user account. You specified this account when you ran the Workload Balancing Configuration wizard (before you connected your pool to Workload Balancing). You can, optionally, use the Console tab in XenCenter to log in to the appliance.

To log in to the Workload Balancing virtual appliance:

  1. At the name-of-your-WLB-VPX login prompt, enter the account user name. For example, where wlb-vpx-pos-pool is the name of your Workload Balancing appliance:

    wlb-vpx-pos-pool login: root
    
  2. At the Password prompt, enter the password for the account.

    Note:

    To log off the Workload Balancing virtual appliance, simply type logout at the command prompt.

wlb restart

Run the wlb restart command from anywhere in the Workload Balancing appliance to stop and then restart the Workload Balancing Data Collection, Web Service, and Data Analysis services.

wlb start

Run the wlb start command from anywhere in the Workload Balancing appliance to start the Workload Balancing Data Collection, Web Service, and Data Analysis services.

wlb stop

Run the wlb stop command from anywhere in the Workload Balancing appliance to stop the Workload Balancing Data Collection, Web Service, and Data Analysis services.

wlb status

Run the wlb status command from anywhere in the Workload Balancing appliance to determine the status of the Workload Balancing server. After you execute this command, the status of the three Workload Balancing services (the Web Service, Data Collection Service, and Data Analysis Service) is displayed.

Modify the Workload Balancing configuration options

Many Workload Balancing configurations, such as the database and web-service configuration options, are stored in the wlb.conf file. The wlb.conf file is a configuration file on the Workload Balancing virtual appliance.

To make it easier to modify the most commonly used options, Citrix provides a command, wlb config. Running the wlb config command on the Workload Balancing virtual appliance lets you rename the Workload Balancing user account, change its password, or change the PostgreSQL password. After you execute this command, the Workload Balancing services are restarted.

To run the wlb config command:

  1. Run the following from the command prompt:

    wlb config
    

The screen displays a series of questions guiding you through changing your Workload Balancing user name and password and the PostgreSQL password. Follow the questions on the screen to change these items.

Important:

Double-check any values you enter in the wlb.conf file: Workload Balancing does not validate values in the wlb.conf file. Therefore, if the configuration parameters you specify are not within the required range, Workload Balancing does not generate an error log.

Edit the Workload Balancing configuration file

You can modify Workload Balancing configuration options by editing the wlb.conf file, which is stored in the /opt/vpx/wlb directory on the Workload Balancing virtual appliance. In general, only change the settings in this file with guidance from Citrix. However, there are three categories of settings you can change if desired:

  • Workload Balancing account name and password. It is easier to modify these credentials by running the wlb config command.
  • Database password. This value can be modified using the wlb.conf file. However, Citrix recommends modifying it through the wlb config command since this command modifies the wlb.conf file and automatically updates the password in the database. If you choose to modify the wlb.conf file instead, you must run a query to update the database with the new password.
  • Database grooming parameters. You can modify database grooming parameters, such as the database grooming interval, using this file by following the instructions in the database management section. However, if you do so, Citrix recommends using caution.

For all other settings in the wlb.conf file, Citrix currently recommends leaving them at their default values, unless Citrix has instructed you to modify them.

To edit the wlb.conf file:

  1. Run the following from the command prompt on the Workload Balancing virtual appliance (using VI as an example):

    vi /opt/vpx/wlb/wlb.conf
    

    The screen displays several different sections of configuration options.

  2. Modify the configuration options, and exit the editor.

You do not need to restart Workload Balancing services after editing the wlb.conf file. The changes go into effect immediately after exiting the editor.

Important:

Double-check any values you enter in the wlb.conf file: Workload Balancing does not validate values in the wlb.conf file. Therefore, if the configuration parameters you specify are not within the required range, Workload Balancing does not generate an error log.

Increase the detail in the Workload Balancing log

The Workload Balancing log provides a list of events on the Workload Balancing virtual appliance, including actions for the analysis engine, database, and audit log. This log file is found in this location: /var/log/wlb/LogFile.log.

You can, if desired, increase the level of detail the Workload Balancing log provides. To do so, modify the Trace flags section of the Workload Balancing configuration file (wlb.conf), which is found in the following location: /opt/vpx/wlb/wlb.conf. Enter a 1 or true to enable logging for a specific trace and a 0 or false to disable logging. For example, to enable logging for the Analysis Engine trace, enter:

AnalEngTrace=1

You may want to increase logging detail before reporting an issue to Citrix Technical Support or when troubleshooting.

  • Analysis Engine Trace (AnalEngTrace). Logs details of the analysis engine calculations. Shows details of the decisions the analysis engine is making and can provide insight into why Workload Balancing is not making recommendations.

  • Database Trace (DatabaseTrace). Logs details about database reads and writes. However, leaving this trace on increases the log file size quickly.

  • Data Collection Trace (DataCollectionTrace). Logs the actions of retrieving metrics. This trace lets you see the metrics Workload Balancing is retrieving and inserting into the Workload Balancing data store. However, leaving this trace on increases the log file size quickly.

  • Data Compaction Trace (DataCompactionTrace). Logs details about how many milliseconds it took to compact the metric data.

  • Data Event Trace (DataEventTrace). Provides details about events Workload Balancing catches from XenServer.

  • Data Grooming Trace (DataGroomingTrace). Provides details about the database grooming.

  • Data Metrics Trace (DataMetricsTrace). Logs details about the parsing of metric data. Leaving this trace on increases the log file size quickly.

  • Queue Management Trace (QueueManagementTrace). Logs details about data collection queue management processing. (This option is for internal use.)

  • Data Save Trace (DataSaveTrace). Logs details about the pool being saved to the database.

  • Score Host Trace (ScoreHostTrace). Logs details about how Workload Balancing arrives at a score for a host. This trace shows the detailed scores generated by Workload Balancing when it calculates the star ratings for selecting optimal servers for VM placement.

  • Audit Log Trace (AuditLogTrace). Shows the action of the audit log data being captured and written. (This option is only for internal use and does not provide information that is captured in the audit log.) However, leaving this trace on increases the log file size quickly.

  • Scheduled Task Trace (ScheduledTaskTrace). Logs details about scheduled tasks. For example, if your scheduled mode changes are not working, you might want to enable this trace to investigate the cause.

  • Web Service Trace (WlbWebServiceTrace). Logs details about the communication with the web-service interface.
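
After enabling a trace flag, you can watch the extra detail being written by following the log file. For example:

tail -f /var/log/wlb/LogFile.log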