Data granularity and retention
Aggregation of data values
The Monitor Service collects various data, including user session usage, user logon performance details, session load balancing details, and connection and machine failure information. Data is aggregated differently depending on its category. Understanding the aggregation of data values presented using the OData Method APIs is critical to interpreting the data. For example:
- Connected Sessions and Machine Failures occur over a period. Therefore, they are exposed as maximums over a time period.
- LogOn Duration is a measure of elapsed time. Therefore, it is exposed as an average over a time period.
- LogOn Count and Connection Failures are counts of occurrences over a period. Therefore, they are exposed as sums over a time period.
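The difference between the three aggregation types can be sketched as follows. This is an illustration of the concept only, not Monitor Service code; the sample values and function name are hypothetical.

```python
def aggregate(samples, kind):
    """Aggregate per-interval samples the way each metric category
    is exposed by the OData Method APIs (illustrative sketch)."""
    if kind == "max":    # e.g. Connected Sessions, Machine Failures
        return max(samples)
    if kind == "avg":    # e.g. LogOn Duration
        return sum(samples) / len(samples)
    if kind == "sum":    # e.g. LogOn Count, Connection Failures
        return sum(samples)
    raise ValueError(f"unknown aggregation kind: {kind}")

# Hypothetical per-minute values over a five-minute window:
connected_sessions = [10, 12, 11, 15, 9]
logon_durations = [20.0, 30.0, 25.0, 35.0, 40.0]  # seconds
logon_counts = [2, 0, 1, 3, 1]

print(aggregate(connected_sessions, "max"))  # 15
print(aggregate(logon_durations, "avg"))     # 30.0
print(aggregate(logon_counts, "sum"))        # 7
```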
Concurrent data evaluation
Sessions must overlap to be considered concurrent. However, when the time interval is 1 minute, all sessions in that minute are considered concurrent whether or not they overlap: the interval is so small that the performance overhead of computing exact overlap is not worth the added precision. Sessions that occur in the same hour, but not in the same minute, are not considered to overlap.
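The rule above can be sketched as a small helper. This is a simplified model of the documented behavior, not the Monitor Service's actual logic; the function name and session tuples are hypothetical.

```python
from datetime import datetime

def overlaps(a, b):
    """True if two (start, end) spans genuinely overlap in time."""
    return a[0] < b[1] and b[0] < a[1]

def considered_concurrent(a, b, interval_minutes):
    """Model of the concurrency rule: at 1-minute granularity, two
    sessions falling in the same minute count as concurrent even
    without overlapping; at coarser granularities they must overlap."""
    if interval_minutes == 1:
        same_minute = (a[0].replace(second=0, microsecond=0)
                       == b[0].replace(second=0, microsecond=0))
        return same_minute or overlaps(a, b)
    return overlaps(a, b)

# Two sessions in the same minute that never actually overlap:
s1 = (datetime(2024, 1, 1, 10, 0, 5), datetime(2024, 1, 1, 10, 0, 20))
s2 = (datetime(2024, 1, 1, 10, 0, 30), datetime(2024, 1, 1, 10, 0, 55))
print(considered_concurrent(s1, s2, interval_minutes=1))   # True
print(considered_concurrent(s1, s2, interval_minutes=60))  # False
```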
Correlation of summary tables with raw data
The data model represents metrics in two different ways:
- The summary tables represent aggregate views of the metrics in per minute, hour, and day time granularities.
- The raw data represents individual events or current state tracked in the session, connection, application, and other objects.
When attempting to correlate data across API calls or within the data model itself, it is important to understand the following concepts and limitations:
- No summary data for partial intervals. Metric summaries are designed to meet the needs of historical trends over long periods of time. These metrics are aggregated into the summary table for complete intervals only. There is no summary data for a partial interval at the beginning (oldest available data) of the data collection or at the end. When viewing aggregations of a day (Interval=1440), this means that the first and most recent incomplete days do not have any data. Although raw data might exist for those partial intervals, it is never summarized. You can determine the earliest and latest aggregate interval for a particular data granularity by pulling the min and max SummaryDate from a particular summary table. The SummaryDate column represents the start of the interval. The Granularity column represents the length of the interval for the aggregate data.
- Correlating by time. Metrics are aggregated into the summary table for complete intervals, as described in the preceding section. They can be used for historical trends, but raw events might reflect a more current state than what has been summarized for trend analysis. Any time-based comparison of summary data to raw data must account for the absence of summary data for partial intervals at the beginning and end of the time period.
- Missed and latent events. Metrics aggregated into the summary table might be slightly inaccurate if events are missed or arrive late for the aggregation period. Although the Monitor Service attempts to maintain an accurate current state, it does not go back in time to recompute aggregation in the summary tables for missed or latent events.
- Connection High Availability. During connection high availability, there will be gaps in the summary data counts of current connections, but the session instances will still be running in the raw data.
- Data retention periods. Data in the summary tables is retained on a different grooming schedule from the schedule for raw event data. Data might be missing because it has been groomed away from summary or raw tables. Retention periods might also differ for different granularities of summary data. Lower granularity data (minutes) is groomed more quickly than higher granularity data (days). If data is missing from one granularity due to grooming, it might be found in a higher granularity. Since the API calls only return the specific granularity requested, receiving no data for one granularity does not mean that the data doesn’t exist for a higher granularity for the same time period.
- Time zones. Metrics are stored with UTC time stamps. Summary tables are aggregated on hourly time zone boundaries. For time zones that don’t fall on hourly boundaries, there might be some discrepancy as to where data is aggregated.
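As noted above, the earliest and latest complete intervals for a granularity can be found by pulling the min and max SummaryDate from a summary table. A minimal sketch of building such an OData query follows; the host name is a placeholder, and the endpoint path and entity set name follow the common Monitor Service OData pattern but should be verified against your deployment.

```python
from urllib.parse import quote

# Placeholder Director host; verify the OData route for your site.
BASE = "http://director.example.com/Citrix/Monitor/OData/v4/Data"

def summary_date_range_query(table, granularity):
    """Build an OData query returning SummaryDate values for one
    granularity, ordered so the first/last rows give min and max."""
    filt = quote(f"Granularity eq {granularity}")
    return (f"{BASE}/{table}?$filter={filt}"
            "&$orderby=SummaryDate&$select=SummaryDate")

# Daily (Interval=1440) aggregates for a session summary table:
url = summary_date_range_query("SessionActivitySummaries", 1440)
print(url)
```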
Granularity and retention
The granularity of aggregated data retrieved by Director is a function of the time (T) span requested. The rules are as follows:
- 0 < T <= 1 hour - uses per-minute granularity
- 1 hour < T <= 30 days - uses per-hour granularity
- T > 30 days - uses per-day granularity
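The selection rules above can be expressed as a small function. This is a sketch of the documented rules, not Director's actual implementation; the function name is hypothetical.

```python
def granularity_for(span_hours):
    """Return the aggregation granularity, in minutes, that the
    rules above select for a requested time span T (in hours)."""
    if span_hours <= 0:
        raise ValueError("time span must be positive")
    if span_hours <= 1:
        return 1      # per-minute granularity
    if span_hours <= 30 * 24:
        return 60     # per-hour granularity
    return 1440       # per-day granularity

print(granularity_for(0.5))      # 1
print(granularity_for(24 * 7))   # 60
print(granularity_for(24 * 90))  # 1440
```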
Requested data that does not come from aggregated data comes from the raw Session and Connection information. This data tends to grow fast, so it has its own grooming setting. Grooming ensures that only relevant data is kept long term, which improves performance while maintaining the granularity required for reporting. Customers on Premium licensed sites can change the grooming retention to their desired number of retention days; otherwise, the default is used. If connectivity to the Site database is lost, the Monitor Service uses the default retention days for Premium entitlement as specified in the table below.
To access these settings, run the following PowerShell command on the Delivery Controller:
Set-MonitorConfiguration -<setting name> <value>
| Data | Default value Premium (days) | Default value non-Premium (days) |
|---|---|---|
| Session and Connection records retention after Session termination | | |
| MachineFailureLog and ConnectionFailureLog records | | |
| Machine, Catalog, DesktopGroup, and Hypervisor entities that have a LifecycleState of ‘Deleted’. This setting also deletes any related Session, SessionDetail, Summary, Failure, or LoadIndex records. | | |
| DesktopGroupSummary, FailureLogSummary, and LoadIndexSummary records. Aggregated data - daily granularity. | | |
| Hotfixes applied to the VDA and Controller machines | | |
| Aggregated data - minute granularity | | |
| Aggregated data - hourly granularity | | |
| Application Instance history | | |
| Notification Log records | | |
| Resource utilization data - raw data | 1 | 1 |
| Resource utilization summary data - minute granularity | | |
| Resource utilization summary data - hour granularity | | |
| Resource utilization summary data - day granularity | | |
| Process utilization data - raw data | 1 | 1 |
| Process utilization data - minute granularity | 3 | 3 |
| Process utilization data - hour granularity | | |
| Process utilization data - day granularity | | |
| Session metrics data | 1 | 1 |
| Machine metrics data | | |
| Machine metrics summary data | | |
| Application error data | | |
| Application failure data | | |
Modifying values on the Monitor Service database requires restarting the service for the new values to take effect. You are advised to make changes to the Monitor Service database only under the direction of Citrix Support.
The settings GroomProcessUsageRawDataRetentionDays, GroomResourceUsageRawDataRetentionDays, and GroomSessionMetricsDataRetentionDays are limited to their default values of 1, while GroomProcessUsageMinuteDataRetentionDays is limited to its default value of 3. The PowerShell commands to set these values have been disabled because process usage data tends to grow quickly. Also, license-based retention settings are as follows:
- Premium licensed sites - the grooming retention for all settings is limited to 1000 days (Citrix recommends 365 days).
- Advanced licensed sites - the grooming retention for all settings is limited to 31 days.
- All other sites - the grooming retention for all settings is limited to 7 days.
- GroomApplicationInstanceRetentionDays can be set only in Premium licensed sites.
- GroomApplicationErrorsRetentionDays and GroomApplicationFaultsRetentionDays are limited to 31 days in Premium licensed sites.
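A sketch of how the license-based caps above constrain a requested retention value; the function and dictionary are this document's illustration, not a Citrix API.

```python
# Maximum grooming retention (days) by license edition, per the
# list above; keys are illustrative labels, not Citrix identifiers.
CAPS = {"premium": 1000, "advanced": 31, "other": 7}

def clamp_retention(requested_days, license_edition):
    """Clamp a requested grooming retention (in days) to the maximum
    that the site's license edition allows."""
    return min(requested_days, CAPS[license_edition])

print(clamp_retention(365, "premium"))   # 365 (recommended maximum)
print(clamp_retention(365, "advanced"))  # 31
print(clamp_retention(90, "other"))      # 7
```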
Retaining data for long periods has the following implications for table sizes:
Hourly data. If hourly data is allowed to stay in the database for up to two years, a site of 1000 delivery groups can cause the database to grow as follows:
1000 delivery groups x 24 hours/day x 365 days/year x 2 years = 17,520,000 rows of data. The performance impact of such a large amount of data in the aggregation tables is significant. Given that the dashboard data is drawn from this table, the requirements on the database server might be large. Excessively large amounts of data might have a dramatic impact on performance.
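The growth estimate above, restated as arithmetic:

```python
# Hourly aggregation rows for a site of 1000 delivery groups
# retained for two years, per the worked example above.
delivery_groups = 1000
hours_per_day, days_per_year, years = 24, 365, 2

rows = delivery_groups * hours_per_day * days_per_year * years
print(rows)  # 17520000
```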
Session and event data. This data is collected every time a session is started and a connection or reconnection is made. For a large site (100 K users), this data grows fast. For example, two years' worth of these tables would accumulate more than a TB of data, requiring a high-end enterprise-level database.