Data granularity and retention

Aggregation of data values

The Monitor Service collects various data, including user session usage, user logon performance details, session load balancing details, and connection and machine failure information. Data is aggregated differently depending on its category. Understanding the aggregation of data values presented using the OData Method APIs is critical to interpreting the data. For example:

  • Connected Sessions and Machine Failures occur over a period of time, so they are exposed as maximums over a time period.
  • Logon Duration is a measure of a length of time, so it is exposed as an average over a time period.
  • Logon Count and Connection Failures are counts of occurrences over a period of time, so they are exposed as sums over a time period.
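
As a concrete illustration of these three aggregate types, the sketch below rolls a set of invented raw samples for a single hour up into one summary value per metric. It is plain Python; the variable names and values are illustrative only, not Monitor Service column names.

    # Invented raw samples for one hour; names and values are illustrative only.
    connected_sessions_per_minute = [100, 120, 110]   # sampled count of connected sessions
    failed_machines_per_minute = [2, 1, 3]            # machines in a failed state per sample
    logon_durations_s = [25.0, 31.0, 28.0, 40.0]      # one entry per logon event
    connection_failures_per_minute = [1, 0, 2]        # failure events per minute

    hourly_summary = {
        # State-style metrics (Connected Sessions, Machine Failures): maximum over the interval.
        "connected_sessions": max(connected_sessions_per_minute),
        "machine_failures": max(failed_machines_per_minute),
        # Duration measurements (Logon Duration): average over the interval.
        "avg_logon_duration_s": sum(logon_durations_s) / len(logon_durations_s),
        # Occurrence counts (Logon Count, Connection Failures): sums over the interval.
        "logon_count": len(logon_durations_s),
        "connection_failures": sum(connection_failures_per_minute),
    }
    print(hourly_summary)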

Concurrent data evaluation

Sessions must overlap in time to be considered concurrent. However, when the time interval is 1 minute, all sessions in that minute are considered concurrent whether or not they actually overlap. The interval is so small that the performance overhead of computing precise overlap is not worth the added value. If sessions occur in the same hour, but not in the same minute, they are not considered to overlap.
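
The sketch below illustrates one reading of this rule in plain Python. The session intervals, bucket boundaries, and function names are invented for the example; they are not Monitor Service code.

    from datetime import datetime, timedelta

    # Hypothetical sessions as (start, end) pairs; values are invented.
    a = (datetime(2024, 1, 1, 9, 10, 5), datetime(2024, 1, 1, 9, 10, 20))
    b = (datetime(2024, 1, 1, 9, 10, 40), datetime(2024, 1, 1, 9, 15, 0))

    def overlaps(x, y):
        """True if the two intervals share at least one instant."""
        return x[0] < y[1] and y[0] < x[1]

    def active_in(session, bucket_start, bucket_minutes):
        """True if the session is active at some point inside the bucket."""
        bucket = (bucket_start, bucket_start + timedelta(minutes=bucket_minutes))
        return overlaps(session, bucket)

    def concurrent(x, y, interval_minutes, bucket_start):
        """Concurrency rule described above, evaluated for one summary bucket."""
        if interval_minutes == 1:
            # 1-minute buckets: being active in the same minute is enough,
            # even if the sessions never actually overlap.
            return active_in(x, bucket_start, 1) and active_in(y, bucket_start, 1)
        # Coarser buckets: the sessions must genuinely overlap in time.
        return overlaps(x, y)

    print(concurrent(a, b, 1, datetime(2024, 1, 1, 9, 10)))   # True: both touch the 09:10 minute
    print(concurrent(a, b, 60, datetime(2024, 1, 1, 9, 0)))   # False: same hour, but they never overlap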

Correlation of summary tables with raw data

The data model represents metrics in two different ways:

  • The summary tables represent aggregate views of the metrics at per-minute, per-hour, and per-day granularities.
  • The raw data represents individual events or current state tracked in the session, connection, application, and other objects.

When attempting to correlate data across API calls or within the data model itself, it is important to understand the following concepts and limitations:

  • No summary data for partial intervals. Metrics summaries are designed to meet the needs of historical trends over long periods. These metrics are aggregated into the summary table for complete intervals. There is no summary data for a partial interval at the beginning (oldest available data) of the data collection or at the end. When viewing aggregations of a day (Interval=1440), this means that the first and the most recent incomplete days have no data. Although raw data might exist for those partial intervals, it is never summarized. Pull the minimum and maximum SummaryDate from a particular summary table to determine the earliest and latest aggregation interval for a particular data granularity (see the sketch after this list). The SummaryDate column represents the start of the interval. The Granularity column represents the length of the interval for the aggregated data.
  • Correlating by time. Metrics are aggregated into the summary table for complete intervals, as described in the preceding section. They can be used for historical trends, but raw events might reflect a more current state than what has been summarized for trend analysis. Any time-based comparison of summary data to raw data must take into account that there is no summary data for partial intervals, including those at the beginning and end of the time period.
  • Missed and latent events. Metrics that are aggregated into the summary table might be slightly inaccurate if events are missed or arrive late for the aggregation period. Although the Monitor Service attempts to maintain an accurate current state, it does not go back in time to recompute aggregations in the summary tables for missed or latent events.
  • Connection High Availability. During connection HA, there are gaps in the summary data counts of current connections, but the session instances are still running in the raw data.
  • Data retention periods. Data in the summary tables is retained on a different grooming schedule from the schedule for raw event data. Data might be missing because it has been groomed away from summary or raw tables. Retention periods might also differ for different granularities of summary data. Lower granularity data (minutes) is groomed more quickly than higher granularity data (days). If data is missing from one granularity due to grooming, it might be found in a higher granularity. Since the API calls only return the specific granularity requested, receiving no data for one granularity does not mean that the data doesn’t exist for a higher granularity for the same time period.
  • Time zones. Metrics are stored with UTC time stamps. Summary tables are aggregated on hourly time zone boundaries. For time zones that don’t fall on hourly boundaries, there might be some discrepancy as to where data is aggregated.
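
As the first bullet above suggests, the earliest and latest complete aggregation interval for a given granularity can be found by pulling the minimum and maximum SummaryDate from a summary table. The sketch below is a minimal example using Python and the requests library; the base URL, credentials, and the DesktopGroupSummaries entity set are placeholders and assumptions to replace with the values for your own Monitor Service OData endpoint.

    # Minimal sketch: find the oldest and newest complete aggregation interval
    # for one granularity. Host name, entity set, and credentials are placeholders.
    import requests

    BASE = "http://dd-controller.example.com/Citrix/Monitor/OData/v4/data"  # placeholder host
    ENTITY = "DesktopGroupSummaries"   # assumed summary table to inspect
    GRANULARITY = 1440                 # 1440 = daily aggregates, 60 = hourly

    def interval_boundary(order):
        """Return the SummaryDate of the first row when sorted by `order` ("asc" or "desc")."""
        params = {
            "$filter": f"Granularity eq {GRANULARITY}",
            "$orderby": f"SummaryDate {order}",
            "$top": "1",
            "$select": "SummaryDate,Granularity",
        }
        resp = requests.get(
            f"{BASE}/{ENTITY}",
            params=params,
            headers={"Accept": "application/json"},
            auth=("DOMAIN\\monitor-reader", "password"),  # placeholder credentials
        )
        resp.raise_for_status()
        rows = resp.json().get("value", [])
        return rows[0]["SummaryDate"] if rows else None

    earliest = interval_boundary("asc")    # start of the oldest complete interval
    latest = interval_boundary("desc")     # start of the newest complete interval
    print(f"Complete intervals at granularity {GRANULARITY}: {earliest} .. {latest}")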

Granularity and retention

The granularity of aggregated data retrieved by Monitor is a function of the time span (T) requested. The rules are as follows (a sketch implementing them appears after the list):

  • 0 < T <= 30 days uses per-hour granularity
  • T > 31 days uses per-day granularity
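
A minimal sketch of this selection rule follows. The rules as written do not cover spans between 30 and 31 days, so the fallback in the sketch is an assumption.

    def granularity_for_span(days):
        """Granularity applied to aggregated data for a requested span, per the rules above."""
        if 0 < days <= 30:
            return "hour"
        if days > 31:
            return "day"
        # Assumption: spans between 30 and 31 days are not covered by the stated
        # rules; treat them as hourly here.
        return "hour"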

Requested data that does not come from aggregated data comes from the raw Session and Connection information. This data tends to grow fast and therefore has its own grooming setting. Grooming ensures that only relevant data is kept long term, which gives better performance while maintaining the granularity required for reporting.

| Setting name | Affected grooming | Retention days for Premium | Retention days for Advanced |
| --- | --- | --- | --- |
| GroomSessionsRetentionDays | Session and Connection records retention after Session termination | 90 | 31 |
| GroomFailuresRetentionDays | MachineFailureLog and ConnectionFailureLog records | 90 | 31 |
| GroomLoadIndexesRetentionDays | LoadIndex records | 90 | 31 |
| GroomDeletedRetentionDays | Machine, Catalog, DesktopGroup, and Hypervisor entities that have a LifecycleState of ‘Deleted’. This also deletes any related Session, SessionDetail, Summary, Failure, or LoadIndex records. | 90 | 31 |
| GroomSummariesRetentionDays | DesktopGroupSummary, FailureLogSummary, and LoadIndexSummary records. Aggregated data - daily granularity. | 365 | 31 |
| GroomMachineHotfixLogRetentionDays | Hotfixes applied to the VDA and Controller machines | 90 | 31 |
| GroomHourlyRetentionDays | Aggregated data - hourly granularity | 32 | 31 |
| GroomApplicationInstanceRetentionDays | Application Instance history | 90 | Not applicable |
| GroomNotificationLogRetentionDays | Notification Log records | 90 | Not applicable |
| GroomResourceUsageRawDataRetentionDays | Resource utilization data - raw data | 3 | 3 |
| GroomResourceUsageHourDataRetentionDays | Resource utilization summary data - hour granularity | 30 | 30 |
| GroomResourceUsageDayDataRetentionDays | Resource utilization summary data - day granularity | 365 | 31 |
| GroomProcessUsageRawDataRetentionDays | Process utilization data - raw data | 1 | 1 |
| GroomProcessUsageHourDataRetentionDays | Process utilization data - hour granularity | 7 | 7 |
| GroomProcessUsageDayDataRetentionDays | Process utilization data - day granularity | 30 | 30 |
| GroomSessionMetricsDataRetentionDays | Session metrics data | 1 | 1 |
| GroomMachineMetricDataRetentionDays | Machine metrics data | 3 | 3 |
| GroomMachineMetricDaySummaryDataRetentionDays | Machine metrics summary data | 365 | 31 |
| GroomApplicationErrorsRetentionDays | Application error data | 1 | 1 |
| GroomApplicationFaultsRetentionDays | Application failure data | 1 | 1 |

Caution:

You cannot modify the values in the Monitor Service database.

Retaining data for long periods has the following implications for table sizes:

  • Hourly data. If hourly data is allowed to stay in the database for up to two years, a site of 1000 delivery groups can cause the database to grow as follows:

    1000 delivery groups x 24 hours/day x 365 days/year x 2 years = 17,520,000 rows of data. Because the dashboard draws its data from this aggregation table, such a large volume of rows places significant demands on the database server and can have a dramatic impact on performance; a rough estimator of this arithmetic appears after this list.

  • Session and event data. This is the data that is collected every time a session is started and a connection or reconnection is made. For a large site (100,000 users), this data grows fast. For example, two years’ worth of these tables would amount to more than a TB of data, requiring a high-end enterprise-level database.
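
As a rough aid for sizing, the sketch below simply parameterizes the hourly-data arithmetic above; the result is a back-of-the-envelope row-count estimate, not a measured table size.

    def hourly_summary_rows(delivery_groups, retention_years):
        """Estimated rows of hourly aggregate data kept for the given retention period."""
        return delivery_groups * 24 * 365 * retention_years

    print(hourly_summary_rows(1000, 2))   # 17,520,000 rows, matching the example above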
