User Experience (UX) Factors
The UX Factors page provides insight into the factor-level and subfactor-level experience of the set of users you select in the UX dashboard.
Click any of the Excellent, Fair, or Poor UX categories in the UX dashboard to open the UX Factors page. The page quantifies the effect of factor and subfactor metrics on the user experience. It classifies the selected set of users based on their experience with the factors: Session Availability, Session Responsiveness, Session Resiliency, and Session Logon Duration. The selected users are further classified based on their experience with the subfactors within these factors. This drilldown enables you to identify the actual subfactor responsible for the poor experience of users in your environment.
How to use the User Experience (UX) Factors page?
To drill deeper into the factor metrics affecting the user experience, click the number in any of the Excellent, Fair, or Poor categories in the UX dashboard.
Consider a scenario where all Sites had 5 Excellent users, 841 Fair users, and 25 Poor users during the last month. To understand why 25 users faced a poor user experience, click the number 25 in the User Experience dashboard.
The User Experience (UX) factors screen with a drilldown of the factors affecting the User Experience of Poor users in all the Sites during the last month is displayed.
The left panel displays the selection filters for the User Experience and the factors. Use this to further filter the data.
Click the Selected users number to access the Self-Service Search page for the specific set of users.
The sections in the UX factors page classify the selected set of users further based on the factors Session Availability, Session Responsiveness, Session Resiliency, and Session Logon Duration. Expand (click >) each factor section to see the user classification based on experience across respective subfactors. The factors are sorted based on the number of users with poor factor experience.
The overall user experience classification might not match the user count at the factor level. Also, a poor experience with one or more factors does not necessarily mean an overall poor user experience.
Similarly, the user counts at the individual subfactor levels might not add up to the user count at the factor level. For example, a user with a high GPOs duration might not necessarily have a poor logon experience if the user's experience with the other subfactors was excellent.
The classification of users at factor and subfactor levels helps identify and troubleshoot the precise cause of poor overall user experience.
Not Categorized users
Not Categorized classification refers to the count of users whose experience cannot be classified under any of the Excellent, Fair, or Poor categories. This might happen at the factor or subfactor levels when the corresponding measurement is unavailable.
Apart from system errors in obtaining the measurements, a user experience might not be categorized for the following reasons.
- If a user fails to establish a session, the user is not categorized under any factor except Session Availability.
- Session Responsiveness subfactor measurements are available only when the user connects through a Citrix Gateway version 12.1 or later, configured with Citrix Analytics for Performance. For more information, see Configuring on-prem Citrix Gateway.
- To understand why a user's Session Logon Duration subfactor experience is not categorized, see the Session Logon Duration subfactors section.
Session Logon Duration
Session Logon Duration is the time taken to launch a session. It is measured as the period from the time the user connects from the Citrix Workspace app to the time when the app or desktop is ready to use. This section classifies users based on the session logon duration readings. The logon duration thresholds for classification of the experience as Excellent, Fair, or Poor are calculated dynamically. For more information on the Dynamic thresholds for Session Logon Duration, see the Dynamic Thresholds section.
Clicking the classified user count numbers leads to the Self-Service screen, which displays the actual performance factor measurements for the selected set of users.
Session Logon Duration is broken down into subfactors that represent individual phases in the complex launch sequence. Each row in the Session Logon Duration drilldown table represents the user categorization for the individual phases occurring during session launch. This helps troubleshoot and identify specific user logon issues.
The user counts in the Excellent, Fair, and Poor categories for each subfactor experience are displayed. Use this information to analyze the specific subfactor phases that might be contributing to a longer logon duration.
For example, if GPOs show the highest number of users facing poor experiences, review the GPO policies applicable for these users to help improve logon duration experience.
The last Not Categorized column displays the number of users for whom specific subfactor measurements are not available for the selected time period. Specific reasons are elaborated with individual subfactor descriptions.
Session Logon Duration subfactors
GPOs: The time taken to apply group policy objects during logon. The GPOs measurement is available only if Group Policy settings are configured and enabled on the virtual machines.
Profile Load: The time taken for the user profile to load. The Profile Load measurement is available only if profile settings are configured for the user or the virtual machine.
Profile Load Insights
The Insights column in the Session Logon Duration subfactor table currently provides possible reasons responsible for poor profile load experience.
Clicking the Possible Reasons link in the Insights column displays the number of users having a profile size larger than the average profile size of users with Excellent and Fair Profile Load experience. Profile sizes measured over the last 30 days are used to calculate the average. The insights are not derived if this data is unavailable.
Profile load insights are derived only when the slow profile load for a poor user is caused by the profile size. Slow profile load can also have other causes, such as a large number of files in the profile.
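The insight derivation described above can be sketched as follows. This is a minimal illustration, not the product's implementation; the function name, units, and the handling of a missing baseline are assumptions.

```python
def profile_size_insight(poor_user_sizes_mb: dict[str, float],
                         good_user_sizes_mb: list[float]) -> list[str]:
    """Return poor-experience users whose profile size exceeds the average
    profile size of users with an Excellent or Fair Profile Load experience."""
    if not good_user_sizes_mb:
        return []  # insight is not derived when baseline data is unavailable
    average = sum(good_user_sizes_mb) / len(good_user_sizes_mb)
    return [user for user, size in poor_user_sizes_mb.items() if size > average]
```

For example, if the Excellent and Fair users averaged 300 MB over the last 30 days, only poor-experience users with profiles larger than 300 MB would be flagged.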
Interactive Session: The time taken to hand off keyboard and mouse control to the user after the user profile has been loaded. It is normally the longest phase of the logon process.
Brokering: The time taken to decide which desktop to assign to the user.
VM Start: If the session required a machine start, this is the time taken to start the virtual machine. The measurement is not available for machines that are not power managed.
HDX Connection: The time taken to complete the steps required to set up the HDX connection from the endpoint to the virtual machine.
Authentication: The time taken to complete authentication to the remote session.
Logon Scripts: The time taken for the logon scripts to run. This measurement is available only if logon scripts are configured for the session.
Session Responsiveness
Once a session is established, the Session Responsiveness factor measures the screen lag that a user experiences while interacting with an app or desktop. Session Responsiveness is measured using the ICA Round Trip Time (ICA RTT), which represents the time elapsed from when the user presses a key until the graphical response is displayed back.
ICA RTT is measured as the sum of traffic delays in the server and endpoint machine networks, and the time taken to launch an application. ICA RTT is an important metric that gives an overview of the actual user experience.
The Session Responsiveness thresholds for classification of the experience as Excellent, Fair, or Poor are calculated dynamically. For more information on the Dynamic thresholds for Session Responsiveness, see the Dynamic Thresholds section.
The Session Responsiveness drilldown represents the classification of users based on the ICA RTT readings of their sessions. Clicking these numbers drills down into the metrics for that category. Users classified as Excellent in Session Responsiveness had highly responsive sessions, while Poor users faced lag in their sessions.
While the ICA RTT readings are obtained from the Citrix Virtual Apps and Desktops, the subfactor measurements are obtained from the Citrix Gateway. Hence, the subfactor values are available only when the user is connecting to an app or a desktop via a configured Citrix Gateway. For steps to configure Citrix Gateway with Citrix Analytics for Performance, see Configuring on-prem Citrix Gateway.
Further, these measurements are available only for sessions that are:
- launched from VDAs enabled for NSAP, and
- new CGP (Common Gateway Protocol) sessions, not reconnected sessions.
The rows in the Session Responsiveness drilldown table represent the user categorization in the subfactor measurements. For each subfactor, the number of users in each category is displayed in the Excellent, Fair, and Poor columns. This helps analyze the specific subfactor that is contributing to poor user experience.
For example, the highest number of Poor users recorded for Data Center Latency indicates an issue with the server-side network.
The last Not Categorized column displays the number of users for whom the specific subfactor measurement was not available during the selected time period.
Session Responsiveness subfactors
The following subfactors contribute to Session Responsiveness. However, the total ICA RTT is not a sum of the subfactor metrics, because only the ICA RTT subfactors that occur up to Layer 4 are measurable.
Data Center Latency: This is the latency measured from the Citrix Gateway to the server. A high Data Center Latency indicates delays due to a slow server network.
WAN Latency: This is the latency measured from the virtual machine to the Gateway. A high WAN Latency indicates sluggishness in the endpoint machine network. WAN latency increases when the user is geographically farther from the Gateway.
Host Latency: This measures the Server OS induced delay. A high ICA RTT with low Data Center and WAN latencies, and a high Host Latency indicates an application error on the host server.
A high number of poor users in any one of the subfactors helps you understand where the issue lies. You can troubleshoot the issue further using Layer 4 delay measurements. Note that none of these latency metrics account for packet loss, out-of-order packets, duplicate acknowledgments, or retransmissions; latency might increase in these cases.
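The triage pattern described for these subfactors can be sketched as a simple rule of thumb. This is an illustrative sketch only: the function name and the 100 ms threshold are assumptions for the example, not product defaults, and real troubleshooting should use the dynamic thresholds shown in the UI.

```python
def diagnose_high_ica_rtt(dc_latency_ms: float, wan_latency_ms: float,
                          host_latency_ms: float, threshold_ms: float = 100.0) -> str:
    """Point to the likely source of session lag from the subfactor readings.

    The threshold value here is an assumption for illustration.
    """
    if dc_latency_ms > threshold_ms:
        return "server-side network"       # high Data Center Latency
    if wan_latency_ms > threshold_ms:
        return "endpoint-side network"     # high WAN Latency
    if host_latency_ms > threshold_ms:
        return "host server"               # high Host Latency: Server OS or app delay
    return "inconclusive"
```

For instance, a high ICA RTT with low Data Center and WAN latencies but a high Host Latency would point at the host server rather than either network.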
Session Availability
Session Availability is calculated based on the failure rate: the rate of failed session connections with respect to the total number of attempted session connections.
The Session Availability experience is categorized based on the session failure rate as follows:
Excellent: The failure rate is less than 10%. An excellent Session Availability factor indicates that users are able to successfully connect to and use their apps or desktops.
Fair: The failure rate is 10–20%.
Poor: The failure rate is more than 20%. A large number of users with a poor Session Availability experience indicates that users are unable to connect to and use their sessions.
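The thresholds above amount to a simple classification by failure rate, which can be sketched as follows. This is a minimal illustration; the function name and the Not Categorized handling for missing measurements are assumptions, not the product's implementation.

```python
def classify_session_availability(failed: int, attempted: int) -> str:
    """Classify the Session Availability experience from the session failure rate."""
    if attempted == 0:
        return "Not Categorized"  # no measurement available
    rate = 100 * failed / attempted
    if rate < 10:
        return "Excellent"
    if rate <= 20:
        return "Fair"
    return "Poor"
```

For example, 25 failed connections out of 100 attempts is a 25% failure rate, which falls into the Poor category.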
Since failure to launch sessions disrupts user productivity, it is an important factor in quantifying the overall user experience.
The rows in the Session Availability drilldown table display the failure types, categorized with the number of users and the number of failures in each category. Use the listed failure types to troubleshoot the failures further.
For more information about the possible reasons within an identified failure type, see the Failure Reasons Troubleshooting document.
Session Resiliency
Session Resiliency indicates the number of times the Citrix Workspace app auto reconnected to recover from network disruptions. Auto reconnect keeps sessions active when network connectivity is interrupted. Users continue to see the application they are using until network connectivity resumes. An excellent Session Resiliency factor indicates a smooth user experience and fewer reconnects due to network disruptions.
Auto reconnect is enabled when the Session Reliability or the Auto Client Reconnect policies are in effect. When there is a network interruption on the endpoint, the following Auto reconnect policies come into effect:
- The Session Reliability policy comes into effect first (by default, for 3 minutes), during which the Citrix Workspace app tries to reconnect to the VDA.
- The Auto Client Reconnect policy comes into effect between 3 and 5 minutes, during which the endpoint machine tries to reconnect to the VDA.
For each user, the number of auto reconnects is measured in each 15-minute interval across the selected time period. The experience is classified as Excellent, Fair, or Poor based on the number of auto reconnects in the majority of the 15-minute intervals.
The Session Resiliency experience is categorized based on the reconnect rate as follows:
Excellent: In the majority of the 15-minute intervals in the chosen time period, there were no reconnects.
Fair: In the majority of the 15-minute intervals in the chosen time period, there was exactly one reconnect.
Poor: In the majority of the 15-minute intervals in the chosen time period, there was more than one reconnect.
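The majority-of-intervals classification above can be sketched as follows. This is an illustrative sketch, not the product's implementation; the function name and the Not Categorized handling for an empty measurement window are assumptions.

```python
from collections import Counter

def classify_session_resiliency(reconnects_per_interval: list[int]) -> str:
    """Classify by the category that holds for the majority of 15-minute intervals."""
    if not reconnects_per_interval:
        return "Not Categorized"  # no measurements for the selected period

    def bucket(reconnects: int) -> str:
        if reconnects < 1:
            return "Excellent"  # no reconnects in this interval
        if reconnects == 1:
            return "Fair"       # exactly one reconnect
        return "Poor"           # more than one reconnect

    counts = Counter(bucket(n) for n in reconnects_per_interval)
    return counts.most_common(1)[0][0]
```

For example, a user whose intervals recorded 0, 0, 0, and 2 reconnects is classified as Excellent, because most of the intervals had no reconnects.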