Nutanix AHV (Acropolis)
A Nutanix AHV connector configuration contains the credentials and storage container that the appliance needs to connect to Nutanix Acropolis.
You can use this connector configuration to access a specific location in your Nutanix environment when you:
- Package layers as part of creating a Platform or App layer, or as part of adding a version to a layer.
- Publish layered images to Nutanix.
Before you start
You can use your Nutanix Acropolis environment for creating layers and publishing layered images. Each connector configuration accesses a specific storage container in your Nutanix Acropolis environment where you can create your layers or publish layered images.
If you create layers in one container and publish layered images to another, you need a separate Nutanix Acropolis connector configuration for each container. Furthermore, it is important to publish each layered image to a container that is conveniently accessible to the systems you are provisioning with the published image. For more about connectors and connector configurations, see Connector configurations.
App Layering uses the Prism Element web console and does not support the Prism Central console.
When using Nutanix connectors, App Layering requires direct NFS access to the hosts to work correctly. In older versions of Nutanix AHV (5.6 and 5.7), this direct NFS access was not allowed if a Prism Element host or cluster was registered with Prism Central. Make sure that your Nutanix setup allows this access. For details about this issue on various Nutanix versions, see Adding layer versions with Nutanix fails with error: Failed to execute the script.
When configuring the Nutanix connector, be sure to enter the URL for the Prism Element console. If you use a Prism Central address in the connector configuration, you receive the error “internal error 500.”
Make sure that the appliance is added to your Nutanix allow list so that it can access the appropriate storage containers, as needed. You can do this by configuring the file system and container-level allow list settings. For details about configuring an allow list with Nutanix, see the Nutanix documentation.
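Before validating the connector, a quick sanity check is to confirm that the appliance's IP address falls within a subnet on the allow list. The sketch below uses Python's standard `ipaddress` module; the appliance IP and subnets are hypothetical placeholders, not values from your environment:

```python
import ipaddress

# Hypothetical values for illustration: the appliance IP address and the
# subnets currently on the Nutanix NFS allow list.
APPLIANCE_IP = "10.20.30.40"
ALLOW_LIST = ["10.20.30.0/24", "192.168.5.0/24"]

def appliance_is_allow_listed(appliance_ip, allow_list):
    """Return True if the appliance IP falls inside any allow-listed subnet."""
    ip = ipaddress.ip_address(appliance_ip)
    return any(ip in ipaddress.ip_network(subnet) for subnet in allow_list)

print(appliance_is_allow_listed(APPLIANCE_IP, ALLOW_LIST))  # True for the sample values
```

If this check fails for the subnet you configured, revisit the file system and container-level allow list settings before retrying the connector validation.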
The Nutanix connector configuration lets you define the credentials and container to use for a new configuration.
The fields are case-sensitive. Any values that you enter manually must match the case of the object in Nutanix, or validation fails.
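The practical effect of this case sensitivity amounts to an exact string comparison. In the sketch below, the container names are hypothetical placeholders standing in for objects returned by Nutanix:

```python
# Hypothetical container names standing in for objects at Nutanix.
containers_at_nutanix = ["LayerImages", "PublishedImages"]

def validate_container_name(name, known_names):
    """Exact, case-sensitive match, mirroring how manual entries are validated."""
    return name in known_names

print(validate_container_name("LayerImages", containers_at_nutanix))  # True
print(validate_container_name("layerimages", containers_at_nutanix))  # False: wrong case
```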
- Connector Configuration Name: A useful name to help identify this connector configuration.
- Web Console (Prism) Address: The host name (resolvable via DNS) or IP address of the Prism Web Console. This address is the same one that you use to access the Nutanix Prism Web Console.
- User Name/Password: Credentials used when interacting with the Nutanix system. The specified user must have sufficient privileges for the following operations:
  - VM operations:
    - power on/off
    - attach virtual disks
  - Image operations:
    - update (also known as upload)
  - Virtual disk operations:
    - attach to VMs
- Virtual Machine Template (recommended): A virtual machine template that can be used to clone a VM with the hardware settings for Nutanix, including memory, CPUs, and video settings. You can specify the host, datastore, and network for configuring the resulting VMs. Because Nutanix has no concept of a “template,” these “templates” are actual VMs. The OS version of the selected “template” must match the OS version that you are using for building layers or publishing layered images. The template must not have any disks attached and must have at least one network card attached. Otherwise, you see an error when trying to validate or save the configuration.
- Storage Container: Lets you select the storage container for the uploaded images (virtual disks, VHDs) and for the virtual disks created from those images. When creating app layers and OS layer versions, the storage container is mounted as an NFS mount point. Configure the allow list using the Nutanix web console or the Nutanix CLI tools, and add the appliance to the allow list for the cluster and for every storage container on the cluster, even those you are not using. Note: If the appliance is not allow-listed for the selected storage container, validation fails and the error is indicated on the storage container selection.
- Layer Disk Cache Size in GB (optional): Specifies the size of the cache allowed for each layer.
- Offload Compositing: Enables the layer packaging or image publishing process to run on the specified Nutanix server. This feature increases performance, and it lets you use a native disk format and either BIOS or UEFI virtual machines. Offload Compositing is enabled by default.
- Packaging Cache Size in GB (recommended): Amount of cache size space (in gigabytes) to use for packaging. Accept the recommended value or modify it.
Nutanix does not provide a mechanism for organizing virtual machines. As a result, it can be difficult to find the virtual machines that your appliance created when the total number of virtual machines is large. To help you find these VMs, the following naming conventions are used:
Packaging Machines (virtual machines created during the process of creating an App Layer or OS Version)
- The virtual machine name starts with the name of the layer being created or modified.
- The virtual machine name ends with the following text: (Packaging Machine)
Layered Image Virtual Machines (virtual machines created as a result of publishing a layered image)
- The virtual machine name starts with the name of the image that was published.
- The virtual machine name ends with the following text: (Published Image)
When viewing virtual machines through the Nutanix web console, you can search for virtual machines by filtering on:
- “Citrix App Layering” to find all virtual machines created by the App Layering service.
- “Citrix App Layering Packaging Machine” to find all virtual machines created for layer management jobs.
- “Citrix App Layering Published Image” to find all virtual machines created to publish a layered image.
- Image name or layer name to find virtual machines related to a specific layered image publishing job or App or OS creation.
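Outside the Nutanix web console, the same naming conventions can be applied to a list of VM names, for example when scripting cleanup or reporting against VMs exported from your cluster. The VM names below are hypothetical:

```python
# Hypothetical VM names following the conventions described above.
vm_names = [
    "Office OS Layer (Packaging Machine)",
    "Win10 Finance Image (Published Image)",
    "Unrelated VM",
]

def packaging_machines(names):
    """VMs created for layer management jobs."""
    return [n for n in names if n.endswith("(Packaging Machine)")]

def published_images(names):
    """VMs created to publish a layered image."""
    return [n for n in names if n.endswith("(Published Image)")]

print(packaging_machines(vm_names))  # ['Office OS Layer (Packaging Machine)']
print(published_images(vm_names))    # ['Win10 Finance Image (Published Image)']
```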
The virtual network settings of the source template specified in the Nutanix AHV connector configuration are carried over when creating any VMs through the Nutanix Acropolis Hypervisor (AHV) connector. There is no option in the connector configuration UI to override the network settings.
Create a connector configuration
To enter values:
- You must manually enter the first three Connector fields. Once the credentials in those fields are validated, you can select values for the remaining fields from the drop-down menus.
- To enter values manually, click to put the cursor in the field and type the value, making sure that the case matches the value in Acropolis.
- To select a value from a drop-down list, click once to put the cursor in the field, and a second time to display the list of possible values.
- Log in to the management console as an administrator.
- Select Connectors > Add connector configuration.
- Select Nutanix AHV from the connector Type drop-down menu and click New. This opens the connector configuration.
- Enter the configuration Name, the Acropolis Address, the User Name, and the Password. For guidance, see the field definitions above.
- Click the Connect button below the Acropolis Configuration fields. If the connection is successful, the Virtual Machine Clone Settings fields are enabled. Any connection problems are reported on the connector configuration blade. If server certificate errors are found, you see an Ignore Certificate Errors and Continue button.
- Select the Virtual Machine Template.
- Select the Storage Container.
- Click Confirm and Complete. If there are no errors, a summary page is displayed.
- Click Save. Verify that the new connector configuration is listed on the Connectors page.
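If you want to confirm the Prism Element address and credentials independently of the management console, you can query the Prism Element v2.0 REST API directly. The sketch below only builds the request URL and Basic authentication header; the host name and credentials are placeholders, and the `/PrismGateway/services/rest/v2.0/` path and default port 9440 are standard Prism Element values (verify them against your Nutanix version):

```python
import base64
from urllib.parse import quote

PRISM_PORT = 9440  # default Prism web console port

def prism_v2_url(prism_address, resource):
    """Build a Prism Element v2.0 REST endpoint URL for the given resource."""
    return (f"https://{prism_address}:{PRISM_PORT}"
            f"/PrismGateway/services/rest/v2.0/{quote(resource)}")

def basic_auth_header(user, password):
    """Build the HTTP Basic authentication header for the Prism credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder address and credentials for illustration only.
print(prism_v2_url("prism.example.com", "storage_containers"))
print(basic_auth_header("admin", "pw"))
```

An HTTP GET on this URL with the header attached should return the cluster's storage containers, which you can cross-check against the Storage Container drop-down in the connector configuration.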