Convert a Citrix ADC MPX 14020/14030/14040/14060/14080/14100 appliance to a Citrix ADC SDX 14020/14030/14040/14060/14080/14100 appliance
You can convert a Citrix ADC MPX appliance to a Citrix ADC SDX appliance by upgrading the software through a new solid-state drive (SSD). Citrix supplies a field conversion kit to migrate a Citrix ADC MPX appliance to an SDX appliance.
The conversion requires a minimum of four SSDs.
Note: Citrix recommends that you configure the Lights Out Management (LOM) port of the Citrix ADC appliance before starting the conversion process. For more information on the LOM port of the Citrix ADC appliance, see Lights out management port of the Citrix ADC SDX appliance.
To convert a Citrix ADC MPX appliance to an SDX appliance, you must access the appliance through a console cable attached to a computer or terminal. Before connecting the console cable, configure the computer or terminal to support the following configuration:
- VT100 terminal emulation
- 9600 baud
- 8 data bits
- 1 stop bit
- Parity and flow control set to NONE
Connect one end of the console cable to the RS232 serial port on the appliance, and the other end to the computer or terminal.
Note: To use a cable with an RJ-45 converter, insert the optional converter into the console port and attach the cable to it.
Citrix recommends that you connect a VGA monitor to the appliance to monitor the conversion process, because the LOM connection might be lost during the conversion process.
With the cable attached, verify that the MPX appliance’s components are functioning correctly. You are then ready to begin the conversion. The conversion process modifies the Basic Input-Output System (BIOS), installs the XenServer hypervisor and a Service Virtual Machine image, and copies the VPX image to the solid-state drive.
The conversion process also sets up a redundant array of independent disks (RAID) controller for local storage (SSD slots #1 and #2) and Citrix ADC VPX storage (SSD slots #3 and #4).
After the conversion process, you modify the configuration of the appliance and apply a new license. You can then provision the VPX instances through the Management Service on what is now an SDX appliance.
To verify proper operation of the MPX appliance’s components
1. Access the console port and enter the administrator credentials.
2. Run the following command from the command line interface of the appliance to display the serial number: show hardware
The serial number might be helpful in case you want to contact Citrix Technical Support.
show hardware
Platform: NSMPX-14000 12*CPU+16*IX+2*E1K+2*CVM N3 250101
Manufactured on: 10/2/2015
CPU: 2600MHZ
Host Id: 234913926
Serial no: JSW4UCKKM5
Encoded serial no: JSW4UCKKM5
Done
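If you want to record the serial number for a support case, it can be extracted from saved `show hardware` output. A minimal sketch, assuming the output has been captured to a file (the file name `hardware.txt` is illustrative; the contents mirror the example above):

```shell
# Extract the serial number from a saved `show hardware` capture.
# hardware.txt is an assumed capture file, not created by the appliance.
cat > hardware.txt <<'EOF'
Platform: NSMPX-14000 12*CPU+16*IX+2*E1K+2*CVM N3 250101
Manufactured on: 10/2/2015
CPU: 2600MHZ
Host Id: 234913926
Serial no: JSW4UCKKM5
Encoded serial no: JSW4UCKKM5
Done
EOF
# Match only the "Serial no:" line (the "Encoded serial no:" line has a lowercase s).
serial=$(awk '/^[[:space:]]*Serial no:/ {print $3}' hardware.txt)
echo "$serial"
```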
3. Run the following command to display the status of the active 10G interfaces: show interface
4. In the show interface command’s output, verify that all of the interfaces are enabled and the status of every interface is shown as UP/UP.
Note: If you do not have an SFP+ transceiver for every port, verify the interfaces in stages. After checking the first set of interfaces, unplug the SFP+ transceivers and plug them in to the next set of ports.
5. Run the following command for each interface that is not in the UP/UP state: enable interface 10/x, where x is the interface number.
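If you capture the `show interface` output to a file, interfaces that are not UP/UP can be filtered out rather than read by eye. A minimal sketch; the file name and the simplified sample lines below are illustrative, not real appliance output:

```shell
# Flag interfaces that are not UP/UP in a saved `show interface` capture.
# interfaces.txt and its contents are simplified sample data for illustration.
cat > interfaces.txt <<'EOF'
10/1: state ENABLED, link UP/UP
10/2: state ENABLED, link UP/UP
10/3: state ENABLED, link DOWN/DOWN
10/4: state ENABLED, link UP/UP
EOF
# Print the interface name (the field before the colon) for any non-UP/UP line.
down=$(awk -F: '!/UP\/UP/ {print $1}' interfaces.txt)
echo "$down"
```

Any interface the sketch prints would then be brought up with enable interface 10/x.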
6. Run the following command to verify that the status of the power supplies is normal: stat system -detail
7. Run the following command to generate a tar of system configuration data and statistics: show techsupport
Note: The output of the command is available in the /var/tmp/support/collector_<IP_address>_P_<date>.tar.gz file. Copy this file to another computer for future reference. The output of the command might be helpful in case you want to contact Citrix Technical Support.
8. At the command line interface, switch to the shell prompt. Type: shell
9. Run the following command to verify the number of Cavium cards available, depending upon your appliance: root@ns# grep "cavium" /var/nslog/dmesg.boot
root@ns# grep "cavium" /var/nslog/dmesg.boot Cavium cavium_probe : found card 0x177d,device=0x11 cavium0 mem 0xdd600000-0xdd6fffff irq 32 at device 0.0 on pci3 Cavium cavium_probe : found card 0x177d,device=0x11 cavium1 mem 0xfaa00000-0xfaafffff irq 64 at device 0.0 on pci136
10. Run the following command to verify the memory reserved for shared memory, depending upon your appliance: root@ns# grep "memory" /var/nslog/dmesg.boot
root@ns# grep "memory" /var/nslog/dmesg.boot real memory = 70866960384 (67584 MB) avail memory = 66267971584 (63198 MB)
11. Run the following command to verify the number of CPU cores, depending upon your appliance: root@ns# grep "cpu" /var/nslog/dmesg.boot
root@ns# grep "cpu" /var/nslog/dmesg.boot cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 2 cpu2 (AP): APIC ID: 4 cpu3 (AP): APIC ID: 6 cpu4 (AP): APIC ID: 8 cpu5 (AP): APIC ID: 10 cpu6 (AP): APIC ID: 32 cpu7 (AP): APIC ID: 34 cpu8 (AP): APIC ID: 36 cpu9 (AP): APIC ID: 38 cpu10 (AP): APIC ID: 40 cpu11 (AP): APIC ID: 42 cpu0: <ACPI CPU> on acpi0 acpi_throttle0: <ACPI CPU Throttling> on cpu0 cpu1: <ACPI CPU> on acpi0 acpi_throttle1: <ACPI CPU Throttling> on cpu1 cpu2: <ACPI CPU> on acpi0 acpi_throttle2: <ACPI CPU Throttling> on cpu2 cpu3: <ACPI CPU> on acpi0 acpi_throttle3: <ACPI CPU Throttling> on cpu3 cpu4: <ACPI CPU> on acpi0 acpi_throttle4: <ACPI CPU Throttling> on cpu4 cpu5: <ACPI CPU> on acpi0 acpi_throttle5: <ACPI CPU Throttling> on cpu5 cpu6: <ACPI CPU> on acpi0 acpi_throttle6: <ACPI CPU Throttling> on cpu6 cpu7: <ACPI CPU> on acpi0 acpi_throttle7: <ACPI CPU Throttling> on cpu7 cpu8: <ACPI CPU> on acpi0 acpi_throttle8: <ACPI CPU Throttling> on cpu8 cpu9: <ACPI CPU> on acpi0 acpi_throttle9: <ACPI CPU Throttling> on cpu9 cpu10: <ACPI CPU> on acpi0 acpi_throttle10: <ACPI CPU Throttling> on cpu10 cpu11: <ACPI CPU> on acpi0 acpi_throttle11: <ACPI CPU Throttling> on cpu11 root@ns#
12. Run the following command to verify that the /var drive is mounted as /dev/ad8s1e: root@ns# df -h
13. Run the following command to execute the ns_hw_err.bash script, which checks for latent hardware errors: root@ns# ns_hw_err.bash
Example
root@ns# ns_hw_err.bash
NetScaler NS10.1: Build 133.11.nc, Date: Sep 21 2015, 17:59:51
platform: serial JSW4UCKKM5
platform: sysid 250101 - NSMPX-14000 12*CPU+16*IX+2*E1K+2*CVM N3
HDD MODEL: ar0: 227328MB <Intel MatrixRAID RAID1> status: READY
Generating the list of newnslog files to be processed...
Generating the events from newnslog files...
Checking for HDD errors...
Checking for HDD SMART errors...
Checking for Flash errors...
/var/nslog/dmesg.prev:* DEVELOPER mode - run NetScaler manually!
******************************************
FOUND 1 Flash errors: DEVELOPER mode - run NetScaler manually
******************************************
Checking for SSL errors...
Checking for BIOS errors...
Checking for SMB errors...
Checking for MotherBoard errors...
Checking for CMOS errors...
License year: 2015: OK
License server failed at startup. Check /var/log/license.log
Vendor daemon failed at startup. Check /var/log/license.log
Checking for SFP/NIC errors...
Checking for Firmware errors...
Checking for License errors...
Checking for Undetected CPUs...
Checking for DIMM flaps...
Checking for LOM errors...
Checking the Power Supply Errors...
root@ns#
14. Important: Physically disconnect all ports except the LOM port, including the management port, from the network.
15. At the shell prompt, switch to the Citrix ADC command line. Type: exit
16. Run the following command to shut down the appliance: shutdown -p now
To upgrade the appliance
- Power off the Citrix ADC appliance.
- Locate two solid-state drives (SSDs) on the back of the appliance in slot #1 and slot #2, as shown in the following figure:
- Verify that the replacement SSDs are the ones required for your Citrix ADC model. The conversion requires a minimum of four SSDs. The SSD with the Citrix label on top is pre-populated with a new version of the BIOS and a recent build of the required SDX Management Service. This SSD must be installed in slot #1.
- Remove the SSDs by pushing the safety latch of the drive cover down while pulling the drive handle.
- On the new Citrix certified SSD, open the drive handle completely to the left, and then insert the new drive into slot #1 as far as possible.
To seat the drive, close the handle flush with the rear side of the appliance so that the drive locks securely into the slot.
Important: The orientation of the SSD is important. When you insert the drive, make sure that the Citrix product label is at the top.
- Insert a second Citrix certified SSD, which matches the capacity of the SSD in slot #1, in slot #2. Insert other blank Citrix certified SSDs in slots #3 and #4.
Note: If your appliance model is 14040, insert blank Citrix certified SSDs in slots #3, #4, #5, and #6. If your appliance model is 14060/14080/14100, insert blank Citrix certified SSDs in slots #3, #4, #5, #6, #7, and #8.
Important: Mixing and matching of old and new SSDs is not supported. SSDs in slot #1 and slot #2, which constitute the first RAID pair (local storage), must be of the same size and type. Similarly, SSDs in slot #3 and slot #4, which constitute the second RAID pair (VPX storage), must be of the same size and type. Do not use any drives that are not part of the provided conversion kit.
- Store the old SSDs for future handling.
- Disconnect all network cables from the data ports and the management ports.
- Start the Citrix ADC appliance. For instructions, see “Switching on the Appliance” in Installing the hardware. The conversion process can run for approximately 30 minutes, during which you must not power cycle the appliance. The entire conversion process might not be visible on the console and might appear to be unresponsive. The conversion process updates the BIOS, installs the XenServer hypervisor and the Management Service operating system, copies the Citrix ADC VPX image to the SSD for instance provisioning, and forms the RAID 1 pair. Note: The serial number of the appliance remains the same.
- Keep the console cable attached during the conversion process. Allow the process to complete, at which point the netscaler-sdx login: prompt appears.
- During the conversion process, the LOM port connection might be lost because the process resets the LOM IP address to the default value of 192.168.1.3. The conversion status output is available on the VGA monitor.
- To make sure that the conversion is successful, verify that the FVT result indicates success. Run the following command: tail /var/log/fvt/fvt.log
[root@netscaler-sdx ~]# tail /var/log/fvt/fvt.log
Wed, 28 Oct 2015 04:40:47 /opt/xensource/packages/files/fvt/workers/check_vf_count --pf_device="0000:89:00.1" --vf_count="40"
Wed, 28 Oct 2015 04:40:47 => PASS
Wed, 28 Oct 2015 04:40:47 /opt/xensource/packages/files/fvt/workers/check_vf_count --pf_device="0000:03:00.0" --vf_count="8"
Wed, 28 Oct 2015 04:40:47 => PASS
Wed, 28 Oct 2015 04:40:47 /opt/xensource/packages/files/fvt/workers/check_vf_count --pf_device="0000:88:00.0" --vf_count="8"
Wed, 28 Oct 2015 04:40:47 => PASS
Wed, 28 Oct 2015 04:40:47 FVT RESULT: SUCCESS!
[root@netscaler-sdx ~]#
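The success check can also be scripted by testing the log for the result line. A minimal sketch; it runs against an illustrative sample copy of the log rather than the appliance's /var/log/fvt/fvt.log:

```shell
# Check the FVT log for the success marker.
# fvt_sample.log is an assumed sample standing in for /var/log/fvt/fvt.log.
cat > fvt_sample.log <<'EOF'
Wed, 28 Oct 2015 04:40:47 => PASS
Wed, 28 Oct 2015 04:40:47 FVT RESULT: SUCCESS!
EOF
if grep -q 'FVT RESULT: SUCCESS' fvt_sample.log; then
    result="conversion succeeded"
else
    result="conversion failed - contact support"
fi
echo "$result"
```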
To reconfigure the converted appliance
After the conversion process, the appliance no longer has its previous working configuration. Therefore, you can access the appliance through a web browser only by using the default IP address: 192.168.100.1/16. Configure a computer on the 192.168.0.0 network and connect it directly to the appliance’s management port (0/1) with a crossover Ethernet cable, or access the SDX appliance through a network hub by using a straight-through Ethernet cable. Use the default credentials to log on (Username: nsroot and Password: nsroot), and then do the following:
- Select the Configuration tab.
- Verify that the System Resource section displays the accurate number of CPU cores, SSL cores, and the total memory for your SDX appliance.
- Select the System node and, under Set Up Appliance, click Network Configuration to modify the IP address of the Management Service.
- In the Configure Network Configuration dialog box, specify the following details:
- Interface*—The interface through which clients connect to the Management Service. Possible values: 0/1, 0/2. Default: 0/1.
- XenServer IP Address*—The IP address of the XenServer hypervisor.
- Management Service IP Address*—The IP address of the Management Service.
- Netmask*—The subnet mask for the subnet in which the SDX appliance is located.
- Gateway*—The default gateway for the network.
- DNS Server—The IP address of the DNS server.
*A mandatory parameter
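The Management Service IP address and the gateway must fall in the subnet defined by the netmask, or the appliance becomes unreachable after you click OK. A minimal sketch of that sanity check; the addresses below are hypothetical examples, not recommended values:

```shell
# Check that two addresses fall in the same subnet for a given netmask.
# The addresses used here are illustrative placeholders.
ip_to_int() {
    # Convert a dotted-quad address to a 32-bit integer.
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
ms_ip=$(ip_to_int 10.0.1.20)    # hypothetical Management Service IP
gw_ip=$(ip_to_int 10.0.1.1)     # hypothetical gateway
mask=$(ip_to_int 255.255.255.0) # hypothetical netmask
# Two addresses share a subnet when their masked network parts match.
if [ $(( ms_ip & mask )) -eq $(( gw_ip & mask )) ]; then
    same_subnet=yes
else
    same_subnet=no
fi
echo "$same_subnet"
```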
- Click OK. Connection to the Management Service is lost because the network information has changed.
- Connect the SDX appliance’s management port 0/1 to a switch to access it through the network. Browse to the IP address used above and log on with the default credentials.
- Apply the new licenses. For instructions, see SDX Licensing Overview.
- Navigate to Configuration > System and, in the System Administration group, click Reboot Appliance. Click Yes to confirm. You are now ready to provision the VPX instances on the NetScaler SDX appliance. For instructions, see Provisioning Citrix ADC Instances.