- Issues related to installation, licensing, upgrade, and uninstallation
- If -makeresponsefile is used with VxFS file system mounted, installer gives an error (4117011)
- Upgrades to 8.0.2 may cause configuration errors in VVR replication (4115707)
- Rolling upgrade from InfoScale 7.4.1 to 8.0 gets stuck during phase 1 (4037913)
- GAB processes fail to stop during a full upgrade from 6.2.1 to 8.0 (4055693)
- Switch fencing in enable or disable mode may not take effect if VCS is not reconfigured [3798127]
- LLT may fail to start after upgrade on Solaris 11 (3770835)
- On SunOS, drivers may not be loaded after a reboot [3798849]
- On Oracle Solaris, drivers may not be loaded after stop and then reboot [3763550]
- During an upgrade process, the AMF_START or AMF_STOP variable values may be inconsistent [3763790]
- Log messages are displayed when VRTSvcs is uninstalled on Solaris 11 [2919986]
- Cluster goes into STALE_ADMIN_WAIT state during upgrade from VCS 5.1 to 6.1 or later [2850921]
- Flash Archive installation not supported if the target system's root disk is encapsulated
- The installer fails to unload the GAB module during installation of SF packages [3560458]
- On Solaris 11, non-default ODM mount options are not preserved across a package upgrade (2745100)
- Upgrades from previous SF Oracle RAC versions may fail on Solaris systems (3256400)
- After a locale change, restart the vxconfig daemon (2417547, 2116264)
- Storage Foundation known issues
- InfoScale Volume Manager known issues
- Issues with host prefix values in case of NVMe disks (4017022)
- vradmin delsec fails to remove a secondary RVG from its RDS (3983296)
- Core dump issue after restoration of disk group backup (3909046)
- Failed verifydata operation leaves residual cache objects that cannot be removed (3370667)
- vxdisksetup -if fails on PowerPath disks of sizes 1T to 2T [3752250]
- VRAS verifydata command fails without cleaning up the snapshots created [3558199]
- Root disk encapsulation fails for root volume and swap volume configured on thin LUNs (3538594)
- SmartIO VxVM cache invalidated after relayout operation (3492350)
- Disk greater than 1TB goes into error state [3761474, 3269099]
- Importing an exported zpool can fail when DMP native support is on (3133500)
- Server panic after losing connectivity to the voting disk (2787766)
- Performance impact when a large number of disks are reconnected (2802698)
- device.map must be up to date before doing root disk encapsulation (2202047)
- Volume Manager (VxVM) might report false serial split brain under certain scenarios (1834513)
- After changing the preferred path from the array side, the secondary path becomes active (2490012)
- vxresize does not work with layered volumes that have multiple plexes at the top level (3301991)
- The vxcdsconvert utility is supported only on the master node (2616422)
- Re-enabling connectivity if the disks are in local failed (lfailed) state (2425977)
- Issues with the disk state on the CVM slave node when vxconfigd is restarted on all nodes (2615680)
- CVMVolDg agent may fail to deport CVM disk group when CVMDeportOnOffline is set to 1
- The vxsnap print command shows incorrect value for percentage dirty [2360780]
- When dmp_native_support is set to on, commands hang for a long time on SAN failures (3084656)
- File System (VxFS) known issues
- Warning messages sometimes appear in the console during system startup (2354829)
- On Solaris 11 U2, /dev/odm may show 'Device busy' status when the system mounts ODM [3661567]
- Oracle Disk Manager (ODM) may fail to start after upgrade to 8.0.2 on Solaris 11 [3739102]
- On the cluster file system, clone dispose may fail [3754906]
- VRTSvxfs verification reports error after upgrading to 8.0.2 [3463479]
- spfile created on VxFS and ODM may contain uninitialized blocks at the end (3760262)
- A restored volume snapshot may be inconsistent with the data in the SmartIO VxFS cache (3760219)
- The file system may hang when it has compression enabled (3331276)
- Switching the VVR logowner to another node causes the replication to pause (4114096)
- Secondary RVG creation using addsec command fails with a hostname not responding error (4113218)
- Syslog gets flooded with vxconfigd daemon V-5-1-15599 error messages (4115620)
- RVG goes into secondary log error state after secondary site reboot in CVR environments (4046182)
- vradmind may appear hung or may fail for the role migrate operation (3968642, 3968641)
- vradmin repstatus command reports secondary host as "unreachable" (3896588)
- vradmin functionality may not work after a master switch operation [2158679]
- Cannot relayout data volumes in an RVG from concat to striped-mirror (2129601)
- vradmin verifydata may report differences in a cross-endian environment (2834424)
- vradmin verifydata operation fails if the RVG contains a volume set (2808902)
- While vradmin commands are running, vradmind may temporarily lose heartbeats (3347656, 3724338)
- Write I/Os on the primary logowner may take a long time to complete (2622536)
- After performing a CVM master switch on the secondary node, both rlinks detach (3642855)
- Initial autosync operation takes a long time to complete for data volumes larger than 3TB (3966713)
- Issues related to the VCS engine
- Extremely high CPU utilization may cause HAD to fail to heartbeat to GAB [1744854]
- The hacf -cmdtocf command generates a broken main.cf file [1919951]
- Service group is not auto started on the node having incorrect value of EngineRestarted [2653688]
- Group is not brought online if top level resource is disabled [2486476]
- NFS resource goes offline unexpectedly and reports errors when restarted [2490331]
- Parent group does not come online on a node where child group is online [2489053]
- Cannot modify temp attribute when VCS is in LEAVING state [2407850]
- Service group may fail to come online after a flush and a force flush operation [2616779]
- The ha commands may fail for non-root user if cluster is secure [2847998]
- Running -delete -keys for any scalar attribute causes core dump [3065357]
- RemoteGroup agent and non-root users may fail to authenticate after a secure upgrade [3649457]
- Java console and CLI do not allow adding VCS user names starting with '_' character (3870470)
- Issues related to the bundled agents
- Entry points that run inside a zone are not cancelled cleanly [1179694]
- Solaris mount agent fails to mount Linux NFS exported directory
- The zpool command runs into a loop if all storage paths from a node are disabled
- Process and ProcessOnOnly agent rejects attribute values with white spaces [2303513]
- The zpool commands hang and remain in memory till reboot if storage connectivity is lost [2368017]
- Offline of zone resource may fail if zoneadm is invoked simultaneously [2353541]
- Password changed while using hazonesetup script does not apply to all zones [2332349]
- RemoteGroup agent does not failover in case of network cable pull [2588807]
- Share resource goes offline unexpectedly causing service group failover [1939398]
- Mount agent does not support all scenarios of loopback mounts
- Zone root configured on ZFS with ForceAttach attribute enabled causes zone boot failure (2695415)
- Error message is seen for Apache resource when zone is in transient state [2703707]
- Monitor falsely reports NIC resource as offline when zone is shutting down (2683680)
- NIC resource may fault during group offline or failover on Solaris 11 [2754172]
- NFS client reports error when server is brought down using shutdown command [2872741]
- NFS client reports I/O error because of network split brain [3257399]
- Mount resource does not support spaces in the MountPoint and BlockDevice attribute values [3335304]
- IP Agent fails to detect the online state for the resource in an exclusive-IP zone [3592683]
- SFCache Agent fails to enable caching if cache area is offline [3644424]
- RemoteGroup agent may stop working on upgrading the remote cluster in secure mode [3648886]
- (Solaris 11 x64) Application may not failover when a cable is pulled off from the ESX host [3842833]
- Issues related to the VCS database agents
- VCS ASMDG resource status does not match the Oracle ASMDG resource status (3962416)
- ASMDG agent does not go offline if the management DB is running on the same (3856460)
- Sometimes ASMDG reports as offline instead of faulted (3856454)
- The ASMInstAgent does not support having pfile/spfile for the ASM Instance on the ASM diskgroups
- VCS agent for ASM - Health check monitoring is not supported for ASMInst agent
- Oracle agent fails to offline pluggable database (PDB) resource with PDB in backup mode [3592142]
- Clean succeeds for PDB even as PDB status is UNABLE to OFFLINE [3609351]
- Second level monitoring fails if user and table names are identical [3594962]
- Issues related to Intelligent Monitoring Framework (IMF)
- AMF panics cluster node if some Solaris SRUs are installed (4057959)
- Registration error while creating a Firedrill setup [2564350]
- IMF does not fault zones if zones are in ready or down state [2290883]
- IMF does not detect the zone state when the zone goes into a maintenance state [2535733]
- VCS engine shows error for cancellation of reaper when Apache agent is disabled [3043533]
- Terminating the imfd daemon orphans the vxnotify process [2728787]
- Agent cannot become IMF-aware with agent directory and agent file configured [2858160]
- ProPCV fails to prevent a script from running if it is run with relative path [3617014]
- The cpsadm command fails if LLT is not configured on the application cluster (2583685)
- When I/O fencing is not up, the svcs command shows VxFEN as online (2492874)
- The vxfentsthdw utility may not run on systems installed with partial SFHA stack [3333914]
- Stale .vxfendargs file lets hashadow restart vxfend in Sybase mode (2554886)
- Storage Foundation Cluster File System High Availability known issues
- In an FSS environment, creation of mirrored volumes may fail for SSD media [3932494]
- CVMVOLDg agent is not going into the FAULTED state [3771283]
- The fsappadm subfilemove command moves all extents of a file (3258678)
- Certain I/O errors during clone deletion may lead to system panic (3331273)
- Panic due to null pointer de-reference in vx_bmap_lookup() (3038285)
- Importing Linux FS to AIX/Solaris/HP with fscdsconv -i option fails (4113627)
- Storage Foundation for Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Oracle database or grid installation using the product installer fails (4004808)
- CSSD configuration fails if OCR and voting disk volumes are located on Oracle ASM (3914497)
- PrivNIC and MultiPrivNIC agents not supported with Oracle RAC 11.2.0.2 and later versions
- CSSD agent forcibly stops Oracle Clusterware if Oracle Clusterware fails to respond (3352269)
- The vxconfigd daemon fails to start after machine reboot (3566713)
- Health check monitoring fails with policy-managed databases (3609349)
- PrivNIC resource faults in IPMP environments on Solaris 11 systems (2838745)
- Error displayed on removal of VRTSjadba language package (2569224)
- Volume Manager cannot identify Oracle Automatic Storage Management (ASM) disks (2771637)
- Oracle Universal Installer fails to start on Solaris 11 systems (2784560)
- CVM requires the T10 vendor provided ID to be unique (3191807)
- Change in naming scheme is not reflected on nodes in an FSS environment (3589272)
- Storage Foundation for Databases (SFDB) tools known issues
- Attempt to use certain names for tiers results in error (2581390)
- Clone operation failure might leave clone database in unexpected state (2512664)
- Clone command fails if PFILE entries have their values spread across multiple lines (2844247)
- Flashsnap clone fails under some unusual archivelog configuration on RAC (2846399)
- vxdbd process is online after Flash archive installation (2869269)
- In the cloned database, the seed PDB remains in the mounted state (3599920)
- If one of the PDBs is in the read-write restricted state, then cloning of a CDB fails (3516634)
- If a CDB has a tablespace in the read-only mode, then the cloning fails (3512370)
- Benign message displayed upon execution of vxsfadm -a oracle -s filesnap -o destroyclone (3901533)
LLT may fail to start after upgrade on Solaris 11 (3770835)
On Solaris 11, after you upgrade SF for Oracle RAC, VCS, SFHA, or SFCFSHA to the appropriate InfoScale 8.0.2 product, you may encounter the following error:
"llt failed to start on systemName"
Workaround:
To resolve this issue, restart the system, and then run the following command:
# /opt/VRTS/install/installer -start
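A minimal sketch of the full sequence on the affected node, assuming the node can be rebooted safely; the reboot command and the status checks below are illustrative additions rather than part of the documented workaround, so adjust them for your environment:
# shutdown -y -g0 -i6
# /opt/VRTS/install/installer -start
# lltconfig
# lltstat -nvv
After the restart, lltconfig should report that LLT is running, and lltstat -nvv should list the cluster nodes that LLT can reach.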