Issues related to installation, licensing, upgrade, and uninstallation
- Rolling upgrade from InfoScale 7.4.1 to 8.0 gets stuck during phase 1 (4037913)
- Uninstallation fails if the system panics while uninstalling VRTSvxvm in a DMP environment (4055903)
- Switch fencing in enable or disable mode may not take effect if VCS is not reconfigured [3798127]
- During an upgrade process, the AMF_START or AMF_STOP variable values may be inconsistent [3763790]
- NetBackup 6.5 or an older version is installed on a VxFS file system (2056282)
- After a locale change, restart the vxconfig daemon (2417547, 2116264)
- Unable to update edge server details by running the installer (3964611)

Storage Foundation known issues

InfoScale Volume Manager known issues
- Issues with host prefix values in the case of NVMe disks (4017022)
- vradmin delsec fails to remove a secondary RVG from its RDS (3983296)
- Core dump issue after restoration of a disk group backup (3909046)
- Failed verifydata operation leaves residual cache objects that cannot be removed (3370667)
- Mounting CFS under VVR may fail after rolling upgrade phase 1 on one node [3764652]
- VRAS verifydata command fails without cleaning up the snapshots created [3558199]
- SmartIO VxVM cache invalidated after a relayout operation (3492350)
- Performance impact when a large number of disks are reconnected (2802698)
- device.map must be up to date before doing root disk encapsulation (2202047)
- Volume Manager (VxVM) might report false serial split brain under certain scenarios (1834513)
- vxresize does not work with layered volumes that have multiple plexes at the top level (3301991)
- The vxcdsconvert utility is supported only on the master node (2616422)
- Re-enabling connectivity if the disks are in the local failed (lfailed) state (2425977)
- Issues with the disk state on the CVM slave node when vxconfigd is restarted on all nodes (2615680)
- CVMVolDg agent may fail to deport a CVM disk group when CVMDeportOnOffline is set to 1
- The vxsnap print command shows an incorrect value for percentage dirty [2360780]
- Mksysb restore fails if physical volumes have identical PVIDs (3133542)
- vxconfigd daemon hangs when InfoScale is run on AIX 7.2 SP1 or any earlier version (3901325)

File System (VxFS) known issues
- The fsappadm subfilemove command moves all extents of a file [3760225]
- dchunk_enable does not get set through vxtunefs on AIX (3551030)
- Cannot use some commands from inside an automounted Storage Checkpoint (2490709)
- A restored volume snapshot may be inconsistent with the data in the SmartIO VxFS cache (3760219)
- The file system may hang when it has compression enabled (3331276)
- Unaligned large reads may lead to performance issues (3064877)

Replication known issues
- Switching the VVR logowner to another node causes the replication to pause (4114096)
- Secondary RVG creation using the addsec command fails with a hostname not responding error (4113218)
- Syslog gets flooded with vxconfigd daemon V-5-1-15599 error messages (4115620)
- RVG goes into the secondary log error state after a secondary site reboot in CVR environments (4046182)
- vradmind may appear hung or may fail for the role migrate operation (3968642, 3968641)
- vradmin repstatus command reports the secondary host as "unreachable" (3896588)
- vradmin functionality may not work after a master switch operation [2158679]
- Cannot relayout data volumes in an RVG from concat to striped-mirror (2129601)
- vradmin verifydata may report differences in a cross-endian environment (2834424)
- vradmin verifydata operation fails if the RVG contains a volume set (2808902)
- While vradmin commands are running, vradmind may temporarily lose heartbeats (3347656, 3724338)
- Write I/Os on the primary logowner may take a long time to complete (2622536)
- After performing a CVM master switch on the secondary node, both rlinks detach (3642855)
- The initial autosync operation takes a long time to complete for data volumes larger than 3 TB (3966713)

Issues related to the VCS engine
- Extremely high CPU utilization may cause HAD to fail to heartbeat to GAB [1744854]
- The hacf -cmdtocf command generates a broken main.cf file [1919951]
- VCS fails to validate the processor ID while performing CPU binding [2441022]
- Service group is not auto-started on a node that has an incorrect value of EngineRestarted [2653688]
- Group is not brought online if the top-level resource is disabled [2486476]
- NFS resource goes offline unexpectedly and reports errors when restarted [2490331]
- Parent group does not come online on a node where the child group is online [2489053]
- Cannot modify the temp attribute when VCS is in the LEAVING state [2407850]
- Service group may fail to come online after a flush and a force flush operation [2616779]
- The system sometimes displays an error message with vcsencrypt or vcsdecrypt [2850899]
- The ha commands may fail for a non-root user if the cluster is secure [2847998]
- Every ha command takes longer to execute on secure FIPS-mode clusters [2847997]
- Running -delete -keys for any scalar attribute causes a core dump [3065357]
- RemoteGroup agent and non-root users may fail to authenticate after a secure upgrade [3649457]
- Java console and CLI do not allow adding VCS user names starting with the '_' character (3870470)

Issues related to the bundled agents
- MultiNICB resource may show unexpected behavior with the IPv6 protocol [2535952]
- LPAR agent may not show the correct state of LPARs [2425990]
- RemoteGroup agent does not fail over in case of a network cable pull [2588807]
- VCS does not monitor applications inside an already existing WPAR [2494532]
- Error messages for a wrong HMC user and HMC name do not communicate the correct problem
- LPAR agent may dump core when all configured VIOS are down [2850898]
- NFS client reports an I/O error because of network split brain [3257399]
- Mount resource does not support spaces in the MountPoint and BlockDevice attribute values [3335304]
- Mount agent fails to online the Mount resource due to an OS issue [3508584]
- SFCache agent fails to enable caching if the cache area is offline [3644424]
- RemoteGroup agent may stop working on upgrading the remote cluster in secure mode [3648886]

Issues related to the VCS database agents
- VCS ASMDG resource status does not match the Oracle ASMDG resource status (3962416)
- ASMDG agent does not go offline if the management DB is running on the same (3856460)
- Sometimes ASMDG reports as offline instead of faulted (3856454)
- The ASMInstAgent does not support having the pfile/spfile for the ASM instance on the ASM disk groups
- VCS agent for ASM: health check monitoring is not supported for the ASMInst agent
- IMF registration fails if the Sybase server name is given at the end of the configuration file [2365173]
- Oracle agent fails to offline a pluggable database (PDB) resource with the PDB in backup mode [3592142]
- Clean succeeds for a PDB even when the PDB status is UNABLE to OFFLINE [3609351]
- Second-level monitoring fails if user and table names are identical [3594962]

Issues related to Intelligent Monitoring Framework (IMF)
- Registration error while creating a Firedrill setup [2564350]
- VCS engine shows an error for cancellation of the reaper when the Apache agent is disabled [3043533]
- Terminating the imfd daemon orphans the vxnotify process [2728787]
- Agent cannot become IMF-aware with the agent directory and agent file configured [2858160]
- ProPCV fails to prevent a script from running if it is run with a relative path [3617014]

Storage Foundation Cluster File System High Availability known issues
- After an SFCFSHA upgrade is completed, the installer may fail to stop the veki module (4001089)
- In an FSS environment, creation of mirrored volumes may fail for SSD media [3932494]
- CVMVOLDg agent does not go into the FAULTED state [3771283]
- The fsappadm subfilemove command moves all extents of a file (3258678)
- Certain I/O errors during clone deletion may lead to a system panic (3331273)
- Panic due to a null pointer dereference in vx_bmap_lookup() (3038285)
- Importing a Linux file system to AIX/Solaris/HP-UX with the fscdsconv -i option fails (4113627)

Storage Foundation for Oracle RAC known issues

Storage Foundation Oracle RAC issues
- Oracle database or grid installation using the product installer fails (4004808)
- CSSD configuration fails if the OCR and voting disk volumes are located on Oracle ASM (3914497)
- PrivNIC and MultiPrivNIC agents are not supported with Oracle RAC 11.2.0.2 and later versions
- CSSD agent forcibly stops Oracle Clusterware if Oracle Clusterware fails to respond (3352269)
- The vxconfigd daemon fails to start after a machine reboot (3566713)
- Health check monitoring fails with policy-managed databases (3609349)
- Volume Manager cannot identify Oracle Automatic Storage Management (ASM) disks (2771637)
- CVM requires the T10 vendor-provided ID to be unique (3191807)
- Change in the naming scheme is not reflected on nodes in an FSS environment (3589272)

Storage Foundation for Databases (SFDB) tools known issues
- Filesnap clone operation fails for a PDB when OMF is enabled (4001463)
- The database clone operation using the vxsfadm -o clone(1M) command fails (3313715)
- In an off-host scenario, a clone operation may fail with an error message (3313572)
- Attempting to use certain names for tiers results in an error (2581390)
- Clone operation failure might leave the clone database in an unexpected state (2512664)
- Clone command fails if PFILE entries have their values spread across multiple lines (2844247)
- Flashsnap clone fails under certain unusual archivelog configurations on RAC (2846399)
- If one of the PDBs is in the read-write restricted state, then cloning of a CDB fails (3516634)
- If a CDB has a tablespace in read-only mode, then the cloning fails (3512370)
- Benign message displayed upon execution of vxsfadm -a oracle -s filesnap -o destroyclone (3901533)

In an FSS environment, creation of mirrored volumes may fail for SSD media [3932494]
In an FSS environment where SSD devices are used from the Storage Access Layer (SAL), the creation of mirrored volumes may fail if vxconfigd is restarted on the master node. This issue occurs because the mediatype attribute of a device is inconsistently propagated from the kernel during vxconfigd startup.
Workaround: Before creating the disk group, explicitly set the media type attribute of each disk to SSD:
vxdisk set -f diskname mediatype=ssd
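
For example, on a cluster where two SSD devices are exported for FSS, the workaround could be applied as in the following sketch before the disk group is created. The disk names ssd_disk1 and ssd_disk2, the disk group name ssddg, and the volume name mirrorvol are illustrative, not taken from the original issue:

# Force the media type on every disk that will back the mirrored volume
# (hypothetical disk names; substitute the disks in your FSS configuration)
vxdisk set -f ssd_disk1 mediatype=ssd
vxdisk set -f ssd_disk2 mediatype=ssd

# Then create the disk group and the mirrored volume as usual; any options
# specific to a shared or FSS disk group are omitted from this sketch
vxdg init ssddg ssd_disk1 ssd_disk2
vxassist -g ssddg make mirrorvol 10g layout=mirror nmirror=2

Because the -f option forces the attribute onto the disk records, the mirrored volume creation no longer depends on the value being propagated correctly from the kernel at vxconfigd startup.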