



Hello there! As the title hints, this post is about solving false alerts generated in SCOM for non-existent clustered VMs / resources.

I recently came across a situation where SCOM was generating a lot of false alerts for Hyper-V 2012 R2 clustered resources, reporting that VM resource groups were in a critical state. However, those VMs had been deleted from the cluster, and the cluster resource monitoring MP should only monitor resources that actually exist on the cluster. Alerts raised by an alert monitor for deleted cluster resources have to be closed manually in SCOM, because the monitor keeps checking the non-existent resource for a state change, and even after doing this, the alerts for the deleted VMs keep coming back in the console.

The whole problem started when a couple of nodes in the Hyper-V cluster were placed in maintenance mode for an activity and shut down as part of the process. During this period, a couple of VMs were deleted through the VMM management server, and those VMs were removed from the cluster as expected. However, SCOM picked up data only from the online cluster nodes and could not learn about the deleted VMs from the offline ones. When the shut-down Hyper-V hosts were brought back online, SCOM started behaving strangely: it still believed the deleted VMs were on those hosts and generated a lot of false alerts reporting that the deleted VMs were in a critical state.

At this point, the data SCOM holds in its database is inconsistent. There is no way to remove a clustered resource from the cluster management pack view/dashboard; we can only place the resource group in maintenance mode (MM).
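If placing the resource group in MM is all you need for the moment, it can also be scripted from the Operations Manager shell. Below is a minimal sketch only; the management server name, the display name filter and the 4-hour window are placeholder values I am assuming for illustration.

# Minimal sketch, assuming the OperationsManager module on a management server
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "SCOMMS01"    # placeholder management server

# Find the alerting cluster resource group instance by display name (placeholder name)
$instance = Get-SCOMClassInstance -DisplayName "*DeletedVM01*"

# Put it into maintenance mode for 4 hours to quiet the false alerts
Start-SCOMMaintenanceMode -Instance $instance -EndTime (Get-Date).AddHours(4) -Reason "PlannedOther" -Comment "Deleted cluster resource, false alert"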

To resolve this bug / data inconsistency in SCOM, the cluster monitoring management packs must be deleted and then imported again. This has to be done when all cluster nodes are back online and active in the cluster, so the MPs can pick up the data from every Hyper-V node.

Any custom management packs that depend on the cluster MPs need to be exported from the Administration view of SCOM first. After deleting all the cluster MPs, re-importing them into SCOM along with the custom MPs fixes the issue. It takes about an hour or more for the cluster status to be picked up and updated.
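The same export / delete / re-import cycle can be driven from the Operations Manager shell as well. The sketch below is illustrative only: the folder paths are placeholders, the wildcard name filter is an assumption, so confirm the exact cluster MP names in your environment, and keep the original sealed .mp files handy for the re-import.

# Illustrative sketch from the Operations Manager shell; verify MP names before removing anything
Import-Module OperationsManager

# 1. Back up unsealed/custom MPs (removal of the cluster MPs fails while dependent MPs still reference them)
Get-SCOMManagementPack | Where-Object { -not $_.Sealed } | Export-SCOMManagementPack -Path "D:\MPBackup"

# 2. Remove the cluster monitoring MPs (the name filter here is an assumption; check yours first)
Get-SCOMManagementPack -Name "*Cluster*" | Remove-SCOMManagementPack

# 3. Re-import the original cluster MP files, then the exported custom MPs
Get-ChildItem "D:\MPSource\*.mp" | ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }
Get-ChildItem "D:\MPBackup\*.xml" | ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }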


Howdy! Today’s blog post is all about Microsoft’s Windows Server Failover Clustering. I’ve noticed a couple of limitations in Windows Server Failover Clustering (WSFC). I’m going to keep adding limitations as I identify them, so keep checking back.

 

First up is a Shared VHDX issue. Shared VHDX is a clustered storage feature introduced in Windows Server 2012 R2 for nodes participating in a Windows Server cluster. If you’re wondering what Shared VHDX is and how it works, please see here.
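For context, this is roughly how a VHDX gets attached to two guest cluster nodes in shared mode from the Hyper-V side (a sketch only; the VM names and the Cluster Shared Volume path are placeholders):

# Sketch: attach the same VHDX to both guest cluster nodes in shared mode (2012 R2 syntax)
# VM names and the CSV path are placeholders
$sharedDisk = "C:\ClusterStorage\Volume1\Shared\Data01.vhdx"
Add-VMHardDiskDrive -VMName "SQLNODE1" -ControllerType SCSI -Path $sharedDisk -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQLNODE2" -ControllerType SCSI -Path $sharedDisk -SupportPersistentReservations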

 

So, say that in a two-node Shared VHDX cluster you attach 4 disks to the cluster resource, taking SQL Server as the example here, and both cluster nodes have these 4 disks attached in Shared VHDX mode. This presents the storage as shared storage, so both nodes in the cluster can see it. Now, if I want to move SQL Server from Node A to Node B, all the shared VHDXs on the owning node go into a reserve state and come online on Node B, since we have moved SQL Server to Node B; the SQL-associated disks and other components move along with it.
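For reference, the move and a quick look at the resources that travel with the group can be done with the failover clustering cmdlets; the group and node names below are only examples:

# Example only: group and node names are placeholders
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "NODEB"

# List the resources (including the physical disks) owned by the group and their state
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" | Get-ClusterResource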

Cluster Resource Move Failure

Now, if for some reason one of the shared disks is not presented to Node B via the Hyper-V Manager settings, failing over SQL Server to Node B will fail. The only error you get is “Cluster disk not connected”. Generating the cluster logs via PowerShell with “Get-ClusterLog -UseLocalTime -TimeSpan 5 -Destination D:\logs” gives you nothing more than the entry below.

 

“ERR   [RCM] rcm::RcmApi::MoveGroup: ERROR_CLUSTER_DISK_NOT_CONNECTED(5963)’ because of ‘Move of group SQL Server (MSSQLSERVER) to node CLUSTERNODE2 is not approved’”

 

Now, the limitation I’m talking about here is that the cluster doesn’t help you identify exactly which Shared VHDX is not visible to Node B. If Disk 2 is not presented to Node B, the cluster knows in the background that it is failing to bring cluster Disk 2 online on Node B, so it should log something like “Bringing Disk 1 online on Node X: pass; bringing Disk 2 online on Node X: fail”, which would help you identify the missing Shared VHDX on the nodes.

In the above command I’ve used a time span of 5 minutes to pull the cluster logs. This saves me from generating a huge file and reading through all the unwanted entries, since I only tried to move SQL Server off Node A within the last 5 minutes.
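Even a 5-minute log can run to a few thousand lines per node, so I usually just search the generated files for error entries. A quick sketch, reusing the same destination folder as above:

# Generate a 5-minute cluster log and search it for error entries
Get-ClusterLog -UseLocalTime -TimeSpan 5 -Destination "D:\logs"
Select-String -Path "D:\logs\*.log" -Pattern "ERROR_CLUSTER_DISK_NOT_CONNECTED", " ERR "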

Now, you may feel you can use Disk Management to spot the differences between the disks, but that only works if you have a few disks and they are all different sizes. If you have 15 or so storage disks presented via Hyper-V and almost all of them are the same size, say 500 GB, it is a waste of time to go through all those disk numbers comparing the disks on each node side by side.
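This is exactly the kind of comparison I would rather script from the Hyper-V host side. A rough sketch, assuming the guest cluster nodes are VMs named SQLNODE1 and SQLNODE2 running on hosts HVHOST1 and HVHOST2 (all placeholder names); anything attached to only one VM is a candidate for the missing shared VHDX:

# Rough sketch: compare the VHDX paths attached to the two guest cluster nodes
# VM and host names are placeholders
$disksA = Get-VMHardDiskDrive -ComputerName "HVHOST1" -VMName "SQLNODE1" | Select-Object -ExpandProperty Path
$disksB = Get-VMHardDiskDrive -ComputerName "HVHOST2" -VMName "SQLNODE2" | Select-Object -ExpandProperty Path

# A path that shows up on only one side points to the VHDX that was never presented to the other node
Compare-Object -ReferenceObject $disksA -DifferenceObject $disksB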

 

Now, although I’m describing this limitation from a Shared VHDX perspective, it also applies to SAN storage presented to the cluster nodes via EMC PowerPath or something similar. With SAN storage presented directly to the cluster nodes, though, we can use the PowerPath console to identify the missing disks using the reference naming convention applied to label the disks when they were zoned to the cluster nodes. Still, I feel this is a limitation in Windows clustering that needs to be addressed as soon as possible.

 

And then, with Shared VHDX there’s a big issue with redirected I/O that can kill your critical applications because of poor disk performance. A cluster resource with heavy disk utilisation should not use Shared VHDX as its storage for this reason. I will write more about the redirected I/O issue in a separate post.
