


Howdy! Today’s blog post is all about Microsoft’s Windows Server Failover Clustering (WSFC). I’ve noticed a couple of limitations in WSFC, and I’m going to keep adding to the list as I identify more, so keep checking back.

 

First up, the Shared VHDX issue. Shared VHDX is a clustered storage feature introduced in Windows Server 2012 R2 that lets the participating nodes of a Windows Server cluster share a virtual hard disk. If you’re wondering what Shared VHDX is and how it works, please see here.

 

So, say you have a two-node Shared VHDX cluster and you attach 4 disks to a cluster resource, SQL Server in this example, with both cluster nodes having these 4 disks attached in Shared VHDX mode. This presents the storage as shared storage, so both nodes in the cluster can see it. Now, if I want to move SQL Server from Node A to Node B, all the Shared VHDXs on the SQL-owning node go into Reserve state and come online on Node B, since we moved SQL Server there; the SQL-associated disks and other components move along with it.
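If you prefer scripting the move over Failover Cluster Manager, here’s a minimal sketch using the FailoverClusters PowerShell module (the group and node names are just the examples from this post; substitute your own):

```powershell
Import-Module FailoverClusters

# Move the SQL Server group (and its dependent shared disks) to Node B
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "NodeB"

# Confirm which node now owns the group, and the state of the disk resources
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)"
Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"
```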

Cluster Resource Move failure

Now, if for some reason one of the shared disks is not presented to Node B in the Hyper-V Manager settings, failing SQL Server over to Node B will fail. The only error you get is “Cluster disk not connected”, and generating the cluster logs via PowerShell with “Get-ClusterLog -UseLocalTime -TimeSpan 5 -Destination D:\logs” yields only the entry below.

 

“ERR   [RCM] rcm::RcmApi::MoveGroup: ERROR_CLUSTER_DISK_NOT_CONNECTED(5963)’ because of ‘Move of group SQL Server (MSSQLSERVER) to node CLUSTERNODE2 is not approved’”

 

Now, the limitation I’m talking about here is that the cluster doesn’t help you identify which exact Shared VHDX is not visible to Node B. If Disk 2 is not presented to Node B, the cluster knows in the background that it is failing to bring cluster Disk 2 online on Node B, so it should log something like “Bringing Disk 1 online on Node B: Pass; Bringing Disk 2 online on Node B: Fail”, which would let you identify the missing Shared VHDX on the nodes.

In the command above I used a timespan of 5 minutes to pull only the recent cluster logs. This avoids generating a big file and reading through a lot of unwanted entries, since I had tried to move SQL Server off Node A only within the last 5 minutes.
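If you want to jump straight to the relevant entries, here’s that same log-gathering step as a small sketch that also filters the generated logs for the disk error (the path and 5-minute window are just the values used above):

```powershell
# Generate the last 5 minutes of cluster log from every node
Get-ClusterLog -UseLocalTime -TimeSpan 5 -Destination D:\logs

# Pull out only the lines that mention the failing disk move
Get-ChildItem D:\logs\*.log |
    Select-String -Pattern "DISK_NOT_CONNECTED", "MoveGroup" |
    Select-Object -First 20
```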

Now, you may think you can use Disk Management to spot the differences between the disks, and that works if you have only a few disks of clearly different sizes. But if you have 15 or so storage disks presented via Hyper-V and almost all of them are the same size, say 500 GB each, it’s a waste of time to go through all those disk numbers comparing the disks on each node side by side.
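As a workaround, you can let PowerShell do that side-by-side comparison for you. A rough sketch of the check I wish the cluster did itself, assuming PowerShell remoting is enabled on both nodes (“NodeA” and “NodeB” are placeholders):

```powershell
# List the disks each node can see, keyed by serial number
$disksA = Invoke-Command -ComputerName "NodeA" { Get-Disk | Select-Object Number, SerialNumber, Size }
$disksB = Invoke-Command -ComputerName "NodeB" { Get-Disk | Select-Object Number, SerialNumber, Size }

# Serial numbers present on Node A but missing on Node B point to the
# Shared VHDX that was never attached to Node B
Compare-Object $disksA.SerialNumber $disksB.SerialNumber
```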

 

Now, when I call this a limitation from the Shared VHDX perspective, it also applies to SAN storage presented to the cluster nodes via EMC PowerPath or the like. In that case, with the SAN storage presented directly to the cluster node, we can use the PowerPath console to identify the missing disks using the reference naming convention applied to the LUNs during zoning. But I still feel this is a limitation in Windows clustering that badly needs to be addressed.
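For the PowerPath case, that console check boils down to a one-liner you can run on each node and compare (assuming the PowerPath CLI is installed on the nodes):

```powershell
# List all PowerPath-managed devices with their logical device IDs;
# a LUN missing on one node stands out when you run this on both nodes
powermt display dev=all
```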

 

And with Shared VHDX there’s also a big issue with redirected I/O that can kill your critical applications through poor disk performance. A cluster resource with heavy disk utilisation should not use Shared VHDX as its storage for this reason. I will write more about this redirected I/O issue in a separate post.


I have been haunted by a weird TCP spurious retransmissions and TCP DUP ACK issue for the past month; it started (or at least I first noticed it) in the last week of November. Our production FTP server is a Red Lion device (see here) sitting in our manufacturing site, whereas our source servers are hosted on Hyper-V clusters. This setup has no firewalls, only Cisco Nexus switches, 3064 and 3048 models: three 3064s and two 3048s connected in an HA model. Our Hyper-V clusters are connected to the Cisco 3064 switches in an HA model, with 2 NIC cables pulled from each VM host to 2 of the 3064 switches for HA. The Red Lion FTP/HTTP device is attached to a 3048. The 3064s are connected to the 3048 switches directly; no firewalls in between.

STP is configured properly and running A-okay. Other than to the Red Lion device, I am able to route traffic as desired and reach data transfer rates of 250 MB/s. But when this same Red Lion device is moved to a different network that has Cisco Catalyst switches, the device works fine, with no retransmission issues.

There are a lot of packet retransmissions happening just before the FTP application fails with an error. BTW, I am using the FileZilla client to transfer data to the FTP box. The same happens when browsing the FTP/HTTP site hosted on the Red Lion box via IE from my machines.

TCP Retransmissions

Wireshark Analysis

I’ve analysed the network connection between the servers in question and noticed a lot of packet retransmissions. TCP “RST” (reset) packets and “spurious retransmissions” (the source retransmitted a packet even though the destination had already ACKed it, as if the source assumed the destination hadn’t) show up in high numbers. This is not the case when I capture traffic between other sources.
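If you want to isolate just those problem packets from a capture, here’s a quick sketch using tshark from PowerShell (the filter names are standard Wireshark analysis flags; “capture.pcapng” is a placeholder file name, and tshark is assumed to be on the PATH):

```powershell
# Show only resets, retransmissions and spurious retransmissions
tshark -r capture.pcapng -Y "tcp.flags.reset == 1 || tcp.analysis.retransmission || tcp.analysis.spurious_retransmission"

# Or just count the retransmissions
(tshark -r capture.pcapng -Y "tcp.analysis.retransmission" | Measure-Object).Count
```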

A TCP RST normally couldn’t be considered the issue, because one occurs after every session closure. But in our case the packet retransmissions and the failing communication are resetting the RPC port communication, and that is why these messages are seen; so obviously, we will see this kind of message in both the success and failure cases.

TCP Segment Length

I have noticed that the maximum segment size (MSS) of the destination server, the Red Lion box, is 1280, while the source server’s is 1460. Pinging the destination (with its 1280 MSS) with 1460 bytes and no fragmentation gets a fine response, and the data the remote server responds with has the same length; with the “data.len>1460” filter applied, I can see that ICMP data of 1460 bytes is transmittable both ways. Both the source and destination servers acknowledged that they would communicate using the 1280 MSS value, as they should per application protocol standards; I verified this with the “tcp.len>1200” filter and saw no TCP segment in the application traffic using a segment size higher than 1280. That eliminates MSS size as a possible cause of the packet retransmissions.
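Here’s that fragmentation probe as a repeatable sketch; on Windows, ping’s -f flag sets the Don’t Fragment bit and -l sets the payload length (the destination IP is a placeholder for the Red Lion box):

```powershell
# Probe the path with the DF bit set at a few payload sizes
foreach ($size in 1200, 1280, 1460) {
    Write-Host "Payload $size bytes:"
    ping.exe -f -l $size -n 2 10.0.0.50
}
```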

Port Query Results

ICMP packets are fine; they don’t have any issues. Only FTP/HTTP traffic is affected. This means there are no issues up through the network layer, but at the application/session layer the traffic goes bad. And at times even portqry fails with “Filtered” messages on port 21 from the source to the destination FTP box.
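If you don’t have portqry handy, the built-in Test-NetConnection cmdlet (Windows 8/Server 2012 and later) does the same TCP check; the address below is a placeholder for the FTP box:

```powershell
# Test TCP port 21 from the source server to the Red Lion box
Test-NetConnection -ComputerName 10.0.0.50 -Port 21 -InformationLevel Detailed
```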

Right now I suspect the speed/duplex settings on these switches and VM hosts. Our VM hosts have 10G-capable NICs, and so do the switches. The speed is hard-coded on the Nexus switch interfaces facing the VM hosts, so technically the switches are controlling the speed; there is nothing for me to do on the VM hosts’ speed/duplex settings, and anything I want to modify is left to the Nexus switch. The end device, the Red Lion FTP box, is only 100 Mb capable. I can’t simply blame the source for talking at full 10 Gig speed while the end device fails to respond at the same rate, because even the normal SYN/ACK communication is getting hit by the TCP retransmissions; at the same time, I can’t assume this isn’t the reason. It still needs analysis to rule things out.
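On the VM-host side, checking what the NICs have actually negotiated is at least quick. A sketch (the “Speed & Duplex” display name varies by NIC driver, so it may come back empty on some adapters):

```powershell
# Negotiated link speed and duplex, as reported by the adapters
Get-NetAdapter | Select-Object Name, LinkSpeed, FullDuplex

# Configured (not negotiated) speed/duplex setting, where the driver exposes one
Get-NetAdapterAdvancedProperty -DisplayName "Speed & Duplex" -ErrorAction SilentlyContinue |
    Select-Object Name, DisplayValue
```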

I worked with Cisco, and they say these Nexus switches don’t support the buffering needed here, so a 10 Gig source and a 100 Mb destination don’t work well in the Nexus environment. The alternative they propose is to update the IOS on these Nexus switches, but that’s a tentative solution.

 

—— Update on 23rd Jan 2016 ——

<<We’ve updated the Nexus IOS version to the latest, yet we see the same issues. Still banging my head to get this fixed.>>

 

I will keep on updating this thread as more progress is made… Comments are welcome.

 

Cheers!

Chaladi