Much has been written about SQL Server Always On Availability Groups, but the topic of SQL Server Failover Cluster Instances (FCI) that span both availability zones and regions is far less discussed. However, for organizations that require SQL Server high availability (HA) and disaster recovery (DR) without the added licensing costs of Enterprise Edition, SQL Server FCI remains a powerful and cost-effective solution.
In this article, we will explore how to deploy a resilient SQL Server FCI in Microsoft Azure, leveraging Windows Server Failover Clustering (WSFC) and various Azure services to ensure both high availability and disaster recovery. While deploying an FCI in a single availability zone is relatively straightforward, configuring it to span multiple availability zones—and optionally, multiple regions—introduces a set of unique challenges, including cross-zone and cross-region failover, storage replication, and network latency.
To overcome these challenges, we must first establish a properly configured network foundation that supports multi-region SQL Server FCI deployments. This article includes a comprehensive PowerShell script that automates the necessary networking configuration, ensuring a seamless and resilient infrastructure. This script:
- Creates two virtual networks (vNets) in different Azure paired regions
- Establishes secure peering between these vNets for seamless cross-region communication
- Configures network security groups (NSGs) to control inbound and outbound traffic, ensuring SQL Server and WSFC can function properly
- Associates the NSGs with subnets, enforcing security policies while enabling necessary connectivity
By automating these steps, we lay the groundwork for SQL Server FCI to operate effectively in a multi-region Azure environment. Additionally, we will cover the key technologies that complete the deployment: SIOS DataKeeper for replicated cluster storage, a Cloud Witness for WSFC quorum, and a distributed network name (DNN) for client connectivity. By the end of this discussion, you will have a clear roadmap for architecting a SQL Server FCI deployment that is highly available, disaster resistant, and optimized for minimal downtime across multiple Azure regions.
Pre-requisites
Before deploying a SQL Server Failover Cluster Instance (FCI) across availability zones and regions in Azure, ensure you have the following prerequisites in place:
- Azure subscription with necessary permissions
- You must have an active Azure subscription with sufficient permissions to create virtual machines, manage networking, and configure storage. Specifically, you need Owner or Contributor permissions on the target resource group.
- Access to SQL Server and SIOS DataKeeper installation media
- SQL Server installation media: Ensure you have the SQL Server Standard or Enterprise Edition installation media available. You can download it from the Microsoft Evaluation Center.
- SIOS DataKeeper installation media: You will need access to SIOS DataKeeper Cluster Edition for block-level replication. You can request an evaluation copy from SIOS Technology.
Configuring networking for SQL Server FCI across Azure paired regions
To deploy a SQL Server Failover Cluster Instance (FCI) across availability zones and regions, you need to configure networking appropriately. This section outlines the automated network setup using PowerShell, which includes:
- Creating two virtual networks (vNets) in different Azure paired regions
- Creating subnets – one in the primary region and one in the DR region
- Peering between vNets to enable seamless cross-region communication
- Configuring network security groups (NSGs) to:
- Allow full communication between vNets (essential for SQL and cluster traffic)
- Enable secure remote desktop (RDP) access for management purposes
The PowerShell script provided in this section automates these critical networking tasks, ensuring that your SQL Server FCI deployment has a robust, scalable, and secure foundation. Once the network is in place, we will proceed to the next steps in configuring SQL Server FCI, storage replication, and failover strategies.
# Define Variables
$PrimaryRegion = "East US 2"
$DRRegion = "Central US"
$ResourceGroup = "MySQLFCIResourceGroup"
$PrimaryVNetName = "PrimaryVNet"
$DRVNetName = "DRVNet"
$PrimaryNSGName = "SQLFCI-NSG-Primary"
$DRNSGName = "SQLFCI-NSG-DR"
$PrimarySubnet1Name = "SQLSubnet1"
$DRSubnetName = "DRSQLSubnet"
$PrimaryAddressSpace = "10.1.0.0/16"
$PrimarySubnet1Address = "10.1.1.0/24"
$DRAddressSpace = "10.2.0.0/16"
$DRSubnetAddress = "10.2.1.0/24"
$SourceRDPAllowedIP = "98.110.113.146/32" # Replace with your actual IP
$DNSServer = "10.1.1.102" #set this to your Domain controller
# Create Resource Group if not exists
Write-Output "Creating Resource Group ($ResourceGroup) if not exists..."
New-AzResourceGroup -Name $ResourceGroup -Location $PrimaryRegion -ErrorAction SilentlyContinue
# Create Primary vNet with a subnet
Write-Output "Creating Primary VNet ($PrimaryVNetName) in $PrimaryRegion..."
$PrimaryVNet = New-AzVirtualNetwork -ResourceGroupName $ResourceGroup -Location $PrimaryRegion -Name $PrimaryVNetName -AddressPrefix $PrimaryAddressSpace -DnsServer $DNSServer
$PrimarySubnet1 = Add-AzVirtualNetworkSubnetConfig -Name $PrimarySubnet1Name -AddressPrefix $PrimarySubnet1Address -VirtualNetwork $PrimaryVNet
Set-AzVirtualNetwork -VirtualNetwork $PrimaryVNet
# Create DR vNet with a subnet
Write-Output "Creating DR VNet ($DRVNetName) in $DRRegion..."
$DRVNet = New-AzVirtualNetwork -ResourceGroupName $ResourceGroup -Location $DRRegion -Name $DRVNetName -AddressPrefix $DRAddressSpace -DnsServer $DNSServer
$DRSubnet = Add-AzVirtualNetworkSubnetConfig -Name $DRSubnetName -AddressPrefix $DRSubnetAddress -VirtualNetwork $DRVNet
Set-AzVirtualNetwork -VirtualNetwork $DRVNet
# Configure Peering Between vNets
Write-Output "Configuring VNet Peering..."
$PrimaryVNet = Get-AzVirtualNetwork -Name $PrimaryVNetName -ResourceGroupName $ResourceGroup
$DRVNet = Get-AzVirtualNetwork -Name $DRVNetName -ResourceGroupName $ResourceGroup
# Create Peering from Primary to DR
Write-Output "Creating Peering from $PrimaryVNetName to $DRVNetName..."
$PrimaryToDRPeering = Add-AzVirtualNetworkPeering -Name "PrimaryToDR" -VirtualNetwork $PrimaryVNet -RemoteVirtualNetworkId $DRVNet.Id
Start-Sleep -Seconds 10
# Create Peering from DR to Primary
Write-Output "Creating Peering from $DRVNetName to $PrimaryVNetName..."
$DRToPrimaryPeering = Add-AzVirtualNetworkPeering -Name "DRToPrimary" -VirtualNetwork $DRVNet -RemoteVirtualNetworkId $PrimaryVNet.Id
Start-Sleep -Seconds 10
# Retrieve and update Peering settings
$PrimaryToDRPeering = Get-AzVirtualNetworkPeering -ResourceGroupName $ResourceGroup -VirtualNetworkName $PrimaryVNetName -Name "PrimaryToDR"
$DRToPrimaryPeering = Get-AzVirtualNetworkPeering -ResourceGroupName $ResourceGroup -VirtualNetworkName $DRVNetName -Name "DRToPrimary"
$PrimaryToDRPeering.AllowVirtualNetworkAccess = $true
$PrimaryToDRPeering.AllowForwardedTraffic = $true
$PrimaryToDRPeering.UseRemoteGateways = $false
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $PrimaryToDRPeering
$DRToPrimaryPeering.AllowVirtualNetworkAccess = $true
$DRToPrimaryPeering.AllowForwardedTraffic = $true
$DRToPrimaryPeering.UseRemoteGateways = $false
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $DRToPrimaryPeering
Write-Output "VNet Peering established successfully."
# Create Network Security Groups (NSGs)
Write-Output "Creating NSGs for both regions..."
$PrimaryNSG = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroup -Location $PrimaryRegion -Name $PrimaryNSGName
$DRNSG = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroup -Location $DRRegion -Name $DRNSGName
# Define NSG Rules (Allow VNet communication and RDP)
$Rule1 = New-AzNetworkSecurityRuleConfig -Name "AllowAllVNetTraffic" -Priority 100 -Direction Inbound -Access Allow -Protocol * `
-SourceAddressPrefix VirtualNetwork -SourcePortRange * -DestinationAddressPrefix VirtualNetwork -DestinationPortRange *
$Rule2 = New-AzNetworkSecurityRuleConfig -Name "AllowRDP" -Priority 200 -Direction Inbound -Access Allow -Protocol TCP `
-SourceAddressPrefix $SourceRDPAllowedIP -SourcePortRange * -DestinationAddressPrefix "*" -DestinationPortRange 3389
# Apply Rules to NSGs
$PrimaryNSG.SecurityRules = @($Rule1, $Rule2)
$DRNSG.SecurityRules = @($Rule1, $Rule2)
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $PrimaryNSG
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $DRNSG
Write-Output "NSGs created and configured successfully."
# Associate NSGs with Subnets
Write-Output "Associating NSGs with respective subnets..."
$PrimaryVNet = Get-AzVirtualNetwork -Name $PrimaryVNetName -ResourceGroupName $ResourceGroup
$DRVNet = Get-AzVirtualNetwork -Name $DRVNetName -ResourceGroupName $ResourceGroup
$PrimaryNSG = Get-AzNetworkSecurityGroup -Name $PrimaryNSGName -ResourceGroupName $ResourceGroup
$DRNSG = Get-AzNetworkSecurityGroup -Name $DRNSGName -ResourceGroupName $ResourceGroup
$PrimarySubnet1 = Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $PrimaryVNet -Name $PrimarySubnet1Name `
-AddressPrefix $PrimarySubnet1Address -NetworkSecurityGroup $PrimaryNSG
$DRSubnet = Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $DRVNet -Name $DRSubnetName `
-AddressPrefix $DRSubnetAddress -NetworkSecurityGroup $DRNSG
Set-AzVirtualNetwork -VirtualNetwork $PrimaryVNet
Set-AzVirtualNetwork -VirtualNetwork $DRVNet
Write-Output "NSGs successfully associated with all subnets!"
Write-Output "Azure network setup completed successfully!"
Deploying SQL Server virtual machines in Azure with high availability
To achieve HA and DR, we deploy SQL Server Failover Cluster Instance (FCI) nodes across multiple Availability Zones (AZs) within Azure regions. By distributing the SQL Server nodes across separate AZs, we qualify for Azure’s 99.99% SLA for virtual machines, ensuring resilience against hardware failures and zone outages.
Each SQL Server virtual machine (VM) is assigned a static private and public IP address, ensuring stable connectivity for internal cluster communication and remote management. Additionally, each SQL Server node is provisioned with an extra 20GB Premium SSD, which will be used by SIOS DataKeeper Cluster Edition to create replicated cluster storage across AZs and regions. Because Azure does not natively provide shared storage spanning multiple AZs or regions, SIOS DataKeeper enables block-level replication, ensuring that all clustered SQL Server nodes have synchronized copies of the data, allowing for seamless failover with no data loss within the region and minimal data loss across regions.
In a production environment, multiple domain controllers (DCs) would typically be deployed, spanning both AZs and regions to ensure redundancy and fault tolerance for Active Directory services. However, for the sake of this example, we will keep it simple and deploy a single domain controller (DC1) in Availability Zone 3 in East US 2 to provide the necessary authentication and cluster quorum support.
The PowerShell script below automates the deployment of these SQL Server VMs, ensuring that:
- SQLNode1 is deployed in Availability Zone 1 in East US 2
- SQLNode2 is deployed in Availability Zone 2 in East US 2
- SQLNode3 is deployed in Availability Zone 1 in Central US
- DC1 is deployed in Availability Zone 3 in East US 2
By following this deployment model, SQL Server FCI can span multiple AZs and even multiple regions, providing a highly available and disaster-resistant database solution.
# Define Variables
$ResourceGroup = "MySQLFCIResourceGroup"
$PrimaryRegion = "East US 2"
$DRRegion = "Central US"
$VMSize = "Standard_D2s_v3" # VM Size
$AdminUsername = "sqladmin"
$AdminPassword = ConvertTo-SecureString "YourSecurePassword123!" -AsPlainText -Force # Replace with a strong password before running
$Credential = New-Object System.Management.Automation.PSCredential ($AdminUsername, $AdminPassword)
# Get Virtual Networks
$PrimaryVNet = Get-AzVirtualNetwork -Name "PrimaryVNet" -ResourceGroupName $ResourceGroup
$DRVNet = Get-AzVirtualNetwork -Name "DRVNet" -ResourceGroupName $ResourceGroup
# Get Subnets
$PrimarySubnet1 = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $PrimaryVNet -Name "SQLSubnet1"
$DRSubnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $DRVNet -Name "DRSQLSubnet"
# Define Static Private IPs
$IP1 = "10.1.1.100" # SQLNode1 in East US, AZ1
$IP2 = "10.1.1.101" # SQLNode2 in East US, AZ2
$IP3 = "10.2.1.100" # SQLNode3 in West US (Availability Zones may not be supported)
$IP4 = "10.1.1.102" # DC1 in East US, AZ3 (No extra disk)
# Function to Create a VM with Static Private & Public IP, Availability Zone, and attach an extra disk (except for DC1)
Function Create-SQLVM {
param (
[string]$VMName,
[string]$Location,
[string]$SubnetId,
[string]$StaticPrivateIP,
[string]$AvailabilityZone,
[bool]$AttachExtraDisk
)
# Create Public IP Address (Static)
Write-Output "Creating Public IP for $VMName..."
$PublicIP = New-AzPublicIpAddress -ResourceGroupName $ResourceGroup -Location $Location `
-Name "$VMName-PublicIP" -Sku Standard -AllocationMethod Static
# Create Network Interface with Static Private & Public IP
Write-Output "Creating NIC for $VMName in $Location (Zone $AvailabilityZone)..."
$NIC = New-AzNetworkInterface -ResourceGroupName $ResourceGroup -Location $Location `
-Name "$VMName-NIC" -SubnetId $SubnetId -PrivateIpAddress $StaticPrivateIP -PublicIpAddressId $PublicIP.Id
# Create VM Configuration with Availability Zone
Write-Output "Creating VM $VMName in $Location (Zone $AvailabilityZone)..."
$VMConfig = New-AzVMConfig -VMName $VMName -VMSize $VMSize -Zone $AvailabilityZone | `
Set-AzVMOperatingSystem -Windows -ComputerName $VMName -Credential $Credential | `
Set-AzVMSourceImage -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" -Skus "2022-Datacenter" -Version "latest" | `
Add-AzVMNetworkInterface -Id $NIC.Id | `
Set-AzVMOSDisk -CreateOption FromImage
# Conditionally Attach an Extra 20 GB Premium SSD LRS Disk in the same Availability Zone (Not for DC1)
if ($AttachExtraDisk) {
Write-Output "Attaching extra 20GB Premium SSD disk to $VMName in Zone $AvailabilityZone..."
$DiskConfig = New-AzDiskConfig -SkuName "Premium_LRS" -Location $Location -Zone $AvailabilityZone -CreateOption Empty -DiskSizeGB 20
$DataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName "$VMName-Disk" -Disk $DiskConfig
$VMConfig = Add-AzVMDataDisk -VM $VMConfig -Name "$VMName-Disk" -CreateOption Attach -ManagedDiskId $DataDisk.Id -Lun 1
}
# Deploy VM
New-AzVM -ResourceGroupName $ResourceGroup -Location $Location -VM $VMConfig
}
# Deploy SQL Nodes in the specified Availability Zones with Static Public IPs
Create-SQLVM -VMName "SQLNode1" -Location $PrimaryRegion -SubnetId $PrimarySubnet1.Id -StaticPrivateIP $IP1 -AvailabilityZone "1" -AttachExtraDisk $true
Create-SQLVM -VMName "SQLNode2" -Location $PrimaryRegion -SubnetId $PrimarySubnet1.Id -StaticPrivateIP $IP2 -AvailabilityZone "2" -AttachExtraDisk $true
Create-SQLVM -VMName "SQLNode3" -Location $DRRegion -SubnetId $DRSubnet.Id -StaticPrivateIP $IP3 -AvailabilityZone "1" -AttachExtraDisk $true # West US AZ fallback
# Deploy DC1 in East US, AZ3 with Static Public IP but without an extra disk
Create-SQLVM -VMName "DC1" -Location $PrimaryRegion -SubnetId $PrimarySubnet1.Id -StaticPrivateIP $IP4 -AvailabilityZone "3" -AttachExtraDisk $false
Write-Output "All VMs have been successfully created with Static Public & Private IPs in their respective Availability Zones!"
Completing the SQL Server FCI deployment
With the SQL Server virtual machines deployed across multiple AZs and regions, the next steps involve setting up Active Directory, enabling clustering, configuring replicated storage, and installing SQL Server FCI. These steps will ensure HA and DR for SQL Server across Azure regions.
1. Create a domain on DC1
The domain controller (DC1) will provide authentication and Active Directory services for the SQL Server Failover Cluster. To set up the Active Directory Domain Services (AD DS) on DC1, we will:
- Install the Active Directory Domain Services role.
- Promote DC1 to a domain controller.
- Create a new domain (e.g., datakeeper.local).
- Configure DNS settings to ensure domain resolution.
Once completed, this will allow the cluster nodes to join the domain and participate in Windows Server Failover Clustering (WSFC).
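If you prefer to script these steps rather than use Server Manager, a minimal sketch looks like the following. It assumes the example domain name datakeeper.local used later in this article, and DC1 reboots automatically once promotion completes.
# Run elevated on DC1. The domain name and NetBIOS name are example values.
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
# Promote DC1, create the new forest, and install DNS. You will be prompted for the DSRM password.
Install-ADDSForest -DomainName "datakeeper.local" -DomainNetbiosName "DATAKEEPER" -InstallDns `
-SafeModeAdministratorPassword (Read-Host -AsSecureString "Enter DSRM password") -Force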
2. Join SQLNode1, SQLNode2, and SQLNode3 to the domain
Now that DC1 is running Active Directory, we will join SQLNode1, SQLNode2, and SQLNode3 to the new domain (e.g., datakeeper.local). This is a critical step, as Windows Server Failover Clustering (WSFC) and SQL Server FCI require domain membership for proper authentication and communication.
Steps:
- Join each SQL node to the Active Directory domain.
- Restart the servers to apply changes.
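A minimal sketch of the join, run on each SQL node, follows. The domain name is the example used above, and the credential prompt assumes a domain admin account created on DC1; because both vNets already point DNS at DC1 (10.1.1.102), the nodes can resolve the new domain.
# Run on SQLNode1, SQLNode2, and SQLNode3. Domain name and account are placeholders.
$DomainCred = Get-Credential -Message "Domain admin account (e.g., DATAKEEPER\Administrator)"
Add-Computer -DomainName "datakeeper.local" -Credential $DomainCred -Restart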
3. Enable Windows Server Failover Clustering (WSFC)
With all nodes now part of the Active Directory domain, the next step is to install and enable WSFC on all three SQL nodes. This feature provides the foundation for SQL Server FCI, allowing for automatic failover between nodes.
Steps:
1. Install the Failover Clustering feature on all SQL nodes.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
2. Create the cluster (with no shared storage yet).
New-Cluster -Name SQLCluster -Node SQLNode1,SQLNode2,SQLNode3 -NoStorage
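If you would rather drive both steps from a single session, the hedged variant below installs the feature on all three nodes remotely and supplies one unused static address per subnet, which is helpful for a multi-subnet cluster. The 10.1.1.110 and 10.2.1.110 addresses are placeholders; substitute free addresses from your own subnets.
# Install the Failover Clustering feature on all three nodes from one session.
Invoke-Command -ComputerName SQLNode1, SQLNode2, SQLNode3 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}
# Create the multi-subnet cluster with one unused static IP per subnet (placeholder addresses).
New-Cluster -Name SQLCluster -Node SQLNode1,SQLNode2,SQLNode3 -NoStorage `
-StaticAddress 10.1.1.110, 10.2.1.110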
4. Create a cloud storage account for Cloud Witness
To ensure quorum resiliency, we will configure a Cloud Witness as the cluster quorum mechanism. This Azure storage account-based witness is a lightweight, highly available solution that ensures the cluster maintains quorum even in the event of an AZ or regional failure.
Steps:
1. Create an Azure storage account in a third, independent region.
New-AzStorageAccount -ResourceGroupName "MySQLFCIResourceGroup" `
-Name "cloudwitnessstorageacct" `
-Location "westus3" `
-SkuName "Standard_LRS" `
-Kind StorageV2
2. Get the key that will be used to create the Cloud Witness.
Get-AzStorageAccountKey -ResourceGroupName "MySQLFCIResourceGroup" -Name "cloudwitnessstorageacct"
KeyName Value Permissions CreationTime
------- ----- ----------- ------------
key1 dBIdjU/lu+86j8zcM1tdg/j75lZrB9sVKHUKhBEneHyMOxYTeZhtVeuzt7MtBOO9x/8QtYlrbNYY+AStddZZOg== Full 3/28/2025 2:38:00 PM
key2 54W5NdJ6xbFwjTrF0ryIOL6M7xGOylc1jxnD8JQ94ZOy5dQOo3BAJB2TYzb22KaDeYrv09m6xVsW+AStBxRq6w== Full 3/28/2025 2:38:00 PM
3. Configure the WSFC cluster quorum settings to use Cloud Witness as the tie-breaker. This PowerShell script can be run on any of the cluster nodes.
$parameters = @{
CloudWitness = $true
AccountName = 'cloudwitnessstorageacct'
AccessKey = 'dBIdjU/lu+86j8zcM1tdg/j75lZrB9sVKHUKhBEneHyMOxYTeZhtVeuzt7MtBOO9x/8QtYlrbNYY+AStddZZOg=='
Endpoint = 'core.windows.net'
}
Set-ClusterQuorum @parameters
5. Validate the configuration
With WSFC enabled, the cluster created, and Cloud Witness configured, run cluster validation to confirm that all nodes meet the requirements for a supported failover cluster.
Test-Cluster
Once the base cluster is operational, we move on to configuring storage replication with SIOS DataKeeper.
6. Install SIOS DataKeeper on all three cluster nodes
Because Azure does not support shared storage across AZs and regions, we use SIOS DataKeeper Cluster Edition to replicate block-level storage and create a stretched cluster.
Steps:
- Install SIOS DataKeeper Cluster Edition on SQLNode1, SQLNode2, and SQLNode3.
- Restart the nodes after installation.
- Ensure the SIOS DataKeeper service is running on all nodes.
7. Format the 20GB Disk as the F: drive
Each SQL node has an additional 20GB Premium SSD, which will be used for SQL Server data storage replication.
Steps:
- Initialize the extra 20GB disk on SQLNode1.
- Format it as the F: drive.
- Assign the same drive letter (F:) on SQLNode2 and SQLNode3 to maintain consistency.
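These steps can also be scripted. The sketch below assumes the 20GB data disk is the only uninitialized (RAW) disk on each node; run it on SQLNode1 first, then on the other nodes so all three use the F: drive letter.
# Initialize the extra data disk, create a single partition as F:, and format it NTFS.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -DriveLetter F -UseMaximumSize |
Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData" -Confirm:$false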
8. Create the DataKeeper job to replicate the F: drive
Now that the F: drive is configured, we create a DataKeeper replication job to synchronize data between the nodes:
- Synchronous replication between SQLNode1 and SQLNode2 (for low-latency, intra-region failover).
- Asynchronous replication between SQLNode1 and SQLNode3 (for cross-region disaster recovery).
Steps:
- Launch DataKeeper and create a new replication job.
- Configure synchronous replication for the F: drive between SQLNode1 and SQLNode2.
- Configure asynchronous replication between SQLNode1 and SQLNode3.
The screenshots below walk through the process of creating the DataKeeper job that replicates the F: drive between the three servers.
[Screenshots (SIOS): creating the DataKeeper job and the initial mirror of the F: drive]
To add the second target, right-click on the existing Job and choose “Create a Mirror.”
[Screenshots (SIOS): adding SQLNode3 as a second mirror target]
Once replication is active, SQLNode2 and SQLNode3 will have an identical copy of the data stored on SQLNode1’s F: drive.
If you look in Failover Cluster Manager, you will see “DataKeeper Volume F” in Available Storage. Failover clustering will treat this like it is a regular shared disk.
[Screenshot (SIOS): DataKeeper Volume F listed under Available Storage in Failover Cluster Manager]
9. Install SQL Server on SQLNode1 as a new clustered instance
With WSFC configured and storage replication active, we can now install SQL Server FCI.
Steps:
- On SQLNode1, launch the SQL Server installer.
- Choose “New SQL Server failover cluster installation.”
- Complete the installation and restart SQLNode1.
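The same installation can be scripted from the command line. The sketch below is an illustrative unattended equivalent, not the exact procedure shown in the screenshots; the instance name, network name, IP address, data directory, and service accounts are placeholders to adjust for your environment.
# Run from the SQL Server media root on SQLNode1. All names, addresses, and accounts are placeholders.
.\setup.exe /QS /ACTION=InstallFailoverCluster /FEATURES=SQLENGINE `
/INSTANCENAME="MSSQLSERVER" /FAILOVERCLUSTERNETWORKNAME="SQLFCI" `
/FAILOVERCLUSTERDISKS="DataKeeper Volume F" `
/FAILOVERCLUSTERIPADDRESSES="IPv4;10.1.1.120;Cluster Network 1;255.255.255.0" `
/INSTALLSQLDATADIR="F:\MSSQL" `
/SQLSVCACCOUNT="DATAKEEPER\sqlsvc" /SQLSVCPASSWORD="<password>" `
/AGTSVCACCOUNT="DATAKEEPER\sqlagt" /AGTSVCPASSWORD="<password>" `
/SQLSYSADMINACCOUNTS="DATAKEEPER\sqladmin" /IACCEPTSQLSERVERLICENSETERMS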
You will notice during the installation that the "DataKeeper Volume F" is presented as an available storage location.
[Screenshot (SIOS): SQL Server setup showing DataKeeper Volume F as an available storage location]
10. Install SQL Server on SQLNode2 and SQLNode3 (Add Node to Cluster)
To complete the SQL Server FCI, we must add the remaining nodes to the cluster.
Steps:
- Run SQL Server setup on SQLNode2 and SQLNode3.
- Choose “Add node to an existing SQL Server failover cluster.”
- Validate cluster settings and complete the installation.
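An illustrative unattended equivalent for the add-node step follows; the service accounts and passwords are placeholders and must match those used when installing the instance on SQLNode1.
# Run from the SQL Server media root on SQLNode2 and SQLNode3.
.\setup.exe /QS /ACTION=AddNode /INSTANCENAME="MSSQLSERVER" `
/SQLSVCACCOUNT="DATAKEEPER\sqlsvc" /SQLSVCPASSWORD="<password>" `
/AGTSVCACCOUNT="DATAKEEPER\sqlagt" /AGTSVCPASSWORD="<password>" `
/CONFIRMIPDEPENDENCYCHANGE=0 /IACCEPTSQLSERVERLICENSETERMS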
Once SQL Server is installed on all three cluster nodes, Failover Cluster Manager will look like this.
[Screenshot (SIOS): Failover Cluster Manager after SQL Server is installed on all three nodes]
11. Update SQL Server to use a distributed network name (DNN)
By default, SQL Server FCI requires an Azure load balancer (ALB) to manage client connections. However, Azure now supports distributed network names (DNNs), eliminating the need for an ALB.
Steps:
- Update SQL Server FCI to use DNN instead of a traditional floating IP.
- Ensure name resolution works across all nodes.
- Validate client connectivity to SQL Server using DNN.
Detailed instructions on how to update SQL Server FCI to use DNN can be found in the Microsoft documentation.
Add-ClusterResource -Name sqlserverdnn -ResourceType "Distributed Network Name" -Group "SQL Server (MSSQLSERVER)"
Get-ClusterResource -Name sqlserverdnn | Set-ClusterParameter -Name DnsName -Value FCIDNN
Start-ClusterResource -Name sqlserverdnn
You can now connect to the clustered SQL instance using the DNN “FCIDNN.”
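A quick way to confirm connectivity through the DNN from any domain-joined machine, assuming the sqlcmd utility is installed and your Windows account has a SQL login:
sqlcmd -S FCIDNN -E -Q "SELECT @@SERVERNAME AS ActiveNode"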
12. Install SQL Server Management Studio (SSMS) on all three nodes
For easier SQL Server administration, install SQL Server Management Studio (SSMS) on all three nodes.
Steps:
- Download the latest version of SSMS from Microsoft.
- Install SSMS on SQLNode1, SQLNode2, and SQLNode3.
- Connect to the SQL Server cluster using DNN.
13. Test failover and switchover scenarios
Finally, we validate HA and DR functionality by testing failover and switchover scenarios:
- Perform a planned failover (manual switchover) from SQLNode1 to SQLNode2.
- Simulate an AZ failure and observe automatic failover.
- Test cross-region failover from SQLNode1 (East US 2) to SQLNode3 (Central US).
This confirms that SQL Server FCI can seamlessly failover within AZs and across regions, ensuring minimal downtime and data integrity.
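A planned switchover can also be driven from PowerShell. The role name below assumes a default instance; adjust it to match the role shown in Failover Cluster Manager.
# Move the SQL Server role to SQLNode2 (planned switchover) and confirm the new owner.
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node SQLNode2
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" | Select-Object Name, OwnerNode, State
# Repeat toward SQLNode3 to exercise the cross-region path, then move the role back to SQLNode1.
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node SQLNode3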
Four nines uptime
By following these steps, we have successfully deployed, configured, and tested a multi-AZ, multi-region SQL Server FCI in Azure. This architecture provides 99.99% uptime, seamless failover, and disaster recovery capabilities, making it ideal for business-critical applications.
Dave Bermingham is senior technical evangelist at SIOS Technology.
—
New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to [email protected].