Topic Last Modified: 2011-03-02

The metropolitan site resiliency topology can include the following server roles.

Front End Pool

This pool hosts all Lync Server users. Each site, North and South, contains four identically configured Front End Servers. The Back-End Database is deployed as a two-node Active/Passive geographically dispersed cluster running SQL Server 2008 on the Windows Server 2008 R2 Failover Clustering service. Synchronous data replication is required between the two Back-End Database Servers.
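As an illustrative sketch only, the underlying two-node cluster could be formed with the Windows Server 2008 R2 FailoverClusters PowerShell module. The node names and addresses below are placeholders, the storage-level synchronous replication must come from your storage vendor, and SQL Server 2008 is afterward installed as a failover cluster instance through SQL Server setup:

    Import-Module FailoverClusters

    # Validate the two candidate nodes before clustering them (names are hypothetical).
    Test-Cluster -Node sqlbe-north.contoso.com, sqlbe-south.contoso.com

    # Form the two-node, multi-subnet (geographically dispersed) cluster,
    # with one static address per site subnet.
    New-Cluster -Name LyncBEClust -Node sqlbe-north.contoso.com, sqlbe-south.contoso.com `
        -StaticAddress 10.10.1.50, 10.20.1.50

    # Confirm that both nodes joined the cluster.
    Get-ClusterNode -Cluster LyncBEClust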

In our test topology, the Mediation Server was collocated with the Front End Servers. Topologies with a stand-alone Mediation Server are also supported.

Our test topology used DNS load balancing to balance the SIP traffic in the pool, with hardware load balancers deployed for the HTTP traffic.
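To illustrate the split, a DNS-load-balanced pool FQDN carries one A record per Front End Server, while the web services FQDN resolves to the hardware load balancer virtual IP. A minimal sketch, run from PowerShell against a Windows DNS server with dnscmd.exe; the server name, zone, pool names, and addresses are all placeholders, not values from our topology:

    # One A record per Front End Server under the pool FQDN spreads SIP traffic
    # across the pool.
    dnscmd.exe dc01 /RecordAdd contoso.com pool01 A 10.10.1.11
    dnscmd.exe dc01 /RecordAdd contoso.com pool01 A 10.10.1.12
    dnscmd.exe dc01 /RecordAdd contoso.com pool01 A 10.20.1.11
    dnscmd.exe dc01 /RecordAdd contoso.com pool01 A 10.20.1.12

    # HTTP traffic uses a separate web services FQDN that resolves to the
    # hardware load balancer VIP rather than to the individual servers.
    dnscmd.exe dc01 /RecordAdd contoso.com webpool01 A 10.10.1.100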

Topologies that use only hardware load balancers to balance all types of traffic are also supported for site resiliency.

A/V Conferencing Pool

We deployed a single A/V Conferencing pool with four A/V Conferencing Servers, two in each site.

Director Pool

We deployed a single Director pool with four Directors, two in each site.

Edge Pool

The Edge Servers ran all services (Access Edge service, A/V Conferencing Edge service, and Web Conferencing Edge service), but we tested them only for remote-user scenarios. Federation and public IM connectivity are beyond the scope of this document.

We recommend DNS load balancing for your Edge pool, but we also support using hardware load balancers. If you use hardware load balancers for the Edge pool, the load balancer at one site serves as the primary and responds to requests with the virtual IP address of the appropriate Edge service. If the primary load balancer is unavailable, the secondary hardware load balancer at the other site takes over. Each site has its own IP subnet; the perimeter networks were not stretched across the North and South sites.
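If you use DNS load balancing for the Edge pool, each external edge service FQDN resolves to one public IP address per Edge Server. A minimal external DNS sketch, again with PowerShell calling dnscmd.exe; the DNS server name, service FQDNs, and public addresses are placeholders:

    # One A record per Edge Server for each edge service FQDN.
    dnscmd.exe extdns01 /RecordAdd contoso.com sip     A 131.107.1.10
    dnscmd.exe extdns01 /RecordAdd contoso.com sip     A 131.107.2.10
    dnscmd.exe extdns01 /RecordAdd contoso.com webconf A 131.107.1.11
    dnscmd.exe extdns01 /RecordAdd contoso.com webconf A 131.107.2.11
    dnscmd.exe extdns01 /RecordAdd contoso.com av      A 131.107.1.12
    dnscmd.exe extdns01 /RecordAdd contoso.com av      A 131.107.2.12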

Group Chat Servers

Each site hosts both a Channel service and a Lookup service, but these services can be active in only one of the sites at a time. The Channel service and the Lookup service in the other site must be stopped or disabled. In the event of site failover, manual intervention is required to start these services at the failover site.
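As a hedged illustration of that manual step, the services can be started from PowerShell at the failover site. The display-name patterns below are assumptions, so confirm the exact service names on your own Group Chat servers first:

    # List the Group Chat services on this server and their current state.
    Get-Service -DisplayName "*Group Chat*" | Format-Table Name, DisplayName, Status

    # At the failover site, start the Channel and Lookup services; at the
    # failed site they must remain stopped or disabled.
    Start-Service -DisplayName "*Group Chat*Channel*"
    Start-Service -DisplayName "*Group Chat*Lookup*"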

Each site also hosts a Compliance Server, but only one of these servers can be active at a time. In the event of site failover and failback, manual intervention is required to restore the service. For details, see Backing Up the Compliance Server in the Operations documentation.

We deployed the Group Chat back-end database as a two-node Active/Passive geographically dispersed cluster running SQL Server 2008 on Windows Server 2008 R2 Failover Clustering. Data replication between the two back-end database servers must be synchronous. A single database instance is used for both Group Chat and compliance data.

Monitoring Server and Archiving Server

For Monitoring Server and Archiving Server, we recommend a hot standby deployment. Deploy these server roles in both sites, on a single server in each site. Only one of these servers is active, and the pools in your deployment are all associated with that active server. The other server is deployed and installed, but not associated with any pool.

If the primary server becomes unavailable, you use Topology Builder to manually associate the pools with the standby server, which then becomes the primary server.
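The re-association itself happens in Topology Builder. As a small verification sketch, after publishing the change you can confirm the result from the Lync Server Management Shell:

    # Confirm which Monitoring and Archiving servers the published
    # topology now defines.
    Get-CsService -MonitoringServer
    Get-CsService -ArchivingServer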

File Server Cluster

We deployed a file server as a two-node geographically dispersed cluster resource using Windows Server 2008 R2 Failover Clustering. Synchronous data replication was required. Any Lync Server function that requires a file share and is split across the two sites must use this file server cluster (a setup sketch follows the list). This includes the following:

  • Meeting content location

  • Meeting metadata location

  • Meeting archive location

  • Address Book Server file store

  • Application data store

  • Client Update data store

  • Group Chat compliance file repository

  • Group Chat upload files location
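As a sketch of how such a clustered file server might be added on top of an existing two-node geographically dispersed cluster, using the Windows Server 2008 R2 FailoverClusters module; the cluster, role, disk, and address values are all hypothetical:

    Import-Module FailoverClusters

    # Add a highly available file server role to the existing geo cluster,
    # with one static address per site subnet.
    Add-ClusterFileServerRole -Cluster LyncFSClust -Name lyncfs `
        -Storage "Cluster Disk 1" -StaticAddress 10.10.1.60, 10.20.1.60

    # Create the share on the clustered disk from the node that owns it; the
    # Lync Server file stores then point at \\lyncfs\LyncShare. Restrict the
    # share permissions appropriately in production.
    net share "LyncShare=F:\LyncShare" "/GRANT:Everyone,FULL"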

Reverse Proxy

A reverse proxy server is deployed at each site. In our test topology, these servers ran Microsoft Forefront Threat Management Gateway and operated independently of each other. A hardware load balancer was deployed at each site.