Included in Windows Server 2008 is the ability to hot swap core hardware components, such as memory, processors, and PCI adapter cards, on a server that supports this feature. In an IT environment where zero downtime means that an IT administrator cannot even shut down a system to replace failed components, having hot-swappable capabilities built in to the operating system helps organizations minimize system downtime.
In Windows Server 2008, with properly supported hardware, failed memory can be swapped out while the server is running. In addition, processor boards can be hot swapped, and PCI adapters such as network adapters or communications adapters can be added or removed from the system. Many IT operations already enjoy some of these capabilities, as several server hardware vendors have provided plug-ins to Windows to support this type of functionality.
However, with this capability now built in to Windows Server 2008, an IT professional can perform the hot swap, and both the operating system and the applications running on it will acknowledge the hardware changes without the use of special add-in software components. SMB2 is a protocol that handles the transfer of files between systems. Effectively, SMB2 streamlines file communications and, through a larger communications buffer, is able to reduce the number of round-trips needed when transmitting data between systems.
For the old-timers reading this chapter, it is analogous to the difference between the copy command and the xcopy command in DOS. The copy command reads, writes, reads, writes information. The xcopy command reads, reads, reads information and then writes, writes, writes the information. Because more information is read into a buffer and transferred in bulk, the information is transmitted significantly faster.
Most users on a high-speed local area network (LAN) won't notice the improvements when opening and saving files out of something like Microsoft Office against a Windows server; however, users who copy large image files or datasets between systems will find the information copying 10 to 30 times faster. The performance improvement is very noticeable in wide area network (WAN) situations on networks with high latency. Because a typical transfer of files requires short read and write segments of data, a file that could take minutes to transfer across a WAN can transfer in seconds between SMB2-connected systems because the round-trip chatter is drastically reduced.
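To see why reducing round trips matters so much on a high-latency link, the following back-of-the-envelope sketch compares a transfer made with many small requests against one made with fewer, larger buffered requests. The file size, request sizes, latency, and bandwidth are illustrative assumptions, not actual SMB or SMB2 parameters.

```python
# Rough model: each request costs one WAN round trip, plus time on the wire.
# All figures below are illustrative assumptions, not SMB/SMB2 protocol values.

def transfer_time(file_size, request_size, round_trip_s, bandwidth_bps):
    """Estimate total time when requests are issued one after another."""
    round_trips = -(-file_size // request_size)      # ceiling division
    wire_time = file_size * 8 / bandwidth_bps        # serialization time on the link
    return round_trips * round_trip_s + wire_time

FILE_SIZE = 10 * 1024 * 1024      # 10 MB file
LATENCY = 0.05                    # 50 ms WAN round trip
BANDWIDTH = 10_000_000            # 10 Mbit/s link

chatty = transfer_time(FILE_SIZE, 4 * 1024, LATENCY, BANDWIDTH)       # 4 KB requests
buffered = transfer_time(FILE_SIZE, 1024 * 1024, LATENCY, BANDWIDTH)  # 1 MB requests

print(f"small requests:  {chatty:.0f} s")    # ~136 s, dominated by round-trip chatter
print(f"larger requests: {buffered:.0f} s")  # ~9 s, mostly just wire time
```

The same arithmetic shows why the gain is barely visible on a low-latency LAN: with a round trip of a fraction of a millisecond, the chatter term all but disappears.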
For SMB2 to work effectively, the systems on both ends need to be Windows Server 2008 systems, Windows Vista systems, or a combination of the two. In Windows Server 2008, the Session Manager Subsystem (smss.exe) can run multiple instances so that user sessions are initialized in parallel. In the past with Windows Server 2003 or earlier, there was only a single instance of smss.exe, so user sessions had to be initialized one after another.
With parallel processing of sessions, technologies like Windows Terminal Services greatly benefit from this enhancement. Rather than having seven Terminal Services clients queued up to log on and run thin client sessions, on an eight-core processor server, each of the seven client sessions can simultaneously log on and run applications at processor speed.
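As a rough illustration of the difference between queuing sessions serially and initializing them in parallel, here is a generic concurrency sketch; it is not the smss.exe mechanism itself, and the 0.5-second per-session cost is an invented placeholder.

```python
# Generic serial-vs-parallel sketch; not the actual smss.exe implementation.
import time
from concurrent.futures import ThreadPoolExecutor

def initialize_session(session_id):
    """Stand-in for the work of setting up one user session."""
    time.sleep(0.5)                       # pretend each session takes 0.5 s
    return f"session {session_id} ready"

sessions = range(7)                        # seven Terminal Services clients

start = time.perf_counter()
[initialize_session(s) for s in sessions]  # queued up one at a time
print(f"serial:   {time.perf_counter() - start:.1f} s")   # ~3.5 s

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:            # one worker per core
    list(pool.map(initialize_session, sessions))
print(f"parallel: {time.perf_counter() - start:.1f} s")    # ~0.5 s
```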
Again, this is a technology that a network administrator does not install, configure, or run separately; it is now built in to Windows Server 2008, and it ultimately improves the raw performance of applications and tasks that used to queue up serially on a server and can now be handled in parallel, with each processor core handling the added tasks. Windows Server 2008 also includes a service that helps to ensure user sessions are completely terminated when a user logs off of a system. It removes temporary file content, cache memory content, and other information typically generated during a user session but deemed unnecessary for longer-term storage.
This service is particularly useful for organizations using Windows Terminal Services, where user sessions are routinely created on a server and, for security purposes, the user profile data is removed when the user logs off of the session. Hyper-V is a technology built in to the core of the operating system in Windows Server 2008 that greatly enhances the performance and capabilities of server virtualization in a Windows environment. In the past, virtual server software sat on top of the network operating system, and each guest session was dependent on many shared components of the operating system.
Hyper-V provides a very thin layer between the hardware abstraction layer of the system and the operating system, which allows guest sessions in a virtualized environment to communicate directly with the hardware layer of the system.
Without having the host operating system in the way, guest sessions can perform significantly faster than in the past, and guest sessions can operate independently of the host operating system, gaining better reliability by eliminating host operating system bottlenecks. Hyper-V and server virtualization are covered in more detail in Chapter 37, "Deploying and Using Windows Virtualization." As much as there have been significant improvements under the hood that greatly enhance the performance, reliability, and scalability of Windows Server 2008 in the enterprise, Windows servers have always been exceptional application servers hosting critical business applications for organizations.
Windows Server 2008 continues the tradition of the operating system being an application server, with common server roles included in the operating system. When installing Windows Server 2008, the Server Manager console provides a list of server roles that can be added to a system, as shown in Figure 1.
This book focuses on the Windows Server 2008 operating system and the planning, migration, security, administration, and support of the operating system. Windows Server 2008 is also the base network operating system on top of which all future Windows Server applications will be built.
Technology will be the major focus of the next two articles, but for now, we need to consider the wider implications of high availability. We normally concentrate on the servers and ensure that the hardware has the maximum levels of resiliency.
On top of this, we need to consider other factors. Highly available systems explicitly mean higher costs due to the technology and people we need to utilize.
The more availability we want, the higher the costs will be. A business decision must be made regarding the cost of implementing the highly available system when compared against the risk to the business of the system not being available.
This calculation should include the cost of downtime internally together with the potential loss of business and reputation. When a system is unavailable and people can't work, the final costs can be huge, leading to the question "We lost how much?" We need high availability to ensure our business processes keep functioning. This ensures our revenue streams and business reputation are protected.
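To make that cost-versus-risk calculation concrete, here is a minimal sketch that converts an availability target into expected annual downtime and a potential loss figure; the hourly cost and solution cost are invented for illustration.

```python
# Back-of-the-envelope downtime cost calculation; all figures are illustrative.
HOURS_PER_YEAR = 24 * 365

def annual_downtime_hours(availability_percent):
    """Convert an availability target into expected downtime per year."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

COST_PER_HOUR = 20_000      # assumed lost revenue plus idle staff per hour of outage
SOLUTION_COST = 150_000     # assumed annual cost of the high-availability solution

for availability in (99.0, 99.9, 99.99):
    downtime = annual_downtime_hours(availability)
    loss = downtime * COST_PER_HOUR
    print(f"{availability:6.2f}% -> {downtime:6.1f} h/year -> potential loss ${loss:,.0f}")

# Weigh the potential loss at the availability level you can achieve today
# against SOLUTION_COST to decide whether the investment is justified.
```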
We achieve high availability through the correct mixture of people, processes, and technology. In Article 1, we explored the reasons for needing high availability in our Windows Server environments. This article will examine the high-availability options that are native to Windows.
We will discuss the advances in clustering that Windows Server 2008 brings, including Windows Server 2008 R2, and look at some of the possible obstructions to adopting the high-availability technologies that are "out-of-the-box" in Windows Server 2008. However, we can also achieve high availability through the use of multi-instance applications and Network Load Balancing (NLB).
In both cases, there are a number of servers that can supply the functionality. With multi-instance applications such as Active Directory and DNS, data is automatically replicated between the domain controllers and DNS servers. Client machines are configured to use more than one of the possible targets. This approach works in a limited number of cases.
It also supplies a disaster recovery capability in that the replication can be across sites. We still need to back up the data! NLB is used to provide load balancing and high availability across a number of servers, most commonly web servers; however, it is possible to load balance other TCP protocols as well. The nodes of the NLB cluster will balance the traffic to the cluster between themselves. If a node fails, the traffic will be redistributed amongst the remaining nodes of the cluster.
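The following sketch illustrates the redistribution idea in the simplest possible way: clients are mapped deterministically onto the set of live nodes, and when a node is removed its traffic lands on the survivors. This is a conceptual illustration only, not the algorithm NLB actually uses.

```python
# Conceptual sketch of spreading clients across live nodes; NOT the NLB algorithm.
import hashlib

def pick_node(client_ip, live_nodes):
    """Deterministically map a client onto one of the currently live nodes."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return live_nodes[int.from_bytes(digest, "big") % len(live_nodes)]

nodes = ["web1", "web2", "web3"]
clients = [f"10.0.0.{i}" for i in range(1, 7)]
print({client: pick_node(client, nodes) for client in clients})

# If web2 fails, its clients land on the remaining nodes. (With this naive
# modulo scheme some other clients also move; real load balancers work harder
# to keep existing mappings stable.)
nodes.remove("web2")
print({client: pick_node(client, nodes) for client in clients})
```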
There are some applications that won't work with NLB and need hardware load balancing instead. The majority of our high availability instances will involve clustering of Windows servers.
These clusters could support databases, email systems, or other business-critical applications. We could cluster our Web servers, though we wouldn't be able to put as many nodes into the cluster as we could with NLB and we wouldn't gain load balancing.
In many organizations, the standard approach is to build a two-node cluster for each application. One node is active and the other is passive, waiting to host the application in the event of failure on the first node. This setup ensures that each application has a dedicated failover path and that the correct level of resources (memory and CPU) is available for the application.
The maximum number of nodes supported in a Windows Server failover cluster depends on the version of the operating system (OS) that is being used. Windows Server 2008 R2 is 64-bit only, so it will support up to 16 nodes in a failover cluster. It is not possible to support both 32-bit and 64-bit servers in the same cluster. These figures suggest that it may be possible to reduce the cost of clustering by "sharing" the passive nodes.
Most new clusters will be 64-bit, as email and database systems can take advantage of the memory that becomes available through using a 64-bit OS. In a two-node cluster, the second node is the only failover target. In a larger cluster, the possible failover nodes for each application need to be managed to ensure that resources are available and all the applications don't end up on a single node.
Organizations have attempted to reduce the cost of failover clustering by running the cluster in an active-active configuration. In this case, both nodes are running an application, often database instances, and they are configured to fail over to the other node. This can lead to both applications running on the same node, with an adverse impact on performance.
Active-active cluster configuration is not recommended, and modern applications are appearing that no longer support it. We have seen that Windows Server 2008 can support more nodes in a failover cluster than previous versions of Windows. What else is new? One issue with failover clustering in earlier versions of Windows has been the restrictions on hardware that was supported when clustering.
The cluster configuration of servers and storage had to be on the approved and tested list before the cluster configuration was fully supported. One major issue was the support of hardware drivers. If the drivers hadn't been tested and approved, the cluster couldn't be upgraded to the new drivers.
This has changed, as we will see. There are still hardware restrictions, but they are of a more "common-sense" variety than hard-and-fast rules. When creating a cluster, it makes sense to use matching servers. There is an argument that says that the passive node could be less powerful than the active node because the passive node won't really be used much. This is a false economy. In the event of failover, the passive node may become the only node available in a two-node cluster.
We don't want business-critical systems suffering performance problems. Use identical servers when building the cluster. The servers should also be as resilient as possible, with as much redundancy built in as possible: for instance, fans, power supplies, and network cards. A Windows Server 2008 cluster is self-tested and validated. Hardware should still be certified for Windows (all major manufacturers do this), and building clusters with servers from different manufacturers is not a recommended practice.
Clustering is now a feature of Windows Server 2008 rather than being treated as a service. Once the cluster servers and storage are assembled, the Failover Clustering feature can be enabled on each node. The cluster validation wizard will ask for the names of the servers in the cluster that the validation process will examine. After testing, the results are saved to all nodes in the cluster.
The cluster configuration will be supported by Microsoft if it passes the validation wizard's testing. In previous versions of Windows, all IP addresses for a cluster had to be static. It is now possible to use DHCP-supplied addresses for a cluster.
If this practice is adopted, ensure that the addresses are reserved in DHCP. Earlier versions of Windows Server had a single point of failure in terms of the quorum disk. This disk had to be available for the cluster to continue, as it determined which node controlled the cluster. Failure was not a common occurrence, as clusters usually use SAN storage with its higher reliability. The use of the quorum disk was the most common scenario due to the prevalence of two-node clusters.
Windows Server 2003 introduced the majority node set model, where each node has the quorum resource replicated to local storage. This model provides better resiliency, at the cost of reduced flexibility in terms of an increase in the number of nodes that must be online for the cluster to function. Windows Server 2008 combines these models into a majority quorum model. Each node in the cluster, plus the quorum resource (now known as the witness), is assigned a vote.
A majority of votes controls the cluster. If only the witness resource is assigned a vote, the configuration duplicates the older quorum disk behavior; alternatively, assigning votes only to the nodes duplicates the majority node set configuration. The witness can be a separate disk or even a file share on a server outside of the cluster.
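A minimal sketch of the vote arithmetic may help: the cluster stays up only while more than half of the configured votes remain online. The scenarios below use the Windows Server 2008 quorum mode names, but the code itself is only a conceptual illustration, not the cluster service's implementation.

```python
# Conceptual sketch of majority-based quorum arithmetic.

def has_quorum(votes_online, votes_total):
    """The cluster keeps running only while more than half of all votes are online."""
    return votes_online > votes_total // 2

# Node Majority: a five-node cluster (5 votes) survives the loss of two nodes.
print(has_quorum(votes_online=3, votes_total=5))   # True
print(has_quorum(votes_online=2, votes_total=5))   # False

# Node and File Share Majority: two nodes plus a witness share (3 votes) survive
# the loss of one node as long as the witness is still reachable.
print(has_quorum(votes_online=2, votes_total=3))   # True  (one node + witness)
print(has_quorum(votes_online=1, votes_total=3))   # False (a lone node stops)
```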
The witness share can't be part of a DFS environment. It is possible to change the quorum model after the cluster has been created, but this is not recommended. We have already seen some of the networking changes that failover clustering in Windows Server 2008 introduces.
The biggest change is that the cluster nodes no longer need to be on the same logical subnet. This restriction has been lifted, enabling us to create geographically dispersed clusters without the need for VLANs spanning our sites. The heartbeat timeout between the cluster nodes is configurable, which means that the network latency (within reason) doesn't become an issue for a dispersed cluster. At first sight, this may seem to solve our high availability and disaster recovery issues in one go.
However, there are still a few points to consider. The enhancements to failover clustering are welcome, but there are still some obstructions to using native clustering. The primary high-availability option within Windows Server 2008 is failover clustering. This is enhanced and easier to work with compared with previous versions. There are still some obstructions to the use of native high-availability options.
We will see possible solutions to these obstructions in the next article. The previous two articles in this series covered the need for high availability and how we can satisfy that need with the native Windows technologies. There are situations where those technologies do not meet our needs and we have to look to third-party or non-native solutions.
Those options will be covered in this article. There are a number of possibilities for supplying high availability to our systems. We must remember that not all options are suitable for a given scenario and that "just because we can doesn't mean we should." When shopping for a solution, we must remember the criticality of the systems we want to protect and whether the cost of downtime justifies the cost of the solution.
We must also think of the skills available to our IT department. In many cases, these solutions are ideal for an organization with limited IT skills and presence. This could be a small to midsize organization or the "Branch Office" scenario in a larger, more distributed environment.
We have seen that for true high availability, we need to protect the server and data. One method of protecting the data is by using storage-controlled replication. This is sometimes referred to as block-level replication. The concept is simple in that two sets of storage are linked across the network. Changes that occur on the primary are replicated to the secondary storage.
The replication works at the disk block level to minimize the replication traffic. Data replication of this sort involves additional software for controlling the storage and replication. If both sets of storage are linked to the nodes of the cluster, it is possible for the storage to fail over to the secondary in the event of a failure in the primary storage. Although it might seem to be an ideal solution, there are some downsides to consider.
The first potential issue is latency. Any delay in replication invites a situation where a failure means that the data on the secondary storage is incomplete, which could lead to errors in the business process.
If the replication occurs continuously, there is the possibility that corrupt data will be replicated between the storage units. The network links between the storage units can contribute to the latency and need to be resilient to ensure a network failure doesn't prevent replication.
One other potential issue we need to consider is transactional consistency. If we are replicating database files in this manner, we have to ensure that the database transactions are replicated correctly so that the database will be in a consistent state in the event of failover.
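To illustrate why block-level replication keeps traffic low, the sketch below ships only the disk blocks that differ between the primary and secondary copies. It is a conceptual toy, not any vendor's replication engine, and it ignores the latency and consistency issues just discussed.

```python
# Conceptual sketch of block-level replication: ship only the changed blocks.
BLOCK_SIZE = 4096

def changed_blocks(primary, secondary):
    """Yield (offset, data) for each block that differs between the two copies."""
    for offset in range(0, len(primary), BLOCK_SIZE):
        block = primary[offset:offset + BLOCK_SIZE]
        if block != secondary[offset:offset + BLOCK_SIZE]:
            yield offset, block

def replicate(primary, secondary):
    """Apply only the differing blocks to bring the secondary up to date."""
    shipped = 0
    for offset, block in changed_blocks(primary, bytes(secondary)):
        secondary[offset:offset + len(block)] = block
        shipped += len(block)
    return shipped

primary = bytearray(1_000_000)              # 1 MB volume, initially all zeros
secondary = bytearray(primary)              # secondary starts in sync
primary[500_000:500_010] = b"new data!!"    # a small change lands on the primary

print(replicate(bytes(primary), secondary), "bytes shipped")   # one 4 KB block
print(secondary == primary)                                    # True: back in sync
```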
Storage-based replication can be used as part of a virtualization solution to enable cross-site high availability. Virtualization is a major part of a modern infrastructure. Server virtualization is the first thought when this topic is mentioned, but we can also virtualize applications, application presentation, storage, and the desktop.
Varying degrees of high availability can be achieved using these solutions. Server virtualization in a production environment will consist of one or more hosts that are connected to storage. The hosts will run a hypervisor that enables them to host a number of virtual servers as guests.
Each guest has at least one file containing the operating system (OS) and data. Virtualization enables us to make best use of our hardware, but if that hardware fails, we could lose more systems than we would in a physical environment.
High availability is achieved by configuring the virtual hosts in a cluster. Failover clustering has been a component of Microsoft Windows server products beginning with Windows NT 4.0. Since those early days, the failover cluster component has evolved considerably, especially in terms of ease of configuration and supported applications.
If you use Windows Server with Hyper-V as your virtualization platform, you can integrate failover clustering as part of your high-availability strategy for your virtualized infrastructure. A Windows Server failover cluster consists of at least two servers (nodes) that are connected through multiple network links, one of which enables monitoring the status of each node. Each failover cluster node is connected to a common storage array such as a Storage Area Network (SAN), and only one node in a cluster can own the set of network and disk resources associated with an application or service at any one time.
In terms of scale, a Windows Server failover cluster can contain up to 16 nodes. The nodes monitor each other using a network heartbeat to determine if nodes are responsive. If a node becomes unresponsive, the application or service running on the failed cluster node will be restarted on another cluster node after it has taken ownership of resources.
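The sketch below shows the heartbeat idea in miniature: a node is declared failed only after several expected heartbeats in a row go missing, which is also why a configurable timeout matters on high-latency links. The interval and threshold here are invented placeholders, not the cluster service's real defaults.

```python
# Simplified heartbeat failure detection; not the cluster service's mechanism.
import time

HEARTBEAT_INTERVAL = 1.0     # seconds between expected heartbeats (illustrative)
MISSED_THRESHOLD = 5         # consecutive misses before declaring the node failed

class NodeMonitor:
    def __init__(self, node_name):
        self.node_name = node_name
        self.last_heartbeat = time.monotonic()

    def record_heartbeat(self):
        """Called whenever a heartbeat arrives from the monitored node."""
        self.last_heartbeat = time.monotonic()

    def is_failed(self):
        """Tolerate a few late packets; only prolonged silence counts as failure."""
        silence = time.monotonic() - self.last_heartbeat
        return silence > HEARTBEAT_INTERVAL * MISSED_THRESHOLD

monitor = NodeMonitor("node2")
if monitor.is_failed():
    # A surviving node would take ownership of the disk and network resources
    # and restart the clustered application or service at this point.
    print("node2 unresponsive: failing over its resources to another node")
```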
Beginning with Windows Server 2008, geographically dispersed or stretch clusters can also be implemented without requiring custom or specialized hardware. This provides you with the ability to implement a failover cluster that can manage unplanned downtime by failing over to another local node in the case of a single server failure, or to a node in another geographical region in the event of a more severe local disruption, such as might be caused by an extended power outage, natural disaster, or other large-scale problem.
Using failover clustering with Windows Server and Hyper-V provides the ability to implement a high-availability strategy that can manage both unplanned and planned downtime in a virtualized infrastructure. There are two different levels at which you can implement a failover cluster in a Hyper-V environment: at the virtualization host level, and at the guest operating system level. As shown in Figure 1, a guest operating system failover cluster is implemented between two or more virtual machines that run on separate Hyper-V hosts and are connected to a shared storage system.
In order to implement this option, you have to run an operating system in the virtual machine that supports failover clustering, such as the Windows Server 2003 R2 (up to 8 nodes) or Windows Server 2008 Enterprise or Datacenter (up to 16 nodes) editions. The application itself must also be cluster-aware. This means that the application has been developed with specific features that allow it to interact with the cluster service and enable it to fail over and restart with all required resources on a different cluster node.
If you are planning a guest operating system failover cluster, iSCSI is the only shared storage access protocol that is supported for this configuration. You should also dedicate one or more physical network cards and configure individual virtual networks on each Hyper-V host for iSCSI storage access.
It is important to note that this configuration does not support directly attaching an iSCSI target to the virtual machine as a boot device. A guest operating system failover cluster is capable of supporting planned and unplanned downtime for cluster-aware applications.
In fact, this configuration will manage unplanned downtime caused by a failure or crash that occurs within the virtual machine, as well as a failure or crash that occurs at the Hyper-V host platform level. The second failover cluster option consists of two or more servers running Windows Server with the Hyper-V role installed, each configured as a cluster node and with connections to a shared storage system.