Updated: March 12, 1996
E-mail: tonys@microsoft.com
Overview
Windows NT Server Performance Profile
Microsoft SQL Server Performance Profile
Systems Management Server Performance Profile
SNA Server Performance Profile
Mail Server and Schedule+ Performance Profiles
Hardware Planning for Performance
Microsoft BackOffice Tuning
Appendix A
Microsoft® BackOffice is a collection of five tightly integrated technology components:
These technology components collectively form the building blocks of an enterprise solution architecture. Because each component is a constituent of that architecture, the optimal coexistence of all components is crucial. Accordingly, this paper addresses the optimization and tuning of the Microsoft BackOffice architecture.
This paper approaches the optimal coexistence of the Microsoft BackOffice services and operating environment by examining the performance profiles associated with each component. In turn, the complementary and antagonistic performance characteristics of the services are identified. Having gained knowledge of these performance characteristics, appropriate application of this information within the context of standard Microsoft Windows NT performance tuning will yield an optimal Microsoft BackOffice architecture.
In essence, the optimization and tuning of Microsoft BackOffice is an exercise in application tuning. Therefore, since Windows NT Server is a self-optimizing operating environment, we need only be concerned with the Microsoft BackOffice services and the hardware platforms on which these services reside, in any of their many possible combinations. Consequently, the following Microsoft BackOffice solution architecture phases are discussed in detail with respect to performance considerations and practices.
However, this paper does not discuss in detail the optimization and tuning of these individual Microsoft BackOffice technology components. For such information, please refer to the referenced Tech Ed 95 papers and associated documentation in Appendix A of this document.
Before proceeding further it is suggested that you become familiar with the Windows NT Performance Monitor and its use for gathering performance-related data. Moreover, since it is unfeasible to cover all performance-oriented concepts and techniques in detail within the scope of this paper, the information presented will address topics that have the greatest or most strategic impact on Microsoft BackOffice performance. Furthermore, it is expected that this information will be used to conduct performance experiments that will lead to the best performance within your own Microsoft BackOffice environment. The information presented in this paper should be viewed as a set of guidelines, since every Microsoft BackOffice environment is different. Finally, the order of topic presentation will correspond to the order in which such subjects should be considered during the design and implementation of the Microsoft BackOffice architecture solution process.
Microsoft Windows NT Server is an adaptive operating system from a performance perspective. Windows NT Server is built upon a collection of adaptive algorithms that aid the system in constantly tuning itself during operations. In fact, Windows NT Server is specifically adapted to the demands of the enterprise server environment. However, although it is very efficient at optimally compensating for a wide range of operational demands, some areas do require manual tuning in order to achieve optimal performance.
These manual performance tuning efforts are typically associated with the optimization of the four primary resources on any server running Windows NT Server, namely:
The performance optimization goal is to eliminate resource bottlenecks, thereby achieving a balance between application and system resource requirements. Hence, from a Microsoft BackOffice perspective, Windows NT Server is the provider and manager of system resources. Therefore, as long as the basic resource requirements for Windows NT Server are satisfied, there are no antagonistic performance characteristics that should affect the other Microsoft BackOffice components.
Accordingly, the following information provides you with optimization information with respect to the Windows NT Server system resources described. Moreover, this information addresses resource requirements in the context of a Microsoft BackOffice solution architecture. Once you have applied this information and feel your system is optimized, it is then time to gather data on current system capacity. The data will allow you to do the following:
This information is rather technical in nature and assumes that you already know a great deal about Windows NT Server. However, it only touches the surface of optimization. Therefore, please refer to the appropriate references in Appendix A of this paper.
The following optimization guidelines will help you to optimize your Windows NT Server environment. In addition, application of the guidelines will be unique to each Microsoft BackOffice system and thus may be dependent on other Microsoft BackOffice component requirements.
Use this dialog box to change the relative responsiveness of applications that are running at the same time. When more than one application is running in Windows NT Server, by default the foreground application receives more processor time, and so responds better, than applications running in the background.
The "Maximize Throughput for Network Applications" is the optimal setting for Microsoft BackOffice applications such as Microsoft SQL Server, SNA Server, and Systems Management Server. With this option set, network application access has priority over file cache access to memory (4 MB of memory is allocated to available memory for starting up and running local applications). In addition, the amount of memory allocated to the server service (for such resources as InitWorkItems, MaxWorkItems, RawWorkItems, MaxPagedMemory, MaxNonPagedMem, ThreadCountAdd, BlockingThreads, MinFreeConnections, and MaxFreeConnection) is appropriately optimized for this choice.
Microsoft SQL Server is a robust and full-featured enterprise relational database system. As such, SQL Server requires dedicated system resources in order to function in an optimal manner. If the necessary system resources are not made available to SQL Server, then the possibility of poor performance is great. Furthermore, contention for system resources between SQL Server and other Microsoft BackOffice applications will be commonplace, if the resource requirements of SQL Server are not satisfied.
Accordingly, the following information is a brief description of how SQL Server interacts with each of the four system resource areas. Thus, this information will aid you in addressing resource interaction and/or contention issues between SQL Server and other Microsoft BackOffice components.
In trying to determine which initial CPU architecture is right for your particular needs, you are attempting to estimate the level of CPU bound work that will be occurring on the hardware platform. As far as Microsoft SQL Server is concerned, CPU bound work can occur when a large cache is available and is being optimally used, or when a small cache is available with a great deal of disk I/O activity aside from that generated by transaction log writes. The type of questions that must be answered at this point are as follows:
The answers to these questions may have already come from the system or application requirements. If not, you should be able to make some reasonable estimates. The bottom line is to purchase the most powerful CPU architecture you can justify. This justification should be based upon your estimates, user requirements, and the logical database design. However, based upon experience, it is suggested that the minimum CPU configuration consist of at least a single 80486/50 processor.
Determining the optimal memory configuration for a Microsoft SQL Server solution is crucial to achieving stellar performance. SQL Server uses memory for its procedure cache, data and index page caching, static server overhead, and configurable overhead. SQL Server can use up to 2 GB of virtual memory, this being the maximum configurable value. In addition, it should not be forgotten that Windows NT Server and all of its associated services also require memory.
Windows NT Server provides each Win32® application programming interface application with a virtual address space of 4 GB. This address space is mapped by the Windows NT Server Virtual Memory Manager (VMM) to physical memory, which can be up to 4 GB in size depending upon the hardware platform. The Microsoft SQL Server application only knows about virtual addresses and thus cannot access physical memory directly; that access is controlled by the VMM. In addition, Windows NT Server allows for the creation of virtual address space that exceeds the available physical memory. Therefore, it is possible to adversely affect performance of SQL Server by allocating more virtual memory than there is available physical memory. Hence, the following table contains rule-of-thumb recommendations for different SQL Server memory configurations based upon available physical memory.
Machine Memory (MB)    Microsoft SQL Server Memory (MB)
16                     4
24                     6
32                     16
48                     28
64                     40
128                    100
256                    216
512                    464
These memory configurations are made for dedicated Microsoft SQL Server systems and should be appropriately adjusted if other activities, such as file and print sharing or application services, will be running on the same Microsoft BackOffice platform as SQL Server. However, in most cases it is recommended that a minimum physical memory configuration of 32 MB be installed. Such a configuration will reserve at least 16 MB for Windows NT. Again, these memory configuration recommendations are only guidelines for initial configuration estimates and will most likely require appropriate tuning. Nevertheless, it is possible to make a more accurate and optimal estimate for SQL Server memory requirements based upon the previous knowledge gained from user and application performance requirements.
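The rule-of-thumb table above can be captured in a small lookup routine. This is only a sketch of the guideline figures quoted in this paper, not an interface of SQL Server or Windows NT; the fallback behavior for machine sizes between table entries is our own conservative assumption.

```python
# Rule-of-thumb SQL Server memory allocations (MB) from the table above,
# keyed by total machine memory (MB). Dedicated-server figures only.
SQL_MEMORY_GUIDELINES = {
    16: 4, 24: 6, 32: 16, 48: 28, 64: 40, 128: 100, 256: 216, 512: 464,
}

def recommended_sql_memory(machine_mb):
    """Return the guideline SQL Server allocation for a machine size.

    For sizes between table entries, fall back to the next-smaller
    entry (a conservative assumption, not part of the original table).
    """
    eligible = [m for m in SQL_MEMORY_GUIDELINES if m <= machine_mb]
    if not eligible:
        raise ValueError("Below the 16 MB minimum considered in the table")
    return SQL_MEMORY_GUIDELINES[max(eligible)]

print(recommended_sql_memory(64))   # 40
print(recommended_sql_memory(96))   # falls back to the 64 MB entry: 40
```

As the guidelines in the text note, these values assume a dedicated SQL Server system and should be reduced when other services share the platform.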
In order to make a more accurate estimate for an optimal memory configuration, refer to the following table for SQL Server for Windows NT configurable and static overhead memory requirements.
Resource                 Configurable   Default Value   Bytes per Resource   Space (MB)
User Connections         Yes            25              18,000               0.43
Open Databases           Yes            10              650                  0.01
Open Objects             Yes            500             72                   0.04
Locks                    Yes            5,000           28                   0.13
Devices                  No             256             300                  0.07
Static Server Overhead   No             N/A             ~2,000,000           2.0
TOTAL Overhead                                                               2.68
You can use this information to calculate a more exact memory configuration estimate with respect to actual memory usage. This is done by taking the calculated TOTAL Overhead above and applying it to the following formula:
Microsoft SQL Server Physical Memory - TOTAL Overhead = SQL Server Memory Cache
The SQL Server memory cache is the amount of memory that is dedicated to the procedure cache and the data cache.
The procedure cache is the amount of the SQL Server memory cache that is dedicated to the caching of stored procedures, triggers, views, rules, and defaults. Consequently, if your system will take advantage of these data objects and the stored procedures are to be used by many users, then this value should be proportional to such requirements. Furthermore, these objects are stored in the procedure cache based upon the frequency of their use. Thus, you want the most utilized data objects to be accessed in cache versus retrieval from disk. The system default is 20% of the available memory cache.
The data or buffer cache is the amount of the SQL Server memory cache that is dedicated to the caching of data and index pages. These pages are stored to the data cache based upon the frequency of their use. Therefore, you want the data cache to be large enough to accommodate the most utilized data and index pages without having to read them from disk. The system default is 80% of the available memory cache.
Accordingly, the following example for a dedicated SQL Server illustrates a more accurate estimate of SQL Server memory requirements.
Resource                 Estimated Value   Bytes per Resource   Space (MB)
User Connections         50                18,000               0.9
Open Databases           10 (default)      650                  0.01
Open Objects             500 (default)     72                   0.04
Locks                    15,000            28                   0.42
Devices                  256               300                  0.07
Static Server Overhead   N/A               ~2,000,000           2.0
TOTAL Overhead                                                  3.44
Hence, as a result of such overhead requirements, a server with 32 MB allocated to SQL Server will have approximately 28 MB of memory cache to work with. As overhead requirements such as user connections and locks grow, this value will be reduced and may subsequently lead to performance problems, which will then require tuning.
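The worked example above can be reproduced with a short calculation. The per-resource figures come directly from the table; we assume 32 MB has been allocated to SQL Server, which yields the approximately 28 MB quoted, and we follow the worked example's convention of converting bytes to MB by dividing by 1,000,000.

```python
# Per-resource overhead from the worked example above:
# (estimated value, bytes per resource)
overhead_items = {
    "user connections":       (50,     18_000),
    "open databases":         (10,     650),
    "open objects":           (500,    72),
    "locks":                  (15_000, 28),
    "devices":                (256,    300),
    "static server overhead": (1,      2_000_000),  # ~2 MB fixed
}

total_bytes = sum(count * size for count, size in overhead_items.values())
total_mb = total_bytes / 1_000_000           # decimal MB, as in the example table
print(f"TOTAL Overhead: {total_mb:.2f} MB")  # 3.44 MB

sql_server_memory_mb = 32                    # assumed allocation to SQL Server
cache_mb = sql_server_memory_mb - total_mb   # left for procedure + data cache
print(f"SQL Server memory cache: {cache_mb:.1f} MB")  # roughly 28 MB
```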
Achieving optimal disk I/O is the most important aspect of designing an optimal Microsoft SQL Server solution. The disk subsystem configuration as addressed here consists of at least one disk controller device and one or more hard disk units, as well as consideration for disk configuration and associated file systems. The goal is to select a combination of these components and technologies that complements the performance characteristics of SQL Server. Hence, disk subsystem I/O as it relates to reads, writes, and caching defines the performance characteristics that are most important to SQL Server.
The disk subsystem components and features you should look for are as follows:
The determination of how many drives, of what size, of what configuration, and of what level of fault tolerance is made by looking back to the user and application performance requirements, understanding the logical database design and the associated data, and understanding the interplay between system memory and disk I/O with respect to Windows NT Server, Microsoft SQL Server, and other Microsoft BackOffice components. While it is beyond the scope of this paper to cover this topic in depth, there are several key concepts and guidelines that aid in the selection of appropriate disk subsystem components.
Concept 1: Most database I/Os (reads and writes) are random with respect to data and indexes. This is true for online transaction processing and decision support systems.
Concept 2: Writes to the Microsoft SQL Server transaction log are sequential and occur as large bursts of page level I/O during the checkpoint process or update, insert, or delete operations.
Concept 3: Optimal access to randomly accessed data and indexes is achieved by distributing the database over several physical disk units in a single striped volume (RAID 0 or RAID 5). This results in multiple heads being able to access the data and indexes.
Concept 4: Optimal access to sequentially accessed data is achieved by isolating it from the randomly accessed data and index volume(s), on separate physical disk units, which may be RAID configured (usually RAID 1, mirrored for logs). Sequential access is faster via a single head that is able to move in one direction.
Concept 5: Duplexing of intelligent disk controllers (SCSI or Array) will usually yield greater performance. This is especially true of systems that must sustain high transaction throughputs, systems with small data (buffer) caches, and systems with large data volumes. In addition, if the number of physical disk units exceeds a controller's capacity, another controller will be necessary.
Concept 6: The minimum optimal disk subsystem configuration for any Microsoft SQL Server solution will consist of the SCSI type of controller and at least two SCSI drives. This disk configuration is necessary in order to isolate the SQL Server transaction log(s), placing them on one physical disk and the database devices or file(s) on the other physical disk.
These concepts should be used as guidelines and not as absolutes. Each SQL Server environment is unique, thereby requiring experimentation and tuning appropriate to the conditions and requirements.
As with intelligent disk controllers, the goal is to select an intelligent network interface card (NIC) that will not rob CPU or memory resources from the Microsoft SQL Server system. This network card should meet the following minimum recommendations.
The following optimization guidelines will aid in the optimization of Microsoft SQL Server as part of a Microsoft BackOffice solution. Optimal application of these guidelines is unique to each environment. Thus, you may wish to experiment with different configurations and values in order to arrive at the best combination of settings for your particular Microsoft BackOffice system.
Since these options affect the priority at which SQL Server threads run, the following definitions are necessary for basic understanding of Windows NT thread scheduling.
Boosting Microsoft SQL Server's priority can improve performance and throughput on single- and multiple-processor hardware platforms. By default this option is turned off and SQL Server runs at a priority of 7. When selected on a single-processor platform, SQL Server runs at priority 13. When selected on a dedicated SMP platform, SQL Server runs at priority 24. The significance is of course that the Windows NT thread scheduler will favor SQL Server threads over threads of other processes.
If this option is turned on, it may degrade the performance of other processes. Hence, this option should only be turned on for dedicated SQL Server machines, or if slower performance of other processes is tolerable.
Microsoft SQL Server can take advantage of SMP platforms without this option being turned on. In this off state SQL Server runs at a priority level of 7. When this option is turned on, the priority is increased to 13, which enhances the scalability benefit that multiple CPUs provide to SQL Server performance.
As with the Boost SQL Server priority option, if turned on it may degrade the performance of other processes. Hence, this option should only be turned on for dedicated SQL Server-based machines.
If both options are turned on for SMP platforms, SQL Server runs at a priority of 24.
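The priority behavior described above can be summarized as a small decision function. This is a sketch of the rules as stated in this paper, not of SQL Server's actual internal scheduling logic.

```python
def sql_server_base_priority(boost, smp_concurrency, is_smp):
    """Windows NT priority for SQL Server threads, per the rules above.

    boost           -- the "Boost SQL Server priority" option
    smp_concurrency -- the SMP concurrency option
    is_smp          -- True on a dedicated multiprocessor platform
    """
    if is_smp and boost:
        return 24   # boosted on a dedicated SMP platform
    if boost or smp_concurrency:
        return 13   # boosted on a single processor, or SMP concurrency on
    return 7        # default priority

print(sql_server_base_priority(False, False, False))  # 7
print(sql_server_base_priority(True,  False, False))  # 13
print(sql_server_base_priority(True,  True,  True))   # 24
```

As the text cautions, the two non-default priorities should only be used on dedicated SQL Server machines, since other processes will be starved of CPU time.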
The following SQL Server configuration parameters are those that impact performance or performance-related resources. Each configuration parameter is defined with respect to its function and its impact on performance. See the SQL Server Configuration Guide for more details on sp_configure settings. It is recommended that you start with the default values and experiment with changing parameter values once you have obtained a baseline of performance. When adjusting parameters to tune performance, adjust one parameter at a time, and measure the difference in performance; changing multiple parameters in an ad hoc fashion is generally not productive.
Performance impact: Physical memory is used by SQL Server for server operation overhead, data (buffer) cache, and procedure cache. Hence, in order to reduce SQL Server page faults, an appropriate amount of memory should be configured. Please refer to the previous discussion in this paper concerning memory.
Performance impact: SQL Server for Windows NT uses the asynchronous I/O capability of the Windows NT operating system. Examples of these are the Win32 API calls ReadFile(), ReadFileEx(), WriteFile(), and WriteFileEx(). See the Win32 Software Development Kit (SDK) for more information. Asynchronous, or overlapped I/O, refers to the ability of a calling program to issue an I/O request and without waiting for completion to continue with another activity. When the I/O finishes, the operating system will notify the program via a callback or other Win32 synchronization mechanism.
Performance impact: Having a properly sized procedure cache will result in fewer page faults with respect to use of stored procedures, triggers, rules, and defaults. Please refer to the previous discussion in this paper concerning memory.
Performance impact: Forcing Tempdb into RAM may result in increased performance if a significant amount of processing involves the creation and use of "WORKTABLES" by the SQL Server optimizer. Execution of such processing in RAM is inherently faster than corresponding disk I/O from paging.
The tuning of system resources typically involves the discovery of "bottlenecks." A bottleneck is the single resource that consumes the most time during a task's execution. In the case of Microsoft SQL Server, such resource bottlenecks adversely affect the performance of normal relational database operations as well as causing contention with other Microsoft BackOffice applications. Hence, the following information pertains to the detection of SQL Server resource bottlenecks and the subsequent adjustment of the resource in order to relieve the demand and increase performance.
Processor tuning involves the detection of CPU-bound operations. The following processor bottleneck monitoring guidelines will aid in determining such problems.
Action: If this occurs you need to determine which Microsoft SQL Server User process is consuming the CPU. To determine which process is using up most of the CPU's time, monitor the SQLServer-Users: CPUtime for all of the process instances (spid). One or more will appear as using the greatest cumulative time. Having determined the offending process instance, examine the query for inefficient design. In addition, examine indexes and database design for inefficiencies with respect to excessive I/O, which consumes CPU cycles. (Wide tables and indexes cause more I/Os to occur as do table scans.)
Action: Examine the disk controller card and the network interface card. (See the topic under General Actions below.) In addition, if this is not a dedicated SQL Server system, look for other processes that meet the above criteria via Process: % Privileged Time and Process: % User Time. If you find such processes, eliminate them or schedule them to run at more convenient times.
Memory tuning involves the detection of memory-constrained operations. The following memory bottleneck monitoring guidelines will aid in determining such problems.
Action: Either allocate more memory to Microsoft SQL Server or increase the amount of system memory.
Action: Compare the SQLServer: Cache - Number of Free Buffers value against the LRUthreshold value. This value is derived by obtaining the total number of buffers allocated via the DBCC MEMUSAGE command and multiplying this value by the LRUthreshold percentage (default 0.03). If the number of free buffers is close to the derived value, then either allocate more memory to SQL Server or increase the amount of system memory.
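The free-buffer comparison described in this action can be sketched as follows. In practice the buffer total would come from DBCC MEMUSAGE and the free-buffer count from the Performance Monitor counter; the figures and the 10% margin used to define "close" below are our own illustrative assumptions.

```python
def free_buffers_low(free_buffers, total_buffers, lru_threshold=0.03):
    """True when free buffers approach the LRU threshold floor.

    total_buffers -- from DBCC MEMUSAGE
    free_buffers  -- from SQLServer: Cache - Number of Free Buffers
    The 10% "close" margin is an assumption for illustration.
    """
    floor = total_buffers * lru_threshold
    return free_buffers <= floor * 1.10

# Illustrative: 5,000 buffers allocated, 3% LRU threshold -> floor of 150.
print(free_buffers_low(160, 5000))  # True: within 10% of the 150-buffer floor
print(free_buffers_low(600, 5000))  # False: plenty of headroom
```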
Action: Increase the system memory or increase the memory dedicated to Windows NT, by decreasing the memory allocated to SQL Server or other processes. Moreover, you may also eliminate noncritical processes as these also utilize memory resources.
Action: Increase the memory allocated to SQL Server or decrease the procedure cache percentage, thereby increasing the data cache. If indexes are not being utilized, design intelligent indexes. If database tables are too wide, thus resulting in fewer data rows per data page, redesign the tables to be narrower.
Action: Increase the data cache size or the frequency of checkpoints. Checkpoints can be increased via the recovery interval value or by manual execution.
Disk subsystem tuning involves the detection of disk I/O constrained operations. Such bottleneck constraints may be caused by the disk controller, the physical disk drives, or lack of some other resource that results in excessive disk I/O generating activity. Furthermore, poor disk subsystem performance may also be caused by poor index or database design. The goal is to operate the Microsoft SQL Server with as few physical I/Os and associated interrupts as possible. The following disk I/O bottleneck monitoring guidelines will aid in achieving this goal.
Note In order to monitor low-level disk activity with respect to the PhysicalDisk Performance Monitor counters, it is necessary to enable the diskperf option. This can be accomplished by issuing the following command from the system prompt: diskperf -y. Running with this option enabled may result in a slight (0.1%-1.5%) degradation in performance. Hence, disable it when not required for use (diskperf -n).
When performance tuning the SQL Server disk subsystem, you should first attempt to isolate the disk I/O bottleneck with the SQLServer counters, using the PhysicalDisk and LogicalDisk counters for more detailed monitoring and refinement of an action plan.
Action: Observing either LogicalDisk: Disk Queue Length or PhysicalDisk: Disk Queue Length can reveal significant disk congestion. Typically, a value over 2 indicates disk congestion. Increasing the number of disk drives or obtaining faster drives will help performance.
Action: Obtaining faster disk drives or disk controllers will help to improve this value.
Network tuning with respect to Microsoft SQL Server performance is affected by the following:
Regarding the throughput of the LAN or WAN, this is beyond the scope of this paper and is not critical to the tuning of a specific SQL Server. However, when considering remote procedure calls between SQL Servers or data replication, LAN and/or WAN throughput will be an important concern. Thus, this section will deal with tuning issues related to the network interface card and system or SQL Server resources that affect the SQL Server's network performance. Accordingly, the following network bottleneck monitoring guidelines will deal with these issues.
Action: Determine if any processes or protocols extraneous to the operation of SQL Server are running. If so, eliminate them.
Action: Look at the number of SQLServer: User Connections and the SQLServer: Network Command Queue Length. If these values are also high, especially Network Command Queue Length, then consider increasing the number of available worker threads via sp_configure and/or increase memory allocated to SQL Server. However, you may wish to restrict user connections via sp_configure in order to decrease the workload on the SQL Server. Remember, user connections and worker threads are counted as overhead against the SQL Server memory allocation. Thus, plan accordingly when adjusting these values.
An application can change the packet size by using the DB-Library DBSETLPACKET() call. The packet size may also be changed when using the BCP and ISQL utilities via the [/a packetsize] parameter. Increasing the packet size will only work for named pipes clients connecting to SQL Server on Windows NT.
You can monitor the improvement in network read and write efficiency by viewing the SQLServer: Network Reads/sec and SQLServer: Network Writes/sec counters before and after changing the TDS packet size. Fewer reads and writes should occur after increasing the packet size.
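The effect of a larger Tabular Data Stream packet size on the read and write counts can be illustrated with a back-of-the-envelope estimate. This ignores protocol overhead and assumes a 512-byte default packet size, so treat the numbers as rough orders of magnitude only.

```python
import math

def packets_needed(result_set_bytes, packet_size):
    """Rough number of network reads/writes to move a result set."""
    return math.ceil(result_set_bytes / packet_size)

# Moving a 1 MB result set:
print(packets_needed(1_000_000, 512))   # 1954 packets at an assumed 512-byte default
print(packets_needed(1_000_000, 4096))  # 245 packets at a 4 KB packet size
```

Fewer, larger packets mean fewer reads and writes per transfer, which is exactly the improvement the counters above should show.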
Value Name         Default Value     Minimum Value    Maximum Value
PulseConcurrency   20                1                500
Pulse              300 (5 minutes)   60 (1 minute)    3600 (1 hour)
Randomize          1 (1 second)      0 (0 seconds)    120 (2 minutes)
Pulse defines the typical pulse frequency (in seconds). All SAM/LSA (User/Security account database) changes made within this time are collected together. After this time, a pulse is sent to each BDC needing the changes. No pulse is sent to a BDC that is up to date.
PulseConcurrency defines the maximum number of simultaneous pulses the PDC will send to BDCs.
Netlogon sends pulses to individual BDCs. The BDCs respond by asking for any database changes. To control the maximum load these responses place on the PDC, the PDC will only have PulseConcurrency pulses "pending" at once. The PDC should be sufficiently powerful to support this many concurrent replication RPC calls (related directly to Server service tuning as well as the amount of memory in the machine).
Increasing PulseConcurrency increases the load on the PDC. Decreasing PulseConcurrency increases the time it takes for a domain with a large number of BDCs to get a SAM/LSA change to all of the BDCs. Consider that the time to replicate a SAM/LSA change to all the BDCs in a domain will be greater than: ((Randomize / 2) * NumberOfBdcsInDomain) / PulseConcurrency
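The lower bound above can be evaluated directly. The figures below are illustrative only: Randomize raised to its 120-second maximum, a hypothetical domain of 200 BDCs, and the default PulseConcurrency of 20.

```python
def min_replication_seconds(randomize, n_bdcs, pulse_concurrency):
    """Lower bound (seconds) on propagating a SAM/LSA change to all BDCs,
    per the formula above:
    ((Randomize / 2) * NumberOfBdcsInDomain) / PulseConcurrency
    """
    return ((randomize / 2) * n_bdcs) / pulse_concurrency

# 120-second Randomize, 200 BDCs, PulseConcurrency of 20:
print(min_replication_seconds(120, 200, 20))  # 600.0 seconds, at least 10 minutes
```

This makes the trade-off concrete: doubling PulseConcurrency halves the lower bound but doubles the concurrent RPC load the PDC must sustain.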
For more in-depth information concerning Microsoft SQL Server optimization and tuning, refer to Appendix A.
Microsoft Systems Management Server is a computer management system that enables you to inventory and support computers on an enterprise network level. An enterprise network can be a single local area network (LAN) or a wide area network (WAN) that is composed of multiple LANs connected using WAN links (routers and bridges).
From a Microsoft BackOffice component perspective, Systems Management Server interacts directly with Windows NT Server as a collection of application services and utilizes Microsoft SQL Server as its data store. Furthermore, Systems Management Server instigates substantial network operations via the transfer of data from one LAN to another using a variety of protocols over various network media.
With respect to Microsoft BackOffice performance optimization, Systems Management Server is a very complex system for which there exists no perfect optimal configuration. However, the following information, consisting of optimization guidelines and techniques, can be used to optimally configure a variety of Microsoft BackOffice solutions that include Systems Management Server.
Accordingly, the following information presents a brief description of the Systems Management Server service components and their relationship to performance. Moreover, the interaction between Systems Management Server and Microsoft SQL Server will be discussed. For more information concerning Systems Management Server, please refer to Appendix A.
By default, all Systems Management Server services are installed on a site server. These services enable Systems Management Server to manage the site and communicate with other sites. The services are:
The following sections describe the service and the associated performance impact to a Microsoft BackOffice solution architecture.
The Systems Management Server Site Hierarchy Manager service monitors the site database for changes to the configuration of that site or its direct secondary sites. If the Systems Management Server Site Hierarchy Manager detects a change in a site's proposed configuration, it creates a site control file (which contains all proposed configurations for a site) and sends it to the site. The Systems Management Server Site Hierarchy Manager exists only on primary sites.
Performance Impact: The Systems Management Server Site Hierarchy Manager initiates transactions against the Systems Management Server SQL Server database, as well as initiating requests of other Systems Management Server services, which results in network and disk I/O. Since this service monitors site configurations (new or existing), I/O is typically minimal.
The Systems Management Server Site Configuration Manager watches for site control files created by the Systems Management Server Site Hierarchy Manager. If the site control file contains changes to a site's configuration, the Systems Management Server Site Configuration Manager makes the changes. The Systems Management Server Site Configuration Manager exists on both primary and secondary sites.
Performance impact: The Systems Management Server Site Configuration Manager can initiate substantial disk I/O on a site server on which configuration changes are to be made. In addition, substantial network activity is generated as the Systems Management Server Site Configuration Manager performs its site monitoring functions. These monitoring functions can be tuned via interval values associated with this service. In addition, this service can initiate jobs with other Systems Management Server services that result in further network I/O.
The Systems Management Server Executive service functions as a master controller of the following Systems Management Server components. For a detailed description of these Systems Management Server Executive service components see the Systems Management Server System Reference, Appendix A.
Performance impact: All of these Systems Management Server Executive service components can initiate substantial Systems Management Server server disk and network I/O and/or database transactions.
Due to potential resource limitations on a single Microsoft BackOffice hardware platform, the following Systems Management Server Executive service components may be moved to Systems Management Server Helper servers. Accordingly, work load may be balanced across several Microsoft BackOffice hardware platforms.
The Package Command Manager (PCM) service is installed on all servers running Windows NT Server at a Systems Management Server site. This service provides unattended package installation.
Performance impact: The PCM service is a polling-type service that, by default, polls every minute. The polling interval value may be changed. This service generates I/O via the execution of package jobs. Upon completion of such jobs, initiation of other service component processes occurs, thereby resulting in further I/O.
The Bootstrap service is used to set up the site server for a secondary site.
Performance impact: This service is essentially temporary in that it is removed after completing the secondary site setup. The service initiates package decompression, creates the directory structure, performs file maintenance, and starts the Site Configuration Manager service, which then removes the Bootstrap service. If this occurs on a new dedicated Systems Management Server secondary site server, there is little performance impact to other Microsoft BackOffice applications. Otherwise, this activity will result in some performance degradation until complete.
The Inventory Agent service performs inventory of Systems Management Server components on servers.
Performance impact: This service functions on a 24-hour interval, at which time by default it scans the hardware and software inventories of the associated server. This activity results in some CPU utilization as well as file creation. This in turn initiates a process whereby the inventory information is sent to the Systems Management Server central site for storage in the database. Since this occurs on a timed interval, performance impact can be controlled via the scan and service interval values.
The SNA Receiver processes information sent from remote SNA Sender sites.
Performance impact: Aside from the overhead associated with the service, the performance impact is negligible.
Based upon this performance impact information, it is clear that Systems Management Server is indeed a resource-hungry Microsoft BackOffice application. Hence, Systems Management Server as a component of a Microsoft BackOffice solution will require significant system resources and, depending upon workloads, may not coexist well with other Microsoft BackOffice components. This is especially true of Microsoft BackOffice components, such as Microsoft SQL Server, that require dedicated system resources for efficient and optimal operation.
Systems Management Server sites interact with Microsoft SQL Server in order to support all the major Systems Management Server features. As the use of Systems Management Server is expanded, the Systems Management Server site server requires increasing amounts of system resources. If the Systems Management Server SQL database is placed on another system running Windows NT Server and Microsoft SQL Server, the site server retains more of the available system resources. The added overhead of maintaining a network session with a remote SQL Server is substantially lower than the cost of running SQL Server on the site server itself, provided the amount of data being transferred is not great and the SQL Server is not located across many network links.
If you decide to initially run Systems Management Server and Microsoft SQL Server on the same Microsoft BackOffice platform, you may later change the Systems Management Server server to use a different SQL Server system. This may be accomplished via the use of SQL Server database administration utilities to move the Systems Management Server database and by changing the location of the SQL Server with the Systems Management Server setup application.
The determination of when you should move to a remote SQL Server system can only be made through the monitoring of system resources and performance via Windows NT Performance Monitor and/or other monitoring mechanisms.
For more information on this subject see the previous section in this paper entitled "Microsoft SQL Server Performance Profile."
The following optimization guidelines will aid in the optimization of Systems Management Server as part of a Microsoft BackOffice solution. Optimal application of these guidelines is unique to each environment. Thus, you may wish to experiment with different configurations and values in order to arrive at the best combination of settings for your particular Microsoft BackOffice environment.
As polling intervals are shortened, Systems Management Server places greater stress on the Microsoft BackOffice server. As interval values increase, information flow throughout the Systems Management Server environment is slowed and therefore may not be as timely.
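This tradeoff reduces to simple arithmetic. The sketch below is purely illustrative; the interval values shown are hypothetical examples, not actual Systems Management Server defaults:

```python
# Illustrative model of the polling-interval tradeoff: shorter intervals
# multiply daily polling work, while longer intervals delay how quickly
# a change propagates. (Hypothetical figures, not SMS defaults.)

def polling_tradeoff(interval_minutes):
    """Return (polls per day, worst-case propagation delay in minutes)."""
    polls_per_day = (24 * 60) // interval_minutes
    worst_case_delay = interval_minutes  # a change may just miss a poll
    return polls_per_day, worst_case_delay

for interval in (1, 5, 60):
    polls, delay = polling_tradeoff(interval)
    print(f"{interval:>3}-minute interval: {polls:>5} polls/day, "
          f"up to {delay} minutes before a change is noticed")
```

A one-minute interval costs 1,440 polls per day against the server; a one-hour interval costs only 24 but can leave information up to an hour stale.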
The Systems Management Server administrator receives no visual feedback that the Systems Management Server feature has been disabled, nor any indication that future changes to the response controls in the service control dialog will have no effect.
These settings can be changed by modifying values under <SMS Root>\Components\<Sender name>.
For the LAN sender, maximum concurrent sendings controls how many threads will be allocated at one time to service send request files.
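The effect of a cap like maximum concurrent sendings can be sketched with a generic worker pool. This is an illustration of the threading concept only, not SMS code; the request file names and the transfer stub are hypothetical:

```python
# Generic illustration of a "maximum concurrent sendings" style cap:
# a fixed-size thread pool services queued send-request files, so no
# more than MAX_CONCURRENT_SENDINGS transfers are in flight at once.
from concurrent.futures import ThreadPoolExecutor
import threading
import time

MAX_CONCURRENT_SENDINGS = 3   # analogous to the LAN sender setting

in_flight = 0
peak = 0
lock = threading.Lock()

def service_send_request(name):
    """Process one send request, tracking how many run concurrently."""
    global in_flight, peak
    with lock:
        in_flight += 1
        peak = max(peak, in_flight)
    time.sleep(0.01)          # stand-in for the actual file transfer
    with lock:
        in_flight -= 1
    return name

requests = [f"send_request_{i}.srq" for i in range(10)]  # hypothetical names
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_SENDINGS) as pool:
    done = list(pool.map(service_send_request, requests))

print(len(done), "requests serviced; peak concurrency:", peak)
```

Raising the cap lets more requests proceed in parallel at the cost of more server resources; the pool guarantees the peak never exceeds the configured maximum.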
Microsoft SNA Server emulates IBM SNA devices, allowing client software applications to emulate IBM terminals and printers, to transfer files to and from the IBM host, or to contact other software applications running within the SNA network.
SNA Server provides access into IBM mainframe environments by emulating the function of a traditional IBM cluster controller (PU2), and the various types of devices (LUs) that a PU2 can support. Hence, SNA Server is the Microsoft BackOffice component that provides for communication between the mainframe and client-server worlds.
With respect to Microsoft BackOffice component interaction and performance, SNA Server is relatively benign as far as its consumption of systems resources is concerned. However, SNA Server does require a basic level of system resource support and as such, the following information will paint a performance picture that will aid in the design and deployment of optimal Microsoft BackOffice architectures that include SNA Server as a component.
The basic installed SNA Server components are:
Of these components, the SnaBase service is the fundamental component of the SNA Server network. On the Microsoft BackOffice server, the purpose of SnaBase is to coordinate the SNA servers in the network according to their designated roles. In addition, SnaBase builds a list of available services in the domain and broadcasts the list of services provided by each server.
The SnaServer service component is the physical unit (PU) 2.1 node within an SNA Server. The SnaServer service interacts with all clients and other nodes on the SNA network.
Link Services are the software components of SNA Server that communicate with the device driver for a particular communication adapter (DLC 802.2, SDLC, X.25, DFT, Channel, or Twinax). Link services define the protocol used between the SNA Server software and the communications adapters in a computer.
Based on these brief descriptions of the primary SNA Server components, it is obvious that SNA Server, unlike other Microsoft BackOffice components (Systems Management Server and Microsoft SQL Server), primarily requires "kernel" level resources. In other words, SNA Server does not really compete with the other Microsoft BackOffice applications, as it is not considered to be a "disk bound" application. Rather, it is more reliant upon the availability of processor (CPU) and memory system resources. Hence, it competes for resources with Windows NT Server processes such as RAS, IP routing, and so on. In fact, SNA Server is architected to take full advantage of Windows NT multithreading and asynchronous completion, in order to scale gracefully as system processors are added to a Microsoft BackOffice hardware platform.
In light of this information, optimizing a Microsoft BackOffice platform for SNA Server is relatively easy.
# Users     Amount of RAM     Processor
<= 500      32 MB             66 MHz 486
<= 2000     64 MB             90 MHz Pentium
The Microsoft Mail Server and Schedule+ applications may be constituents of your Microsoft BackOffice architecture. Hence, we address them in this paper as applications that can greatly influence performance.
To begin with, these applications are both "disk bound" from a performance perspective, as they make use of the Mail Server Post Office facility, which is based on the Shared File System architecture. In addition, these applications tend to generate an appreciable amount of disk and network I/O when under heavy access. Therefore, the following information will aid in the understanding of the performance characteristics associated with these applications.
With Microsoft Mail, users connected to a network can exchange messages, files, and programs electronically and efficiently with one another. These users have access to a collective mail-drop facility called a Postoffice (PO). This PO functions on a Shared File System (SFS) basis and resides on a file server where many users and processes can access the PO simultaneously, performing many different operations.
The Message Transfer Agent (MTA) for Microsoft Mail is a program called EXTERNAL.EXE. The MTA has two main purposes: transferring mail between two or more Microsoft Mail POs and providing connectivity to remote mail users. There is both an MSDOS® and an OS/2 version of the MTA. The OS/2 version is called the Multitasking MTA (MMTA). A Windows NT version of the MMTA is also in development and will soon be in beta. The MMTA uses OS/2 (and soon, Windows NT) to extend the capabilities of Microsoft Mail to multiple External instances, Dispatch instances (a function of Directory Synchronization explained later), and SchDist instances (a function of Schedule Distribution for Microsoft Schedule+ explained later) on a single machine. The MMTA may also be configured as a modem pool supporting many remote clients from a central hub. Additional External MTAs/MMTAs may be added as required to increase performance as the network grows and provide greater remote access support.
Directory Synchronization (DirSync) is the automatic, fault-tolerant process of keeping a Global Address List (GAL), which contains all mail addresses defined on the network, available to all users on the Microsoft Mail network. This GAL is used by both the Microsoft Mail and Microsoft Schedule+ systems. DirSync is performed by an application called Dispatch. DISPATCH.EXE is included in the base Microsoft Mail Server. The directory synchronization architecture consists of a DirSync Server (DSS) postoffice and DirSync Requester (DSR) postoffice(s). There is only one DSS for synchronizing directories in an organization; all other postoffices participating in DirSync, including the DSS postoffice itself, are defined as DSRs.
Make use of the MTA's wide-area network (WAN) configuration option. On all physical links slower than typical LAN speeds, MTA/MMTAs should be located on both sides of the WAN link and should be configured to use the WAN option. To explain how this improves performance, a discussion on the tasks an MTA/MMTA must perform is in order. When an MTA/MMTA services a postoffice, it has three tasks to perform:
By using the WAN option, the MTA/MMTA must only perform the second task across the WAN connection and then the first and third tasks are performed by the MTA/MMTA on the local side of the link, which means they are performed at LAN speeds instead of WAN speeds. This not only improves performance, it also reduces the possibility of mail database corruption since files are kept open for shorter periods of time.
Group people that communicate most often on the same PO. Since the mail clients work directly with the PO to deliver mail to other users on the same PO, this eliminates the need for the MTA process to get involved to send those messages. Reducing the number of messages an MTA must route will increase your system throughput.
Limit the number of active users on a PO. One of the most important directories on the SFS PO is the global (GLB) directory. This directory contains all of the system and configuration files used by all of the Microsoft Mail clients and processes. A few files in that directory are accessed very frequently. Even though these files are very small, because of the number of times they are accessed, they can become the bottleneck for the PO. A PO has a hard limit of 500 users, but a more realistic number would be between 200-300 LAN users because of the number of file accesses that must occur. One could have more users than this recommendation if some of these users are remote mail users and aren't connected to the PO continuously. To size a PO, one must take into consideration the performance of the network operating system for file access, the physical connectivity on the network (such as 2-Mbps wireless LAN connections compared to 10-Mbps Ethernet compared to 100-Mbps FDDI), and the speed of the disk access where the PO resides.
Consolidate POs. After just stating that it is important to limit the number of active users on a PO, it should also be stated that one shouldn't go overboard and allocate only 50 users per PO as some companies have done. Again, a good rule of thumb is to try to stay between 200-300 users per PO. If a company has a lot of smaller POs, then consolidation of POs should be a consideration. Consolidating POs does the following:
Connect users to a local PO. Due to the SFS architecture of the PO and the amount of file I/O that occurs between a client and the PO, it is always better for a client to connect to a PO on the LAN rather than across the WAN. Even if an office only has 50, 20, or even 10 users, it should have its own PO. This is one of the few times when having a PO with less than 200 users on it is justified.
Plan for 10-15 MB per user for mail storage on the PO. It is important to size the PO correctly. Where mail is used regularly as a part of normal business operations, experience has shown that users may consume up to 10-15 MB of disk for mail storage. Also, it has been found that limiting individual user message stores (MMF files) to around 10-15 MB reduces the likelihood of corruption. Should corruption occur, a file kept to around 10-15 MB has a greater chance of being restored intact. Users should be taught how to archive folders and perform simple backups of the MMF files through the mail menu so that they can make sure they keep their MMF files in the 10-15 MB range or lower. In most cases, customers average between 5-10 MB per user, but planning for 10-15 MB will provide a buffer for heavy use.
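The sizing guidance above reduces to simple arithmetic. A quick sketch, using the paper's own 10-15 MB per-user figures (the 250-user example postoffice is hypothetical):

```python
# Rough postoffice disk sizing from the per-user mail storage figures above.

def po_disk_estimate_mb(users, per_user_mb=15):
    """Plan for the high end (15 MB/user) to leave a buffer for heavy use."""
    return users * per_user_mb

# A hypothetical 250-user postoffice, sized at the recommended ceiling:
print(po_disk_estimate_mb(250), "MB")
```

At 250 users and 15 MB each, the PO needs roughly 3,750 MB of mail storage, before accounting for the PO's own system directories.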
Use Hub POs to minimize routing requirements and ease administration. A Hub PO is a dedicated mail routing PO. This PO should be used at strategic points where there are heavy mail routing needs. Basically, a Hub PO is exactly the same as a user PO except that there are no users assigned to this PO and the file server on which it runs does not have any other responsibilities such as file and print sharing. Its sole purpose is to shuttle mail back and forth between other POs. By isolating the Hub POs from the user POs, mail routing throughput is increased and administration of mail routing is centralized. It improves throughput because the user POs only need to have a single direct connection defined back to a Hub PO. All other external user POs are then defined as indirect via the Hub PO. This means that the user PO only has to keep up with one queue for its one direct connection.
Limit the number of modems per MMTA to handle Remote Mail Users. Asynchronous connections in the MMTA generate lots of interrupts and experience has shown that a 486 CPU with 16 MB RAM running OS/2 1.3 cannot handle more than 4 simultaneously connected asynchronous sessions. A Pentium processor with 16 MB of memory running under OS/2 1.3 may be able to handle 5 modem connections. Limiting your modem connections will allow the CPU to respond more efficiently to the modem sessions. This is really driven by memory and the version of OS/2 utilized on the MMTA. Since OS/2 2.x can work effectively with more than 16 MB of RAM, some customers have successfully used 8 modems on OS/2 2.1 systems with 20-24 MB of RAM. Utilizing intelligent communication boards with high-speed UARTs and buffers will also increase the number of modems that a single MMTA machine can support.
Utilize the MailerDisable and DispatchDisable EXTERNAL.INI parameters in asynchronous instances. EXTERNAL.EXE really has only two functions for mail processing: dispatch and mailer. The dispatch function checks external postoffice mailbags to see if any mail needs to go out. It then builds the P1 headers on that postoffice. External will use the P1 directory for outbound message transfer in each postoffice database it is processing. Once all the P1s are built, External will set up the connection to the destination postoffice. The P1 files only include message header information, not actual messages or attachment data. It writes all the P1 data to the INQUEUE3 file and writes all the message text and attachments to the \MAI and \ATT directories respectively. This ends the dispatch function. The mailer function reads the INQUEUE3 (INQUEUE3 handles all Microsoft Mail 3.x messages; INQUEUE handles mail from Mac, 2.x, OS/2 Presentation Manager, and MSDOS-based clients) and updates the users' MBG files with pointers to the appropriate MAI and ATT files. One can disable one or both of these functions on an External using the MailerDisable or DispatchDisable INI commands. This is helpful when you have several MTA or MMTA sessions and you want to tune them. For example, one may want to disable both of these functions on all MMTA sessions handling dial-in for remote users. This provides for the greatest availability and performance for remote users. Before doing this, make sure there are other sessions running on the MMTA system that are able to do all the dispatch and mailer functions.
For large networks, dedicate a PO to act as the DirSync server PO. DirSync is divided into three different stages that are called "times," numbered T1 through T3. T2 is the DirSync server's "process updates" time. This is when the server takes all the updates sent in by the requestor POs, adds them to the master transaction list, and sends GAL updates out to each requestor PO. The DirSync T2 machine should run on its own PO, preferably SUBSTituted to a local hard drive to minimize network traffic. This local hard drive should be large and FAST. Using a separate PO and machine from the other POs minimizes impact to mail throughput due to problems with DirSync. Having the PO local can speed up DirSync by a factor of 10 (which means T2 can run in 30 minutes compared to 5 hours over the net). This can be a significant benefit when servicing distributed POs. This machine does not have to be a file server. Rather, it can copy the PO down to the local drive, run SUBST M: C:\MAILDATA, perform T2 for DirSync, copy the files back up to the server, and finish much faster than running the whole process over the network. Likewise, the local DirSync machine should be a separate machine from the local External PC. That way, if DirSync fails for any reason, at least External will continue to run to service mail customers. When DirSync returns on-line, the GAL will be usable without turning off External.
Limit the Number of Network Names. Microsoft Mail uses a three-layered naming convention consisting of Network/Postoffice/Mailbox. Even though the DirSync process will work with any combination, having a uniform network name does speed processing. The reason for this is that DirSync, in order to speed user search times, reindexes the GAL every time changes are made to the GAL. This indexing must occur across all three layers of the Microsoft Mail naming convention. By using only a couple of Network names, DirSync has less indexing to do across the Network name and really only has to index across the Postoffice and Mailbox parts of the Microsoft Mail addresses.
Use more, high-performance hardware to increase performance. Many accounts use 286 and 386 class machines as their MTA/MMTAs. Depending on other factors such as link speeds, faster hardware (such as 486/33 or greater class machines) could significantly increase the throughput of mail. The cost effectiveness of using faster MTA/MMTAs to improve mail performance should not be underestimated. An advantage of using high-performance MTA/MMTAs is that they can be applied to the system in a highly tactical manner. Bottlenecks on only a small subset of the total number of links can have a big effect on the overall system performance. Increasing throughput on key links can have a substantial impact on overall system performance. When trying to determine where to apply high-performance MTA/MMTAs consider the following:
Apply faster hardware selectively on links where traffic volume is greatest. By selectively applying faster machines on links that move a lot of mail, one should be able to better balance the loads across the system.
Faster hardware should have a greater impact on MTA/MMTAs that are not link constrained. For example, on 10-Mbps links, the MTA/MMTAs are not link constrained to the degree that a WAN segment/link imposes. At 2 Mbps they are marginally link constrained. Therefore, the faster the link, the greater the marginal benefit of using fast hardware.
If used in conjunction with the WAN option, faster hardware has a greater impact. This is because when the MTA/MMTAs perform more delivery work on unconstrained links, the impact is relatively negligible on total link load and network traffic. The WAN option disables the delivery portion of MTA/MMTA processing across the WAN, which minimizes link impact due specifically to mail processing traffic. Remember, a local MTA must be running on both sides of the link in order for this to work, or mail will not be distributed at the remote postoffice.
The use of fewer high-performance MTA/MMTAs is more efficient than the use of multiple low-performance MTA/MMTAs. Whether configured with a single MTA/MMTA polling remote postoffices, or with the WAN option, the marginal increase in throughput is much greater if a single fast machine is used as compared to multiple slower machines.
Match the Mail network configuration to the physical network. To ensure efficient WAN use, there must be a close correlation between the location of Hub POs (the mail backbone), the MTA/MMTAs, and physical network devices. When adjusting the mail system configuration, carefully consider the characteristics and locations of the various physical links. It is often difficult to determine the optimal location for a particular postoffice or device. The following are a few guidelines.
Distribute the processing for Directory Synchronization. As explained earlier, there are three stages, called "times" numbered T1 through T3, that make up the entire DirSync process. Times T1 and T3 run against all requestor POs, and time T2 only runs against the DirSync Server PO. By distributing the processing of DirSync, one increases performance of the DirSync process by having multiple machines execute the T1 and T3 cycles against the different requestor POs. Just as one distributes processing for the MTA/MMTA function of Microsoft Mail, one should also distribute the processing for DirSync, especially when the connectivity involves WAN links. At all sites where there is at least one MTA/MMTA process running, there should also be at least one DirSync (Dispatch) process running. This can be put on the same physical box as the MTA/MMTA when performance requirements allow it. One knows it is time to add more machines running the DirSync process when the process takes too long to complete for the company's needs. For example, let's say a company schedules DirSync to run overnight when the mail network is not heavily utilized and desires it to be complete before the next morning when activity picks up again. The company sets the schedule so that T1 starts at 7:00pm, T2 at 11:00pm, and T3 at 3:00am every Monday, Wednesday, and Friday. The company wants the process to be completed by 7:00am the following day as that is when the mail activity increases. If the company determines that the T3 process is not being completed against all POs by 7:00am, then the company should consider increasing the number of Dispatch machines so that each has fewer POs to service.
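The decision at the end of that example, whether to add Dispatch machines, can be checked with simple arithmetic. In the sketch below, the per-postoffice service time is a hypothetical figure you would measure in your own environment:

```python
import math

def t3_finishes_on_time(num_pos, minutes_per_po, dispatch_machines,
                        window_minutes=240):
    """True if T3 against all requestor POs fits in the window
    (for example, a 3:00am start and 7:00am deadline = 240 minutes),
    assuming POs are divided evenly among the Dispatch machines."""
    pos_per_machine = math.ceil(num_pos / dispatch_machines)
    return pos_per_machine * minutes_per_po <= window_minutes

# Hypothetical: 60 requestor POs at a measured 10 minutes each.
# One machine needs 600 minutes and misses the 7:00am deadline;
# three machines finish in 200 minutes with time to spare.
print(t3_finishes_on_time(60, 10, 1))   # False
print(t3_finishes_on_time(60, 10, 3))   # True
```

The same check applies to T1 against its own window; the T2 stage runs only on the DirSync Server PO and is sized separately.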
Microsoft Schedule+ is a personal and workgroup scheduling tool that helps keep track of a user's appointments and tasks, block out time for meetings, and record notes. It can also be used to remind users of important appointments or events. In a workgroup environment, Microsoft Schedule+ allows users to view other users' schedules, find an appropriate time to meet, and then book the meeting in one step, without having to contact all of the participants individually. With Microsoft Schedule+, a user can designate another user as his or her assistant, who can then schedule appointments and set up meetings on behalf of the user. Microsoft Schedule+ is tightly integrated with Microsoft Mail in that it uses the same user name and password to logon, stores the calendar files on the Microsoft Mail PO, uses Microsoft Mail to send meeting requests and users' free and busy times, and uses the Microsoft Mail GAL for workgroup scheduling.
The client keeps a copy of the user's calendar file on the local workstation as well as on the Microsoft Mail postoffice.
The SchDist Program is the process that sends users' free/busy times, assistants' names, and resource information between POs at regular intervals. Like the Microsoft Mail External and Dispatch programs, the SchDist program should be left running 24 hours a day. SchDist uses the existing Microsoft Mail External program to route the free/busy message packets between POs.
Schedule Distribution is the process by which an administrator can send snapshots of users' Appointment Books to other postoffices so users on that other postoffice can view the data. These snapshots (actually Schedule Distribution mail messages) contain the following information:
In all cases, the above information is only sent if a change occurs. For example, if there are 150 users on a postoffice, but only 10 users have modified their schedules since the last schedule distribution message was sent, only 10 new sets of free/busy bits will be sent in the next schedule distribution message.
The Schedule Distribution information is kept in the CAL directory of the receiving postoffice. On any postoffice, there is one Postoffice file (POF) for each postoffice that sends it Schedule Distribution messages.
Schedule Distribution is configured through the Schedule+ Administration Program. The actual work of distributing schedule information is done when the Schedule Distribution Program is run.
When Schedule Distribution is used, users can view free/busy information for users on another postoffice without requiring them to have physical network access or LAN access privileges to the other postoffice. Once the user has viewed the free/busy time of users on other postoffices, he or she can send a meeting request, which the recipients can then act on.
The network traffic due to schedule distribution is predictable, as explained below. When scheduling meetings with users on other postoffices, Schedule Distribution allows Schedule+ to quickly find out if users on other postoffices have assistants or are resources. Schedule+ needs to know this information to decide where to send meeting requests. Schedule Distribution works across different LANs by using the Mail system to route information. Schedule distribution can cause a large amount of consistent network traffic.
This process does not allow users on one postoffice to see anything more than free/busy times of users on other postoffices. Schedule Distribution alone cannot enable a user on one postoffice to view appointment details, modify Appointment Books, or act as an assistant for a user on another postoffice. Additionally, Schedule Distribution alone does not let users on one postoffice automatically book resources on another postoffice. Schedule Distribution requires the Schedule Distribution program to be run, either manually or continuously, on a dedicated machine or with other processes using the DISPATCH.EXE program.
The size of each schedule distribution message is determined by:
1. The number of Schedule+ users on the postoffice.
2. The number of schedule changes the average user makes each day.
3. The frequency with which schedule distribution messages are sent.
4. The number of months of data sent via schedule distribution.
Each Schedule Distribution message contains the following data:
Information                                                      Size
Message header                                                   100 bytes
Assistant/resource information for each user whose
  free/busy information has changed                              25 bytes
One month of free/busy information for one user                  20 bytes
For example, if an administrator chooses to distribute three months of schedule data and 10 users on the postoffice have changed their appointments since the last schedule distribution, the schedule distribution message will be 100 + (10 x [25 + (3 x 20)]) = 950 bytes. Of course, these numbers will vary depending on the frequency of Schedule Distribution, the number of months of data propagated, and the frequency of schedule activity per user on the postoffice.
As can be seen, each Schedule Distribution message is very small. On a large network, however, these messages can really add up. On a Mail network with 100 postoffices all participating in Schedule Distribution, for instance, each postoffice could send up to 99 Schedule Distribution messages per "round" of Schedule Distribution. If every postoffice sends to every other, this means 9900 messages are being sent during every round of schedule distribution.
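The arithmetic above generalizes directly. The sketch below applies the byte counts from the table and reproduces both worked examples (the 950-byte message and the 9,900-message round):

```python
# Schedule Distribution sizing, from the byte counts given above.
HEADER_BYTES = 100          # message header
PER_USER_INFO_BYTES = 25    # assistant/resource info per changed user
PER_USER_MONTH_BYTES = 20   # one month of free/busy data for one user

def distribution_message_bytes(changed_users, months):
    """Size of one Schedule Distribution message."""
    return HEADER_BYTES + changed_users * (PER_USER_INFO_BYTES
                                           + months * PER_USER_MONTH_BYTES)

def messages_per_round(postoffices):
    """Every PO sends to every other PO once per round."""
    return postoffices * (postoffices - 1)

print(distribution_message_bytes(10, 3))   # 950 bytes, as in the example
print(messages_per_round(100))             # 9900 messages per round
```

Such a model makes it easy to estimate aggregate traffic per round before deciding how frequently to run Schedule Distribution, as the guidelines below recommend.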
Don't over-send free/busy information. If you don't need to have each other's free/busy information updated every 10 minutes, then don't set up SchDist to send this information so often. Often, users' free/busy information may not change very often, and an organization just needs an idea of when a user might be free and busy. Find a reasonable interval for your organization to distribute this information, recognizing that decreasing the interval to give users more up-to-the-minute information increases the messages that must be processed. Also, only send free/busy information where it makes sense. In other words, if users on two POs don't schedule meetings with each other very often (if at all), then don't set up SchDist between them.
Don't run SchDist across time zones. Since Schedule+ doesn't work across time zones, there is no need to run Schedule Distribution across them. In fact, doing so will cause more confusion for the users.
Use Dynamic PO Connections very sparingly. Dynamic PO Connection establishes actual network connections between POs, instead of sending free/busy message packets. Therefore, it requires a LAN connection between the two POs and may cause network traffic to increase because every Schedule+ user could connect to multiple POs. Limiting the number of possible dynamic PO connections will help limit the amount of network traffic. A common practice is to allow everyone to be able to dynamically connect to the PO that contains the conference rooms and other resources, but that is all.
When using Dynamic PO Connections, still run SchDist between the two POs. Most users will not need to dynamically connect to other POs to see detailed, up-to-the-second information of other calendars. Rather, free and busy information as recent as the last SchDist cycle is enough. Running SchDist at least once a day will make the Schedule+ system run more efficiently by reducing network traffic.
With performance profile knowledge of the Microsoft BackOffice components in hand, and a Microsoft BackOffice solution architecture design developed, it is appropriate to consider the optimal hardware platform based on this information. The goal of this section is to provide information that will help you determine the best possible hardware configuration for your Microsoft BackOffice environment. Characteristics of the Windows NT operating system will also be discussed from a hardware performance point of view.
Hardware planning as it relates to Microsoft BackOffice is primarily concerned with the following critical system resources: processor (CPU), memory, disk subsystem, and network interface.
These resources make up the bulk of all relevant hardware platforms on which the Microsoft BackOffice components operate. Hence, this paper will address planning considerations that are generic to all platforms and useful for scaling and implementing optimal Microsoft BackOffice solutions.
As stated earlier in this paper, the level of CPU-bound work is the key to determining the best CPU architecture. We have seen that all of the Microsoft BackOffice components require CPU resources to some extent, and all will take advantage of SMP architectures. So, is there a set of factors or a particular application that points the way to this resource decision? Unfortunately not. Therefore, it is recommended that you acquire the best CPU architecture you can justify for your Microsoft BackOffice project. From a scalability point of view, when you consider all the potential combinations of Microsoft BackOffice components, it is best to choose a system that supports SMP, even if you start with a single processor. Moreover, if your Microsoft BackOffice solution places more than one BackOffice component on a single platform, obtain an SMP system with at least two processors, scalable to at least four, if you think growth of the system justifies the expandability.
All Microsoft BackOffice components require minimum memory configurations. This memory is managed by the Windows NT Virtual Memory Manager (VMM) and is used extensively by the Microsoft BackOffice components, mostly for caching data and code pages in order to avoid system page faults and subsequent disk I/O. Therefore, it is recommended that the amount of memory be proportional to the Microsoft BackOffice components running on the selected hardware platform. For example, if the platform includes Windows NT Server + Microsoft SQL Server + Systems Management Server, then at minimum add the required memory for each component: 16 MB (Windows NT Server) + 16 MB (SQL Server) + 16 MB (Systems Management Server) = 48 MB. This is your starting point for building an optimal Microsoft BackOffice system. You must then tune the memory under actual operating conditions to arrive at the best configuration.
The goal here is to achieve the fastest, most efficient disk I/O operations possible. All Microsoft BackOffice components except SNA Server are "disk bound." Thus, it is imperative that you configure the Microsoft BackOffice platform with the best disk subsystem you can justify. At minimum this equates to an intelligent SCSI controller and fast SCSI drives with read-ahead (at least 1 track) caching. Optimally for Microsoft BackOffice systems consisting of several components, the configuration will be:
Having procured a high-performance disk subsystem, the task is then to appropriately map the Microsoft BackOffice application environments to the disk subsystem based on seek efficiencies.
Without exception, all the Microsoft BackOffice applications initiate some level of network activity, which in turn generates network interrupts. These interrupts may then need to be serviced by the system processor, depriving some other Microsoft BackOffice process of the CPU resource. It is therefore to your benefit to obtain one or more intelligent network interface cards to mitigate this effect. Such a NIC will match the bus configuration of your system and should possess an on-board controller (bus master) and possibly on-board cache. This NIC configuration will reduce the level of interrupts and help increase network throughput, thereby reducing latency.
Having designed an optimal Microsoft BackOffice solution architecture based on the knowledge of the performance characteristics of each Microsoft BackOffice component, you are now ready to tune the system at an even finer level of detail. The tuning process at this stage is an exercise in what may be referred to as "traditional" Windows NT performance tuning, and it is beyond the scope of this paper to address that process in detail. However, there is an excellent text by Russ Blake, Optimizing Windows NT, Microsoft Windows NT Resource Kit, Volume 3, which discusses in great detail how to further optimize your Windows NT environment. You may also refer to another excellent tuning source, Tech Ed 95 seminar NT301, presented by Scott Suhy, which covers critical aspects of the Windows NT tuning process.
For more information concerning optimization and performance tuning for Microsoft Windows NT and other Microsoft BackOffice components, please refer to the following sources.
Tech Ed 95, NT301, Optimizing and Tuning Windows NT, Scott Suhy, Microsoft Corporation
Tech Ed 95, SQ301, Optimizing and Tuning SQL Server, Tony Scott, Microsoft Corporation
Tech Ed 95, SMS302, Systems Management Server Optimization and Tuning, Dwain Kinghorn, Computing Edge Corporation
Transact-SQL Reference for Microsoft SQL Server for Windows NT, Microsoft Corporation
Administrator's Guide for Microsoft SQL Server for Windows NT, Microsoft Corporation
Troubleshooting Guide for Microsoft SQL Server for Windows NT, Microsoft Corporation
Configuration Guide for Microsoft SQL Server for Windows NT, Microsoft Corporation
Administrator's Guide for Microsoft Systems Management Server for Windows NT, Microsoft Corporation
Administration Guide for Microsoft SNA Server for Windows NT, Microsoft Corporation
Installation Guide for Microsoft SNA Server for Windows NT, Microsoft Corporation
Microsoft Developer Network Development Library, Microsoft Corporation
Microsoft TechNet-Technical Information Network, Microsoft Corporation
Optimizing Windows NT, Windows NT Resource Kit, Volume 3, Russ Blake, Microsoft Press®
Inside Windows NT, Helen Custer, Microsoft Press
© 1995 Microsoft Corporation. All rights reserved.
THESE MATERIALS ARE PROVIDED "AS-IS," FOR INFORMATIONAL
PURPOSES ONLY.
NEITHER MICROSOFT NOR ITS SUPPLIERS MAKE ANY WARRANTY, EXPRESS
OR IMPLIED, WITH RESPECT TO THE CONTENT OF THESE MATERIALS OR
THE ACCURACY OF ANY INFORMATION CONTAINED HEREIN, INCLUDING, WITHOUT
LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. BECAUSE SOME STATES/JURISDICTIONS DO
NOT ALLOW EXCLUSIONS OF IMPLIED WARRANTIES, THE ABOVE LIMITATION
MAY NOT APPLY TO YOU.
NEITHER MICROSOFT NOR ITS SUPPLIERS SHALL HAVE ANY LIABILITY FOR
ANY DAMAGES WHATSOEVER, INCLUDING CONSEQUENTIAL, INCIDENTAL, DIRECT,
INDIRECT, SPECIAL, AND LOSS OF PROFITS. BECAUSE SOME STATES/JURISDICTIONS
DO NOT ALLOW THE EXCLUSION OF CONSEQUENTIAL OR INCIDENTAL DAMAGES,
THE ABOVE LIMITATION MAY NOT APPLY TO YOU. IN ANY EVENT, MICROSOFT'S
AND ITS SUPPLIERS' ENTIRE LIABILITY IN ANY MANNER ARISING OUT
OF THESE MATERIALS, WHETHER BY TORT, CONTRACT, OR OTHERWISE, SHALL
NOT EXCEED THE SUGGESTED RETAIL PRICE OF THESE MATERIALS.