Server load optimization aims to strengthen server performance. It includes consolidating physical hardware within a data center and employing server virtualization. Because it is key to the performance of every application, it should be given the utmost priority. This applies to multi-server data center environments.

A server consolidation solution is implemented by merging the processing workload of under-utilized nodes onto other nodes. By combining the processes of multiple servers onto a single, efficiently running server, operational resources are greatly saved. In server virtualization, layers are created within a single server that allow it to act as several servers. This technique lets a single server support multiple applications and operating systems while serving a greater number of users.

Individual server performance optimization can be grouped by the operating system of the servers or by their purpose – front-end application or back-end database. Broadly, server load optimization efforts fall into Windows-based servers, Linux-based servers and database servers.


Best tips to optimize Windows server performance

  1. Keep the hard disks defragmented: Modern hard drives perform best on sequential reads; performance drops when the disk has to read data stored in random locations. Keeping the disks defragmented ensures that file blocks are placed in sequential order rather than scattered across the surface of the drive, letting the server read files more efficiently.

  2. Use the NTFS file system: Avoid the FAT and FAT-32 file systems. NTFS is a transaction-based file system and is slightly faster and less prone to corruption than FAT or FAT-32.

  3. Look for memory leaks: Poorly written applications cause memory leaks. An application with a memory leak requests memory when it needs it but fails to release it when it is finished. The next time the application runs, it requests more memory rather than reusing what it already holds. Over time the server is left with less free memory, draining memory away from other applications. A leak has little impact on system load at first, but it becomes noticeable the longer the application runs. Performance Monitor is the best tool for detecting memory leaks; a simple monitoring sketch also follows this list.

  4. Remove seldom-used utilities and disable unused services: A server by default ships with many logging and monitoring utilities. Uninstall utilities that are seldom used on the server; running an application that is not used is a waste of server resources. Check the Service Control Manager and disable any services that are not needed for the role the server performs. Disabling unused services not only increases server performance but also improves the security of the server.

  5. Log off: Log off from a server when the console or display terminal is not actively being used. Staying logged on forces the server to keep the user profile loaded, which consumes memory and CPU cycles.

  6. Compress the hard disk: Compression can improve server performance. When a file is compressed on disk, it takes less time to read it from the disk. If a server runs disk-intensive applications that handle a lot of individual files, compression improves server performance.

Note: A compressed file has to be decompressed after it is read, and this process consumes additional CPU time as well as memory. Compression is therefore suggested only when there are many individual files (not databases) to be processed; otherwise, it is better not to compress the hard disk.
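
As a rough illustration of the memory-leak check above, the sketch below samples a process's resident memory at fixed intervals using the third-party psutil library. The PID, sampling interval and growth threshold are placeholders, and Performance Monitor remains the authoritative tool on Windows.

```python
import time
import psutil  # third-party library for process and system metrics

def watch_rss(pid, samples=6, interval=60):
    """Print the resident set size (RSS) of a process at fixed intervals.

    Steadily growing RSS under a constant workload is a common sign of a
    memory leak; confirm the finding with Performance Monitor before acting.
    """
    proc = psutil.Process(pid)
    readings = []
    for _ in range(samples):
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        readings.append(rss_mb)
        print(f"{proc.name()} (pid {pid}): {rss_mb:.1f} MB resident")
        time.sleep(interval)
    # Arbitrary 50% growth threshold, purely for illustration.
    if readings[-1] > readings[0] * 1.5:
        print("RSS grew by more than 50% during the run - possible memory leak")

# Example usage (hypothetical PID of the suspect application):
# watch_rss(1234)
```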

Best tips to optimize Linux server performance

There are two main factors in optimizing Linux server performance:

  1. Identify resources that are not fully utilized and could be scaled back to save money and maintenance overhead.

  2. Identify resources that need to be increased so that critical apps and services can run faster.

Netuitive is a monitoring tool that collects data such as network performance, CPU and disk metrics from Linux servers running on AWS EC2. When tuning the performance of a Linux server hosted on AWS with Netuitive, consider the following two metrics.

Cpu.total.idle: This key metric reports the percentage of the processor's capacity that is not being used. A near-zero value over a long period indicates the need to figure out what is keeping the CPU constantly busy – either a bug in the software or an under-provisioned server. If the metric stays close to 100% most of the time, it is a safe bet that the server capacity is being underutilized.

Cpu.total.iowait: This metric measures the percentage of time the CPU spends waiting for I/O operations to complete. If it stays high for an unusually long time, there is either a problem with the running processes or an issue with I/O operations on the disk.
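
For a quick local approximation of these two metrics outside of Netuitive, the sketch below samples CPU idle and iowait percentages with the psutil library. The thresholds are illustrative assumptions, not recommendations from the tool.

```python
import psutil  # cross-platform process and system metrics library

# Sample CPU time percentages over a 5-second window.
# On Linux the result includes an 'iowait' field; 'idle' is available everywhere.
cpu = psutil.cpu_times_percent(interval=5)

idle = cpu.idle                       # rough analogue of cpu.total.idle
iowait = getattr(cpu, "iowait", 0.0)  # rough analogue of cpu.total.iowait (Linux only)

if idle < 5:  # illustrative threshold
    print(f"idle={idle:.1f}% - CPU is nearly saturated; look for runaway "
          "processes or consider a larger instance")
elif idle > 90:  # illustrative threshold
    print(f"idle={idle:.1f}% - capacity looks underutilized; a smaller instance may do")

if iowait > 20:  # illustrative threshold
    print(f"iowait={iowait:.1f}% - CPU is waiting on disk; check volume "
          "throughput and slow queries")
```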

The data collected from the Linux server, combined with the AWS EC2 metrics, gives a clear picture of where the bottlenecks lie and which resources are underutilized. This helps identify the exact points that need intervention to optimize server performance.

Best tips to optimize Database (SQL) Server performance

Memory Configuration options:

By default, SQL Server determines how much memory to allocate based on the amount of memory required by the operating system and other applications. As the load on SQL Server changes, the allocated memory needs to change as well.

The following SQL Server configuration options can be used to tune memory usage and optimize performance; a configuration sketch follows the list.

  1. Min server memory: This option ensures that SQL Server does not release memory below the configured minimum once that threshold is reached. Always set it to a reasonable value based on the size and activity of the SQL Server instance, and make sure enough memory remains for the operating system so that overall server performance is not affected.

  2. Max server memory: This option specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. It is useful when several applications run alongside SQL Server, to make sure they all have sufficient memory.

    Note: Min server memory and max server memory should not be set to the same value; doing so fixes the amount of memory allocated to SQL Server. Dynamic memory allocation generally gives the best performance.

  3. Max worker threads: This option specifies the number of threads used to support the users connected to SQL Server. Although the default value of 0 automatically configures the number of worker threads at startup, setting this option to a reasonable value can improve server performance.

  4. Index create memory: This option controls the amount of memory used by sort operations during index creation. Index creation on a production system is normally scheduled as a job and run during off-peak hours, so setting this option to a higher value can speed up index creation.

  5. Min memory per query: This option specifies the minimum amount of memory allocated for the execution of a query. When many queries execute concurrently, increasing this value can improve performance, especially for memory-intensive queries with substantial sort and hash operations. However, on a busy system it is not advisable to set this value too high, because each query has to wait until it has obtained the configured minimum memory.
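
As a minimal sketch of how these options can be set programmatically, the snippet below uses pyodbc to run sp_configure against a local SQL Server instance. The connection string, driver name and all numeric values are placeholder assumptions, not recommendations.

```python
import pyodbc  # ODBC wrapper; the connection details below are placeholders

# Autocommit avoids wrapping sp_configure/RECONFIGURE in an implicit transaction.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# The memory options are "advanced", so expose them to sp_configure first.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")

# Example values only - size them from your own workload and total RAM.
settings = {
    "min server memory (MB)": 2048,    # floor SQL Server will not release below
    "max server memory (MB)": 6144,    # cap that leaves room for the OS and other apps
    "max worker threads": 0,           # 0 = let SQL Server size the thread pool itself
    "index create memory (KB)": 0,     # 0 = dynamic; raise for off-peak index builds
    "min memory per query (KB)": 1024, # default; raise cautiously on sort/hash-heavy loads
}
for option, value in settings.items():
    cur.execute(f"EXEC sp_configure '{option}', {value}; RECONFIGURE;")

# Review the configured and running values.
for row in cur.execute("EXEC sp_configure 'max server memory (MB)';"):
    print(row)
```

Note that the min and max values deliberately differ, in keeping with the note above about leaving SQL Server's memory allocation dynamic.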

I/O Configuration options:

Recovery Interval server configuration: This option can be used to tune I/O usage and improve server performance. It controls when SQL Server issues a checkpoint in each database. By default, SQL Server determines the appropriate time to perform checkpoint operations.

To determine the appropriate setting, monitor disk write activity on the database files with Performance Monitor. Activity that drives disk utilization to 100% hurts server performance. Adjusting this parameter so that the checkpoint process occurs less often can improve overall server performance.
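
A rough way to watch write pressure outside Performance Monitor is sketched below using psutil. The physical-disk name is a placeholder for the volume that holds the database files, and the recovery-interval value in the comment is only an example.

```python
import time
import psutil

# Sample write volume on the disk that holds the database files for one minute.
# "PhysicalDrive1" is a hypothetical device name; list the keys of
# psutil.disk_io_counters(perdisk=True) to find the right one on your server.
DISK = "PhysicalDrive1"

before = psutil.disk_io_counters(perdisk=True).get(DISK)
time.sleep(60)
after = psutil.disk_io_counters(perdisk=True).get(DISK)

if before and after:
    written_mb = (after.write_bytes - before.write_bytes) / (1024 * 1024)
    print(f"Writes on {DISK} in the last minute: {written_mb:.1f} MB")

# If periodic checkpoint bursts are saturating the disk, the interval can be
# lengthened, for example (advanced option; a longer interval means fewer
# checkpoints but longer crash recovery):
#   EXEC sp_configure 'recovery interval (min)', 5; RECONFIGURE;
```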

Some more server load optimization tips

  1. Deploy relays to reduce the load on the server. A relay takes the extra burden off the server for both patch downloads and data uploads. Setting up a relay is simple, and clients can be configured to automatically find the closest relay. The more relays deployed, the better the server performance.

  2. Slow down the client heartbeat. This reduces the number of messages clients regularly send to update their retrieved properties. Lowering this frequency reduces the amount of network traffic generated, at the cost of the timeliness of the retrieved properties. Clients can always dispatch their latest information whenever they receive a refresh ping, regardless of the heartbeat setting; a generic sketch of the trade-off follows this list.

    Note: A heartbeat is a message sent by the client to the server at a regular interval.

  3. Slow down the Fixlet list refresh rate. This reduces how often the data displayed in the console is updated. If many clients are connected simultaneously or the database is very large, lowering this frequency can reduce the load on the server. If the console or display terminal is used by multiple operators at the same time, set the refresh rate to a value higher than the default; this greatly reduces the load on the database. For example, if the default value is 15 seconds, consider changing it to 60-120 seconds or more, depending on the number of console operators.
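
To illustrate the heartbeat trade-off in general terms (this is not the actual client implementation; the endpoint, payload and interval below are hypothetical), a longer interval simply means fewer reports per hour and therefore less network traffic:

```python
import time
import requests  # any HTTP client would do; requests is assumed to be installed

SERVER_URL = "https://mgmt.example.com/heartbeat"  # hypothetical management endpoint
HEARTBEAT_SECONDS = 900  # 15 minutes; longer interval = less traffic but staler data

def send_heartbeat():
    """Post this client's retrieved properties to the management server."""
    payload = {"hostname": "client-01", "free_disk_gb": 42}  # illustrative properties
    try:
        requests.post(SERVER_URL, json=payload, timeout=10)
    except requests.RequestException as exc:
        print(f"heartbeat failed: {exc}")

# Send a heartbeat on a fixed schedule until the process is stopped.
while True:
    send_heartbeat()
    time.sleep(HEARTBEAT_SECONDS)
```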
