Scaling Required Hardware
Commander connects to one or more hypervisors to provide powerful automation and reporting options, with a better user experience for producers and consumers of virtual services. This tight integration with your virtual infrastructure means that the hardware requirements for vCPUs, memory and disk space scale with your virtual infrastructure's rate of occupancy and activity. Use the guidelines in this section to establish a good starting point for your installation, and be prepared to allocate more resources over time as your virtual infrastructure grows.
See also the Snow Globe article Migrating the Commander Application, which covers migration of Commander and its database.
Note: Commander supports high availability (HA) in an active/hot-standby configuration, to support fast failover. These capabilities are experimental, and we're looking for customer feedback in this area. If you want to configure Commander for HA, contact firstname.lastname@example.org.
Each of these factors impacts the load on Commander:
- VM occupancy — the total number of VMs under management
- VM transience — the frequency with which VMs are created and destroyed
- Number of concurrent sign-ins
- Frequency and depth of reporting
- Retention of historical data
That said, it's sometimes difficult to predict resource requirements, so the Hardware requirements table provides Commander deployment tiers based on typical use. You can also contact Support to discuss requirements further if you have questions or a unique configuration.
Commander includes a default Postgres database, but installing against a Microsoft SQL Server database is recommended.
Important! No upgrade path exists that allows you to change database platforms from the default Postgres database to Microsoft SQL Server. If you do need to switch database platforms, Support can assist you with keeping some VM metadata (such as ownership and custom attributes), but other important data may be lost. As detailed in the Hardware requirements table, Postgres should only be used for evaluation purposes or for small environments not expected to experience any growth.
The most important reason for this recommendation is that the default Postgres database is installed on the same disk as the Commander application server, whereas a Microsoft SQL Server database can be either local to the Commander application server or on a remote system. This means that when Commander is using the default Postgres database, disk contention may impact Commander performance.
When using Microsoft SQL on a separate server, it's still important to consider what else is running on the same database server, especially when clustering or other advanced storage solutions are not employed. Avoid sharing database storage with databases for your on-premises cloud accounts (vCenter or Hyper-V), because peaks in activity will impact both systems at the same time, and contention may result.
Caution: If you install Commander and the Microsoft SQL database on the same machine (not recommended), you must add the SQL server as a dependency of the Commander Windows service after Commander is installed. To do so, once Commander is installed, open a command prompt on the Commander server and run the following command:
sc config vlm depend= Tcpip/Afd/MSSQLSERVER
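After setting the dependency, you can confirm that it was applied and then restart the service so it takes effect. A minimal check, assuming the Commander Windows service is named vlm as in the command above:

```shell
:: Query the service configuration; the DEPENDENCIES field should list
:: Tcpip, Afd, and MSSQLSERVER after the change.
sc qc vlm

:: Restart the Commander service so the new dependency takes effect.
sc stop vlm
sc start vlm
```

Note that if your SQL Server runs as a named instance, its service name differs (for example, MSSQL$&lt;instance&gt; rather than MSSQLSERVER), and the depend= value must match that service name.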
Note: The default Cardinality Estimator used for Microsoft SQL Server 2014 and 2016 increases query compile time, which can reduce the Service Portal Dashboard display speed. To increase the display speed of the Service Portal Dashboard, you should change the SQL Server's compatibility level to SQL Server 2012 (110), then restart the Commander service. See View or Change the Compatibility Level of a Database in the Microsoft documentation.
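The compatibility-level change can also be made from sqlcmd rather than through the SQL Server management tools. A hedged sketch, in which the database name Commander and the server placeholder are assumptions — substitute your actual database and server names:

```shell
:: Set the Commander database's compatibility level to SQL Server 2012 (110).
:: "Commander" is a placeholder database name; replace it with yours.
sqlcmd -S <sql_server_host> -Q "ALTER DATABASE [Commander] SET COMPATIBILITY_LEVEL = 110;"

:: Restart the Commander service so the change takes effect.
sc stop vlm
sc start vlm
```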
If you have any questions or concerns about making the right choice, please feel free to contact Technical Support.
It's important to consider the whole environment when looking at Commander sizing and performance, to make sure you understand where performance demands originate. Answering the questions below will help provide insight into the factors potentially impacting performance.
- When using shared database servers, when are the other applications most active? If Commander is expected to be very active when spikes are already occurring, consider allocating more resources or using another server.
- When will you run backups on the database? Backups should be scheduled during off-peak hours, when fewer users require access to the system.
- How much data must you retain, and for how long? Purging data you no longer need improves database performance. Commander ships with a default one-year purge for event history. For optimal performance, it's recommended that you augment data purging to also include performance, historical, and billing record history.
- Have you optimized your reporting and scanning schedules? Any scheduled activity, such as running reports, searches, or datastore scans, should occur during off-peak hours and be staggered so that no two activities run concurrently.
- Is your Commander up to date? Keeping up to date means taking advantage of the most recent improvements to optimize performance and eliminate software defects.
- Have you excluded Commander executables from virus scans? When proper security measures are in place throughout the rest of the environment, there should be no need to scan the Commander executables found in the <Install_Directory>\tomcat\bin folder. In some cases, scanning these files for viruses will slow the system down considerably, and should be avoided. See also Microsoft's best practices for virus-scan exclusions on SQL Servers.
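If your antivirus is Microsoft Defender, the exclusion above can be added from an elevated PowerShell prompt. A sketch, which assumes Defender is in use; keep the <Install_Directory> placeholder replaced with your actual Commander installation path:

```shell
# Exclude the Commander Tomcat binaries from real-time scanning.
# Replace <Install_Directory> with your Commander installation path.
Add-MpPreference -ExclusionPath "<Install_Directory>\tomcat\bin"

# Confirm the exclusion was recorded.
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath
```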
To ensure that your users continue to have an excellent experience with Commander, it’s important to monitor system performance over time, so that you can accommodate increasing system demand for resources. One of the best ways to do so is to manage the application and database servers with Commander, so that you will receive rightsizing recommendations based on the performance metrics collected by Commander or integrated monitoring systems.
There are many other applications available for monitoring system performance. When employing such a system, make sure to look at both the application and database servers, and see if you can correlate performance spikes to other activity in your environment. Set clear performance thresholds to guarantee acceptable performance for your users, and provide more resources once the threshold has been exceeded.
In some environments, the amount of data Commander collects and stores can result in larger than expected database sizes, especially if you're not scheduling regular data purges for your database. If the data partition for your installation runs out of disk space, Commander will no longer function normally. A monitoring solution will ensure that you don't unexpectedly run out of disk space. See also:
- Maintaining the Commander Database
- The Snow Globe article Microsoft SQL 2012 Maintenance Planning for Snow Commander
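If a full monitoring solution isn't yet in place, even a simple scheduled check on the data partition can provide early warning before Commander runs out of disk space. A minimal PowerShell sketch, in which the drive letter D and the 20 GB threshold are placeholder assumptions to adapt to your environment:

```shell
# Warn when free space on the drive holding the Commander database
# drops below a threshold. D: and 20 GB are placeholder values.
$drive = Get-PSDrive -Name D
$freeGB = [math]::Round($drive.Free / 1GB, 1)
if ($freeGB -lt 20) {
    Write-Warning "Only $freeGB GB free on drive $($drive.Name) - schedule a data purge or expand the disk."
}
```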
If you believe you have experienced a loss of system performance and wish to investigate on your own before engaging our support team, refer to the Snow Globe article Troubleshooting Commander Performance Issues. If you're unable to resolve the issue, send an email to email@example.com.