NUTANIX CLUSTER CHECK (NCC)
Nutanix Cluster Check (NCC) is cluster-resident software that can help diagnose cluster health
and identify configurations qualified and recommended by Nutanix. NCC continuously and
proactively runs hundreds of checks and takes the needed action toward issue resolution.
Depending on the issue discovered, NCC raises an alert or automatically creates a Nutanix
Support case. NCC can be run as long as the individual nodes are up, regardless of cluster
state.
When run from the Controller VM command line or web console, NCC generates a log file with
the output of the diagnostic commands selected by the user.
NCC actions are grouped into plugins and modules.
Note: Some plugins run nCLI commands and might require you to enter the nCLI password.
The password is logged as plain text. If you change the password of the admin user from
the default, you must specify the password every time you start an nCLI session from a remote
system. A password is not required if you are starting an nCLI session from a Controller VM
where you are already logged on.
NCC Output
Each NCC plugin is a test that completes independently of other plugins. Each test completes
with one of these status types. The status might also display a link to a Nutanix Support Portal
Knowledge Base article with more details about the check, or information to help you resolve
issues NCC finds.
PASS
The tested aspect of the cluster is healthy and no further action is required. A check can
also return a PASS status if it is not applicable.
FAIL
The tested aspect of the cluster is not healthy and must be addressed. This message
requires immediate action. If you do not act immediately, the cluster might
become unavailable or require intervention by Nutanix Support.
WARN
The plugin returned an unexpected value that you must investigate. This message
requires user intervention; resolve the issue as soon as possible to help maintain
cluster health.
• From the Prism web console Health page, select Actions > Run Checks. Select All
checks and click Run.
• If you disable a check in the Prism web console, you cannot run it from the NCC
command line unless you enable it again from the web console.
• You can run NCC checks from the Prism web console for clusters where AOS 5.0 or
later and NCC 3.0 or later are installed. You cannot run NCC checks from the Prism
web console for clusters where AOS 4.7.x or earlier and NCC 3.0 are installed.
• For AOS clusters where it is installed, running NCC 3.0 or later from the command
line updates the Cluster Health score, including the color of the score. For some NCC
checks, you can clear the score by disabling and then re-enabling the check.
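For example, to run all NCC health checks from the command line of any Controller VM:
nutanix@cvm$ ncc health_checks run_all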
Run two or more individual checks at a time
• You can specify two or more individual checks from the command line, with each
check separated by a comma. Ensure you do not use any spaces between checks, only
a comma character. For example:
nutanix@cvm$ ncc health_checks system_checks \
--plugin_list="cluster_version_check,cvm_reboot_check"
• You can re-run any NCC checks or plug-ins that reported a FAIL status.
nutanix@cvm$ ncc --rerun_failing_plugins=True
Note: To help ensure that Prism Central and each managed cluster are taking advantage of NCC
features, ensure that:
• NCC 3.9.0 for IBM Power CS Series AOS 5.11.1.2 (Power platforms only)
• NCC 3.6.4 for IBM Power CS Series AOS 5.10.0.7 (Power platforms only)
Procedure
2. Log on to the Prism web console for any node in the cluster and click the gear icon.
5. When the download process is completed, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster. You might observe notifications or other slight anomalies as
the service is restarting.
• Do the following steps to download NCC binary and metadata .JSON files from the Nutanix
Support Portal, then upgrade NCC through Upgrade Software in the Prism web console.
• Typically you must perform this procedure if your cluster is not directly connected to the
Internet and you cannot download the binary and metadata .JSON files through the Prism
web console.
Procedure
1. Log on to the Nutanix Support portal and select Downloads > Tools & Firmware.
2. Click the download link to save the binary gzipped TAR (.tar.gz) and metadata (.json) files on
your local media.
3. Log on to the Prism web console for any node in the cluster and click the gear icon.
6. Click Choose File for the NCC metadata and binary files, respectively, browse to the file
locations, and click Upload Now.
7. When the upload process completes, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster. You might observe notifications or other slight anomalies as
the service is restarting.
• If you are adding one or more nodes to expand your cluster, the latest version of NCC might
not be installed on each newly added node. In this case, reinstall NCC in the cluster after
you have finished adding the nodes.
• This topic describes how to install NCC from the command line by using a shell script
downloaded from the Nutanix Support Portal. To upgrade NCC software from the web
console, see Upgrading NCC on Prism Element Clusters on page 7 or Upgrading NCC on
Prism Central on page 10.
Note: To help ensure that Prism Central and each managed cluster are taking advantage of NCC
features, ensure that:
1. From the Nutanix Support Portal Downloads > Tools & Firmware page, download, save, and
then copy the NCC installation shell file to any Controller VM in the cluster.
• Make sure that the Controller VM directory where you copy the shell file exists on all
nodes in the cluster. Nutanix recommends the /home/nutanix folder. This folder should be
owned by any accounts that use NCC.
• Note the MD5 value of the file as published on the Support Portal.
2. From the Controller VM, check the MD5 value of the file.
nutanix@cvm$ md5sum ./ncc_installer_filename.sh
It must match the MD5 value published on the Support Portal. If the value does not match,
delete the file and download it again from the Support Portal.
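You can also let md5sum perform the comparison; in this sketch, published_md5_value is a
placeholder for the checksum listed on the Support Portal:
nutanix@cvm$ echo "published_md5_value  ./ncc_installer_filename.sh" | md5sum -c -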
4. Install NCC.
nutanix@cvm$ ./ncc_installer_filename.sh
The installation script installs NCC on each node in the cluster. The installer tests
the NCC file checksum and prevents installation if it detects file corruption.
• If it verifies the file, the installation script installs NCC on each node in the cluster.
• If it detects file corruption, it prevents installation and deletes any extracted files. In this
case, download the file again from the Nutanix support portal.
• In some cases, output similar to the following is displayed. Depending on the NCC version
installed, the installation file might log the output to /home/nutanix/data/logs/ or /home/
nutanix/data/serviceability/ncc.
Copying file to all nodes [ DONE ]
+---------+-------+
| State   | Count |
+---------+-------+
| Total   | 1     |
+---------+-------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log
[ info ] Installing ncc globally.
[ info ] Installing ncc on 10.130.45.72, 10.130.45.73
[ info ] Installation of ncc succeeded on nodes 10.130.45.72, 10.130.45.73.
What to do next
• As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster, so you might observe notifications or other slight anomalies as the
service is being restarted.
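• To verify the installed NCC version after installation or upgrade, you can print the version
from any Controller VM; the --version flag shown here is an assumption and might differ by
NCC release:
nutanix@cvm$ ncc --version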
1. Log on to the Prism Central web console as the admin user and click the gear icon.
4. When the download process is completed, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster, so you might observe notifications or other slight anomalies as the
service is being restarted.
Procedure
1. Log on to the Nutanix support portal and select Downloads > Tools & Firmware.
2. Click the NCC version download link to save the binary gzipped TAR (.tar.gz) and metadata
(.json) files on your local media.
3. Log on to the Prism Central web console as the admin user and click the gear icon.
6. Click Choose File for the NCC metadata and binary files, respectively, browse to the file
locations, and click Upload Now.
7. When the upload process is completed, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster, so you might observe notifications or other slight anomalies as the
service is being restarted.
Procedure
6. In the LCM sidebar under Updates, click Software to show the latest available NCC updates.
8. Repeat these steps for your Prism Central cluster. See Running NCC (Prism Central) on
page 37 to run NCC checks on Prism Central.
SCHEDULING AND AUTOMATICALLY EMAILING NCC RESULTS
About this task
Nutanix Cluster Check (NCC) enables you to set the frequency of cluster checks and to email
the results of these checks. By default, this feature is disabled. Once you enable and configure
this feature, NCC:
Note:
• NCC results emailed to Nutanix support do not automatically create support cases.
• After adding a node to a cluster, ensure that any previously-configured email
settings exist and configure them as needed, as described in this topic.
Procedure
1. In the Health dashboard, from the Actions drop-down menu select Set NCC Frequency.
» Every 4 hours: Select this option to run the NCC checks at four-hour intervals.
» Every Day: Select this option to run the NCC checks on a daily basis.
Select the time of day when you want to run the checks from the Start Time field.
» Every Week: Select this option to run the NCC checks on a weekly basis.
Select the day and time of the week when you want to run the checks from the On and
Start Time fields. For example, if you select Sunday and Monday from the On field and
select 3:00 p.m. from the Start Time field, every Sunday and Monday at 3 p.m. the NCC
checks are run automatically.
The email address that you have configured in the web console is also displayed. A report
is sent as an email to all the recipients.
AOS implements many logs and configuration information files that are useful for
troubleshooting issues and finding out details about a particular node or cluster. You can collect
logs for Controller VMs, the file server, hardware, alerts, the hypervisor, and the system.
Log collection includes:
• Collecting Logs from the Web Console with Logbay on page 15 through the Prism web
console or logbay command line
• Logbay Log Collection (Command Line) on page 18 through the NCC command line
Procedure
1. In the Health dashboard, from the Actions drop-down menu, select Collect Logs.
3. Click Next.
» All. Select this option if you want to collect the logs for all the tags.
» Specific (by tags). Select this option, click + Select Tags if you want to collect the logs
only for the selected tags and then click Done.
• 1. Select Duration. Select the duration for which you want to collect the logs. You can
collect the logs either in hours or days. Click the drop-down list to select the required
option.
2. Cluster Date. Select the date from which you want to start the log collection operation.
Click the drop-down list to select either Before or After to collect logs before or after a
selected date.
3. Cluster Time. Select the time from when you want to start the log collection operation.
4. Select Destination for the collected logs. Click the drop-down list to select the server
where you want the logs to be collected.
• Download Locally
• Nutanix Support FTP. If you select this option, enter the case number in the Case
Number field.
• Nutanix Support SFTP. If you select this option, enter the case number in the Case
Number field.
• Custom Server. Enter server name, port, username, password, and archive path if
you select this option.
5. Anonymize Output. Select this option if you want to mask all sensitive information,
such as IP addresses.
7. After the operation completes, you can download the log bundle for the last two runs and
(as needed) add it to a support case as follows:
a. Go to the Task dashboard, find the log bundle task entry, and click the Succeeded link for
that task (in the Status column) to download the log bundle.
Note: If a pop-up blocker in your browser stops the download, turn off the pop-up blocker
and try again.
b. Log in to the support portal, click the target case in the 360 View widget on the
dashboard (or click the Create a New Case button to create a new case), and upload the
log bundle to the case (click the Choose Files button in the Attach Files section to select
the file to upload).
• Collect logs.
nutanix@cvm$ logbay collect
nutanix@cvm$ logbay collect [Options]
By default, logbay collect collects all tags or components for the last 4 hours; individual log
bundles are stored locally on each Controller VM (not aggregated to a single Controller VM).
Replace [Options] with the options listed in table Options for the Collect Command.
• List all the available tags.
nutanix@cvm$ logbay list_tags
Logbay uses tags to easily select specific components for collection. Tags are useful for
faster and smaller collections of log files for focused troubleshooting.
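Because the tag list can be long, you can filter it with a standard grep; the search term here
is only an illustration:
nutanix@cvm$ logbay list_tags | grep -i hypervisor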
Options for the Collect Command

-t, --tags
Specify the tag name to collect logs for. By default, -t collects logs for all the tags.

-x, --exclude_tags
Exclude logs that are tagged with any of the specified tags. For example, if you want to
collect Controller VM logs and at the same time filter out logs tagged with the Stargate
tag, run the logbay collect -t cvm_logs -x stargate command. This command excludes
all the Stargate logs and generates a log bundle for the rest of the Controller VM logs.

-c, --case_number
Specify the case number for uploading the log bundle to the Nutanix server.

-d, --duration=-4h0m0s
Specify for how long, relative to the start time, you want to collect logs. For example,
300s, -1.5h, 3d2h45m0s. By default, the duration for log collection is 4 hours.

-f, --from=2019/04/09-14:00:00
Specify the point in time from which you want the logs to be collected. By default, -f is
the current cluster time.

--dst
Specify the destination: --dst=(file|ftp|sftp)://[username@host]/path,
--dst=(ftp|sftp)://nutanix for Nutanix uploads, or --dst=container:/container_name
for containers. By default, the destination is the local file system,
/home/nutanix/data/logbay/bundles, on each Controller VM.

-o, --options
Pass additional options to specific collectors. Format: --options
optOne=foo,optTwo=[I,Am,A,Multi,String,Option],optThr
For example, run the logbay -o file_server_name_list=<FSVM name> command to collect
file server logs. Replace FSVM name with the file server name.
Examples
• The following command collects all the Controller VM logs, excluding logs that are tagged
with the Stargate tag.
nutanix@cvm$ logbay collect -t cvm_logs -x stargate
• The following command collects and aggregates all individual node log bundles to a single
file on the CVM where the command is run.
nutanix@cvm$ logbay collect --aggregate=1
• The following command collects logs and uploads the log files to the Nutanix FTP server for
automatic case association.
nutanix@cvm$ logbay collect --dst=ftp://nutanix -c case_number
In this command, case_number is the open Nutanix Support case number provided by the
Nutanix Support team.
• The following command collects logs for the 6 hours and 15 minutes after 2 p.m. (using
cluster time and time zone).
nutanix@cvm$ logbay collect --from=2019/04/09-14:00:00 --duration=+6h15m
• The following command collects logs only for specific nodes in a cluster.
nutanix@cvm$ logbay collect -s comma_separated_list_of_CVM_IPs
In this example, comma_separated_list_of_CVM_IPs is the list of Controller VM IP addresses
for which you want to collect logs.
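For example, with two hypothetical Controller VM IP addresses:
nutanix@cvm$ logbay collect -s 10.0.0.11,10.0.0.12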
• The following example sets the name of the archive folder to cluster_logs.
nutanix@cvm$ logbay collect --name="cluster_logs"
• The following commands collect logs and upload the log bundle to the specified destination,
whether a disk, an internal SFTP/FTP server, or a Nutanix container.
nutanix@cvm$ logbay collect --dst=(file|ftp|sftp)://[username@host]/path
nutanix@cvm$ logbay collect --dst=(ftp|sftp)://nutanix
nutanix@cvm$ logbay collect --dst=container:/container_name
• The following example displays the output of collecting logs for a specific tag.
nutanix@cvm$ logbay -o normal collect -t svm_boot
Time period of collection: Sun Dec 22 17:51:58 PST 2019 - Sun Dec 22 21:51:58 PST 2019
Creating a task to collect logs...
Logbay task created ID: x.x.x.x::7ff44340-3aba-42f2-a6f5-df90a5fdf674
[====================================================================]
x.x.x.x
Archive Location: x.x.x.x:/home/nutanix/data/logbay/bundles/NTNX-
Log-2019-12-22-1577080318-33287-PE-x.x.x.x-CW.zip
Logbay Plug-ins
Hypervisor configuration
• For AHV, logbay collect -t ahv_config
Hypervisor logs
• For AHV, logbay collect -t ahv_logs
With NCC 3.10 and later releases, logbay masks the sensitive information to preserve
anonymity.
Example
The following example collects logs and masks the critical information.
nutanix@cvm$ logbay collect -t x --anonymize=1
Time period of collection: Sun Dec 22 17:57:25 PST 2019 - Sun Dec 22 21:57:25 PST 2019
Creating a task to collect logs...
Logbay task created ID: x.x.x.x::845f77d0-9267-4049-8efb-ccf3a9d8741e
[====================================================================]
x.x.x.x
Archive Location: x.x.x.x:/home/nutanix/data/logbay/bundles/NTNX-
Log-2019-12-22-1577080645-33287-PE-xx.xx.xx.65-CW.zip
Note: Uploading log files to a Nutanix storage container is supported on clusters running AOS
versions 5.9 and later.
In the Controller VM, run the following command to upload log files to a Nutanix storage
container.
nutanix@cvm$ logbay collect --dst=container:/container_name
Replace container_name with the name of the storage container where you want to upload the
log files.
For example, the Controller VM SSH window displays results similar to the following.
nutanix@cvm$ logbay collect --dst=container:/<container_name>
Time period of collection: Sun Dec 22 17:55:41 PST 2019 - Sun Dec 22 21:55:41 PST 2019
Creating a task to collect logs...
Logbay task created ID: x.x.x.x::fc7a6f51-18f5-4c0f-9fa1-3a8cce77a0cb
[====================================================================]
x.x.x.x
Archive Location: /<container_name>/NTNX-Log-2019-12-22-1573466625-2227044053748017374-
PE-x.x.x.x-CW.zip
Nutanix Cluster Check (NCC) 1.3.1 introduces the Log Collector plugin. AOS implements many
logs and configuration information files that are useful for troubleshooting issues and finding
out details about a particular node or cluster.
Note:
• In some cases, running the NCC log collector (ncc log_collector run_all) can trigger
spikes in average cluster latency.
• Log collector is a resource intensive task. Running it for a long period might cause
performance degradation on the Controller VM where you are running it.
• Use caution if business needs require high performance levels; if possible, run it during a
maintenance window.
To use the log collector, run the following command from a Controller VM.
nutanix@cvm$ ncc log_collector [plugin_name] --collector_plugin_timeout=seconds
To collect logs for specific plugins, use the --plugin_list option with a comma-separated list
of plugin names. For example, to collect Controller VM configuration, general, and kernel logs,
run the following command from a Controller VM.
nutanix@cvm$ ncc log_collector --plugin_list=cvm_config,cvm_logs,cvm_kernel_logs
[Introduced in NCC 2.3] Optionally specify the collector timeout duration in seconds (the time
to wait for collected results from a specified plug-in) with --collector_plugin_timeout.
Specify plugin_name help_opts to get further details about options for the plugin_name.
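For example, following that pattern, the options for the cvm_logs plugin (used here for
illustration) could be listed with:
nutanix@cvm$ ncc log_collector cvm_logs help_opts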
Log Collector stores the output in a zipped tar file named log_collector_logs.tar.gz in the /
home/nutanix/data/log_collector directory on the Controller VM where the NCC command
was issued.
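After a run completes, you can confirm that the bundle was written on that Controller VM:
nutanix@cvm$ ls -lh /home/nutanix/data/log_collector/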
Procedure
For example, the Controller VM SSH window displays results similar to the following.
ncc_version: 2.2.0-22137482
cluster id: 37146
cluster name: your_cluster
node with service vm id 3
service vm external ip: ip_address
hypervisor address list: [u'ip_address']
hypervisor version: 6.3.9600 build - 9600
ipmi address list: [u'ip_address']
software version: danube-4.6-stable
software changeset ID: 18363af92e18843279a78f066f280af70a59ad27
node serial: OM159S002597
rackable unit: NX-1065-G4
node with service vm id 4
service vm external ip: ip_address
hypervisor address list: [u'ip_address']
hypervisor version: 6.3.9600 build - 9600
ipmi address list: [u'ip_address']
software version: danube-4.6-stable
software changeset ID: 18363af92e18843279a78f066f280af70a59ad27
node serial: OM159S002547
rackable unit: NX-1065-G4
node with service vm id 5
service vm external ip: ip_address
hypervisor address list: [u'ip_address']
hypervisor version: 6.3.9600 build - 9600
ipmi address list: [u'ip_address']
software version: danube-4.6-stable
software changeset ID: 18363af92e18843279a78f066f280af70a59ad27
node serial: OM159S002535
rackable unit: NX-1065-G4
[==================================================] 100%
--------------------------------------------------+
Running /log_collector/cvm_logs on the node
[==================================================] 100%
[ WARN ]
--------------------------------------------------+
Running /log_collector/alerts on the node
[==================================================] 100%
--------------------------------------------------+
Running /log_collector/hypervisor_config on the node
[==================================================] 100%
--------------------------------------------------+
Running /log_collector/sysstats on the node
--------------------------------------------------+
[==================================================] 100%
--------------------------------------------------+
[==================================================] 100%
--------------------------------------------------+
/home/nutanix/data/log_collector/NCC-logs-2016-02-08-37146-1454929507.tar uploaded to
remote server - ip_address
Detailed information for cvm_logs:
Node ip_address:
WARN: Not collecting /home/nutanix/data/logs/cluster_health.out since its size exceeds the
maximum size limit: 0.5
Node ip_address:
WARN: Not collecting /home/nutanix/data/logs/cluster_health.out since its size exceeds the
maximum size limit: 0.5
Node ip_address:
WARN: Not collecting /home/nutanix/data/logs/cluster_health.out since its size exceeds the
maximum size limit: 0.5
+-----------------+
| State | Count |
+-----------------+
| Warning | 1 |
| Total | 7 |
+-----------------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log
Use the NCC command ncc hardware_info to collect cluster or node hardware information.
This command displays the hardware information on the console and also writes to an output
log file on the Controller VM. For example, if the Controller VM IP address is 10.5.25.46, the
command writes an output file to /home/nutanix/data/hardware_logs/10.5.25.46_output.
ncc hardware_info saves the hardware information to an output file in the following location.
/home/nutanix/data/hardware_logs/controller_VM_IP_address_output
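A typical invocation from any Controller VM might look like the following; the
show_hardware_info plugin name is an assumption and might vary by NCC release:
nutanix@cvm$ ncc hardware_info show_hardware_info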
Tue, 16 Aug 2016 06:19:31 PDT|Storage controller Component changed: Old values: |{}|
(Last Detected Tue, 16 Aug 2016 05:01:28 PDT)
New values: |{location: ioc1}|
Tue, 16 Aug 2016 06:19:31 PDT|Storage controller Component changed: Old values: |{}|
(Last Detected Tue, 16 Aug 2016 05:01:28 PDT)
New values: |{location: ioc2}|
System Information
Manufacturer
Product name
Chassis Information
Manufacturer
Version
Serial number
Boot up
Thermal state
Node Position
Manufacturer
Version
Serial number
Boot up state
Thermal state
Host name
Hypervisor type
Temperature
BIOS Information
Vendor
Version
Release date
ROM size
BMC
Device id
Device revision
Firmware revision
Ipmi version
Manufacturer id
Manufacturer
Product id
Device available
Num slots
Banks
Max size
Storage Controller
Manufacturer
Serial number
Bios version
Location
Manufacturer
Serial number
Revision
Status #
Processor Information
Socket designation
Status
Type
Id #
Signature #
Version
Voltage #
External clock
Max speed
Current speed
Core count
Core enabled
Thread count
L1 cache handle
L2 cache handle
L3 cache handle
Temperature
Memory Module
Location
Manufacturer
Serial number
Bank locator
Type
Installed size
Temperature
NIC
Manufacturer
Location
Device name #
Version
Firmware version
SSD
Manufacturer
Serial number
Firmware version
Capacity
Location
Power on hours
HDD
Serial number
Firmware version
Capacity
Location
Power on hours
SATADOM
Firmware version
Capacity
Serial number
Device model
Power on hours #
FAN
Rpm
Location
• The Nutanix Cluster Check (NCC) Guide for your NCC version provides more details about
NCC operation and command usage.
• The Nutanix support portal includes a series of Knowledge Base articles describing most
NCC health checks run by the ncc health_checks command. These articles are updated
regularly.
To search for new, updated, or existing NCC health check articles on the Nutanix support portal:
1. Log on to the support portal and go to the Knowledge Base.
2. Click the Nutanix KB Articles filter search and type NCC Health Check.
Procedure
• Show top-level help about a specific available health check category. For example,
hypervisor_checks.
nutanix@cvm$ ncc health_checks hypervisor_checks
• Show all NCC flags to set your NCC configuration. Use these flags under the direction of
Nutanix Support.
nutanix@cvm$ ncc -help
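You can also run a single check directly by giving its full module path, as in the
following example:
nutanix@cvm$ ncc health_checks system_checks cluster_version_check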
• For Prism Element clusters, run the NCC checks from the Health dashboard of the
Prism web console. You can choose to run all the checks at once, only the checks that
failed or displayed a warning, or specific checks of your choice. You can also
log on to a Controller VM and run NCC from the command line.
• If you are running checks by using the web console, you cannot collect the logs at
the same time.
• You can also log on to a Controller VM and run NCC from the ncc command line.
Run NCC on Prism Central
For Prism Central clusters, log on to the Prism Central VM and run the NCC checks from
the ncc command line. You cannot run NCC from the Prism Central web console.
Procedure
If the check reports a status other than INFO or PASS, resolve the reported issues before
proceeding. If you are unable to resolve the issues, contact Nutanix Support for assistance.
2. Do these steps to run NCC from the Prism Element web console.
a. In the Health dashboard, from the Actions drop-down menu, select Run Checks.
b. Select the checks that you want to run for the cluster.
• All checks. Select this option to run all the checks at once.
• Only Failed and Warning Checks. Select this option to run only the checks that failed
or triggered a warning during the health check runs.
• Specific Checks. Select this option and, in the text box that appears, type the name
of the check or checks that you want to run.
This field auto-populates once you start typing the name of a check. The Added
Checks box lists all the checks that you have selected for this run.
c. Select the Send the cluster check report in the email option to receive the report after
the cluster check.
To receive the email, ensure that you have configured email settings for alerts. For
more information, see the Web Console Guide.
The status of the run (succeeded or aborted) is available in the Tasks dashboard. By default,
all the event-triggered checks are passed. Also, the Summary page of the Health dashboard
updates with the status according to the health check runs.
Procedure
If the check reports a status other than INFO or PASS, resolve the reported issues before
proceeding. If you are unable to resolve the issues, contact Nutanix Support for assistance.
Note: The flags override the default configurations of the NCC modules and plugins. Do not
run with these flags unless your cluster configuration requires these modifications.
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions

root@host# command
The commands are executed as the root user in the vSphere or Acropolis host shell.

> command
The commands are executed in the Hyper-V host shell.
Version
Last modified: July 22, 2020 (2020-07-22T21:53:41-07:00)