Quick fix: Nutanix. Reset admin password

It’s not a common situation, but it does happen sometimes, especially in test clusters.

To reset the admin password, connect to a CVM (for example, as the nutanix user) and run:

$ ncli user reset-password user-name=admin password=MyStrong@Passw0rd
Password of user 'admin' reset successfully
Please login again with new credentials.

The password must be strong and between 8 and 255 characters long. It must differ from the previous password by at least 4 characters, and it must not match any of the last 5 passwords.

Now you can connect to Prism and use the password you just set.

Quick fix: VMware. Some of the disks of the virtual machine failed to load.

I faced an issue with one of the VMs running on VMware ESXi 7.0.3, build 20328353.

Symptoms:

1. The VM is running, and there are no problem reports from users;

2. vMotion fails with an error:

The object or item referred to could not be found.

3. After vMotion in hostd.log we can find the following:

Failed to find file size for /vmfs/volumes/.../VM_NAME.nvram: No such file or directory

4. In the vCenter UI, the following message is displayed for the VM:

Some of the disks of the virtual machine VM_NAME failed to load. The information present for them in the virtual machine configuration may be incomplete.

5. There are no issues at the storage layer, and all of the VM’s files are present on the datastore;

6. Other VMs on the same host and datastore work fine;

7. Common recommendations like “Rescan Datastore” don’t help.

Solution.

Before you begin, make sure that you have a backup.

The solution for me was simple, but it required downtime:

  1. Power off the VM;
  2. After that, the VM will appear in an inaccessible state;
  3. Remove the VM from the vCenter inventory;
  4. Locate the VM’s files on the datastore and find the .vmx file;
  5. Register the VM from that .vmx file;
  6. Power on the VM.

After that, the VM should be up and running without issues.
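If you prefer to work from the ESXi host shell instead of the vCenter UI, the same steps can be sketched with vim-cmd. The VM IDs, datastore name, and VM name below are placeholders:

```shell
# 1. Find the VM's ID in the host inventory and power it off
vim-cmd vmsvc/getallvms | grep VM_NAME
vim-cmd vmsvc/power.off 42        # 42 is the VM ID from the previous command

# 2. Unregister the (now inaccessible) VM from the inventory
vim-cmd vmsvc/unregister 42

# 3. Register the VM again from its .vmx file on the datastore
vim-cmd solo/registervm /vmfs/volumes/datastore1/VM_NAME/VM_NAME.vmx

# 4. Power it on (registervm prints the new VM ID)
vim-cmd vmsvc/power.on 43
```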

Veeam Backup & Replication Instant Recovery to Nutanix AHV

I have previously written about how to connect VBR to a Nutanix cluster, as well as how to create backup tasks and restore.

In this article, we will talk about a very important VBR functionality – Instant Recovery, support for which was added with the release of Nutanix AOS 6.0 STS and AOS 6.5 LTS.

Continue reading “Veeam Backup & Replication Instant Recovery to Nutanix AHV”

VMware ESXi 8.0 Update 2b is out

VMware ESXi 8.0 Update 2b is out and contains a lot of bug fixes. One of the fixes I want to mention is a bug in CBT:

Changed Block Tracking (CBT) might not work as expected on a hot extended virtual disk:

In vSphere 8.0 Update 2, to optimize the open and close process of virtual disks during hot extension, the disk remains open during hot extend operations. Due to this change, incremental backup of virtual disks with CBT enabled might be incomplete, because the CBT in-memory bitmap does not resize, and CBT cannot record the changes to the extended disk block. As a result, when you try to restore a VM from an incremental backup of virtual disks with CBT, the VM might fail to start.

As a workaround, there were two options: avoid hot extend and perform disk extend operations only while the VM is powered off, or periodically create full backups to reset CBT.
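Until the host is patched, a cold extend from the ESXi shell can be sketched like this (the VM ID, datastore path, and target size are placeholders):

```shell
# Power the VM off so the extend is not a hot extend
vim-cmd vmsvc/power.off 42

# Grow the virtual disk to 60 GB (point at the descriptor .vmdk, not the -flat file)
vmkfstools -X 60G /vmfs/volumes/datastore1/VM_NAME/VM_NAME.vmdk

vim-cmd vmsvc/power.on 42
```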

So, if you’re running ESXi 8.0 Update 2, you should consider updating to 8.0 Update 2b as soon as possible.

You can read the rest of the release notes here.

Connecting Ceph clients. File access with CephFS

With this article, I close the series on the basics of Ceph deployment. Previously, we looked at how to deploy Ceph and how block and object access are provided.

This article briefly describes the procedure for providing file access in Ceph using CephFS. The topic is extensive and much has to be left out, so please refer to the official documentation for more information.
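As a preview, creating a file system and mounting it on a client boils down to something like this. The volume, user, and mount point names are placeholders, and the client-side mount assumes a local ceph.conf and keyring; the full procedure is in the article:

```shell
# On the Ceph side: create a CephFS volume (the orchestrator deploys MDS daemons)
ceph fs volume create cephfs01

# Create a client user authorized for the file system root, read-write
ceph fs authorize cephfs01 client.fsclient / rw > /etc/ceph/ceph.client.fsclient.keyring

# On the client: mount via the kernel driver using the new mount syntax
mount -t ceph fsclient@.cephfs01=/ /mnt/cephfs
```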

Continue reading “Connecting Ceph clients. File access with CephFS”

Connecting Ceph clients. Ceph Object Gateway and S3

In addition to block and file data access, Ceph also supports object access via the S3 or Swift protocols.

This time, we will look at what needs to be configured on the Ceph side so that clients can store data using the S3 protocol.

Let me remind you that I previously described the procedure for installing Ceph Reef from scratch in this article. In this case, I use the same platform, as well as a client based on Rocky Linux 9.

Also, I previously wrote about connecting block devices using RBD here.
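As a preview, once a RADOS Gateway is deployed, creating an S3 user takes just a couple of commands. The service and user names here are placeholders; the details are in the article:

```shell
# Deploy a RADOS Gateway service via the orchestrator
ceph orch apply rgw s3gw

# Create an S3 user; the output contains the access_key and secret_key for clients
radosgw-admin user create --uid=s3user --display-name="S3 test user"
```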

Continue reading “Connecting Ceph clients. Ceph Object Gateway and S3”

Considerations about changes in Time Zones in Kazakhstan and how to deal with it in Linux

At 00:00 on the 1st of March 2024, Kazakhstan switches to a single UTC+5 time zone for the whole country. This affects two time zones, Asia/Almaty and Asia/Qostanay, which are currently at UTC+6 and need to be adjusted.

And the question may arise – what to do?

As this blog is mostly about virtualization, one thing to mention: there are few problems here. Most hypervisors run in UTC+0, and it is only inside the guest virtual machines that the time zone needs to be correct.

Many people ask: will the NTP server move my clock back an hour? The answer is no. NTP servers work in UTC+0, and on the 1st of March they won’t move your clock backward.

In this article, we will look briefly at Linux systems and how to change the time.
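As a preview, on most distributions the fix boils down to updating the tzdata package and verifying the new offset. The dnf package manager is an assumption here (use apt or another manager as appropriate):

```shell
# Install the tzdata release that contains the 2024 Kazakhstan change
sudo dnf update tzdata

# Verify: the transition on March 1, 2024 should show the switch to +05
zdump -v Asia/Almaty | grep 2024

# Check the system's current zone and local time
timedatectl
```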

Continue reading “Considerations about changes in Time Zones in Kazakhstan and how to deal with it in Linux”

Connecting Ceph clients. Block devices – RBD

RBD, aka RADOS Block Device, as you might guess from the name, allows you to allocate space from Ceph and present it to clients as block devices (disks).

RBD is often found in conjunction with virtualization, in Kubernetes, where disks are attached to containers as PVs, and also inside the client OS.

In this case, we will look at how to connect block devices with Ceph to a regular Linux host.
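As a preview, attaching an RBD image to a Linux host can be sketched like this. The pool and image names are placeholders, and the client is assumed to have a ceph.conf and a suitable keyring; the full procedure is in the article:

```shell
# On the Ceph side: create a pool, initialize it for RBD, and create an image
ceph osd pool create rbdpool
rbd pool init rbdpool
rbd create rbdpool/disk01 --size 10G

# On the client: map the image, then format and mount it like any other disk
rbd map rbdpool/disk01            # prints the device name, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt
```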

Continue reading “Connecting Ceph clients. Block devices – RBD”

Deploying Ceph Reef cluster using cephadm

The last time I had a chance to work with Ceph was around the Nautilus (14) release, several years ago.

Since then, some aspects have changed in the procedure for creating and managing the Ceph cluster.

In this article, I plan to refresh my knowledge by deploying Ceph, using the Reef release (18) as an example.
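As a preview, a minimal cephadm deployment starts roughly like this. The monitor IP is a placeholder, and installing cephadm from the distribution or Ceph repositories is assumed; the full procedure is in the article:

```shell
# On the first node: install cephadm (package name may vary by repo)
sudo dnf install -y cephadm

# Bootstrap a one-node cluster; this prints the Dashboard URL and admin credentials
sudo cephadm bootstrap --mon-ip 192.168.1.10

# Check the cluster state from the containerized admin shell
sudo cephadm shell -- ceph -s
```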

Continue reading “Deploying Ceph Reef cluster using cephadm”

What’s new at Nutanix University?

Good news for Nutanix Cloud Clusters (NC2) users – two new courses and certification tracks have been announced.

Two free online courses:
Nutanix Cloud Clusters on AWS Administration (NC2A-AWS) – configuring and administering NC2 in the AWS environment;
Nutanix Cloud Clusters on Azure Administration (NC2A-Azure) – configuring and administering NC2 in the Microsoft Azure environment.

Both courses include a theoretical part as well as hands-on labs.

And there are new certifications – Nutanix Certified Professional – Cloud Integration for AWS and Azure.

NCP-CI-AWS 6.7;
NCP-CI-Azure 6.7.

As usual, both exams are in beta state, and you can take them for free. Use discount code NCPCIAWS67BETA for NCP-CI-AWS 6.7 exam and NCPCIAZURE67BETA for NCP-CI-Azure 6.7. This offer is only available to the first 250 participants, and the last day to test is March 10, 2024.
