Clean VMware Update Manager on VCSA 6.5

Update Manager on VCSA 6.5

Conflicting VIBs during patching? Changing server vendors? Want to get rid of an old repository? All of the above?

I just ran into this scenario again, where a customer is changing from one server vendor to another. There was an Update Manager patch repository for the old server vendor, and its vendor-specific, conflicting patches were included in both the default critical and non-critical baselines.

There is no nice way to remove individual patches or an entire repository's worth of patches from the internal database. Instead, we must start from scratch. Take note of your settings, check your backups, take a snapshot, and reset Update Manager back to defaults.

Connect to vCenter Server Appliance 6.5 via SSH

Run the shell command to switch to the BASH Shell:

shell
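
If your SSH session drops you into the appliance shell (appliancesh) and the shell command is refused, enable BASH access first:

shell.set --enabled True
shell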

Stop the VMware Update Manager Service:

service-control --stop vmware-updatemgr 
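
To confirm the service has actually stopped before you touch the database (and that it’s running again after the restart at the end), the same service-control utility can report its status:

service-control --status vmware-updatemgr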

Run the following command to reset the VMware Update Manager Database:

/usr/lib/vmware-updatemgr/bin/updatemgr-util reset-db

Run the following command to delete the contents of the VMware Update Manager patch store:

rm -rf /storage/updatemgr/patch-store/*
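
Optionally, confirm the patch store is now empty with a plain directory listing:

ls -la /storage/updatemgr/patch-store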

Start the VMware Update Manager Service:

service-control --start vmware-updatemgr

All that’s left to do is log out and log back in, and you’ve got a fresh Update Manager waiting for your configuration. Don’t forget to clean up your snapshot!

See KB2147284 if you have any other questions, or comment below.

vSphere CPU ready time

Bored, nothing to do, and checking out your performance metrics?  First off, use VMware vRealize Operations (vROps, formerly vCOps), and take up a new hobby in all your spare time; thank me later.  Still need to take a look because you’re troubleshooting a slow VM?  Concerned that you’re oversubscribing your CPUs?  High kernel times?

Why can't I hold all of these metrics

A basic explanation of CPU Ready Time: how long is your virtual machine waiting in line to use a CPU on the host?  There is a generally acceptable threshold (under 10%; more on that below), but oversubscribing will definitely cause you (or your clients) headaches.  A typical example of this problem is a generally slow VM where Task Manager/top isn’t showing anything eating up the CPU, and all other metrics look fine.  Extreme cases will make the VM’s clock run slow.  Perhaps high kernel time?  Josh at vmtoday has an image of this example on his very relevant post.

When you look at these graphs and see high numbers, don’t necessarily worry.  There’s a pretty easy formula to figure out what you’re looking at.  In the example below, I’m using the realtime performance chart for the VM, which has a metric rollup of 20 seconds.  Here’s how I set that up, and what it looks like:

 

[Image: Performance Options]

[Image: Realtime CPU Ready]

If you’re looking at graphs of different timeframes, you want to use a different number in the formula:

  • Realtime: 20 seconds – We’re using this one in my example
  • Past Day: 5 minutes (300 seconds)
  • Past Week: 30 minutes (1800 seconds)
  • Past Month: 2 hours (7200 seconds)
  • Past Year: 1 day (86400 seconds)

(CPU summation value / (<Chart Interval in Seconds> * 1000)) * 100 = % CPU ready

It’s probably hard to see, but I’m interested in the VM average of 547 at a realtime (20-second) interval.  I toss those numbers into the formula:

(547 / (20 seconds * 1000)) * 100 = 2.73% CPU Ready

With only 2.73% CPU ready time, I can see this VM isn’t having any CPU problems.
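
If you do this conversion often, it’s easy to script.  Here’s a minimal sketch, a hypothetical ready_pct shell function (plain awk arithmetic, nothing VMware-specific) that takes the summation value and the chart interval in seconds:

# Convert a CPU ready summation value (milliseconds) into % CPU ready.
# Usage: ready_pct <summation_ms> <interval_seconds>
ready_pct() {
    awk -v s="$1" -v i="$2" 'BEGIN { printf "%.2f\n", (s / (i * 1000)) * 100 }'
}

ready_pct 547 20     # the realtime example above, roughly 2.7
ready_pct 547 300    # the same summation over a past-day (300-second) interval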

Multiple resources concur that up to 10% is acceptable, but anything over 10% deserves a closer look.  Keep in mind the time frame you’re looking at: realtime during peak production hours may not be the most accurate for an overview.  If that’s the case, check out a daily or weekly average instead.

Additional resources on this topic, including more on using CPU affinity: