[TKG Series – Part 3] Creating custom plan in Tanzu Kubernetes Grid: For this post, I'll be showing how to create a custom plan that can be used when provisioning Kubernetes clusters using TKG. A plan is used to define the specifics of the provisioning…
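As a rough sketch of what that looks like on the CLI side (the provider directory, version folder, and plan name below are illustrative assumptions; check your own ~/.tkg/providers tree for the real paths):
# Copy the built-in dev plan template to a new plan name (paths are examples only)
cp ~/.tkg/providers/infrastructure-vsphere/v0.6.3/cluster-template-dev.yaml \
   ~/.tkg/providers/infrastructure-vsphere/v0.6.3/cluster-template-custom.yaml
# Edit the copy (control plane/worker counts, VM sizes, etc.), then reference it by plan name
tkg create cluster my-cluster --plan=custom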
[TKG Series – Part 2] Install Kubernetes Cluster(s) using Tanzu Kubernetes Grid: For my next post, we will be installing Kubernetes clusters using Tanzu Kubernetes Grid (TKG). With a properly configured TKG environment, the command is straightforward…
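For reference, a minimal sketch of that command (the cluster name and plan are placeholders, and the exact flags vary by TKG CLI version):
# Create a workload cluster from the management cluster, using the built-in dev plan
tkg create cluster my-cluster --plan=dev
# Pull its kubeconfig credentials and confirm the nodes are up
tkg get credentials my-cluster
kubectl get nodes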
[TKG Series – Part 1] VMware Tanzu Kubernetes Grid introduction and installation: This is going to be a long post as I will try to keep it as detailed as possible. Quick introduction: VMware Tanzu Kubernetes Grid (TKG for short) is the rebranded PKS…
Upgrading NSX Intelligence is not the most straightforward process. Here are the issues I've run into and how to overcome them. How to use IIS to host the .nub upgrade bundle: As part of the upgrade, you need to host the *.nub upgrade bundle on a local web server so the appliance can pick it up. I only had IIS running in my environment, so I had to use it. Now, for IIS to work, you need to…
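The excerpt cuts off here, but the usual IIS hurdle with an unregistered extension like .nub is that the file won't be served until a MIME type is mapped for it (this is my assumption of where the post was going, not a quote from it). Something along these lines, run on the IIS server:
REM Map .nub to a generic binary MIME type so IIS will serve the bundle
%windir%\system32\inetsrv\appcmd.exe set config /section:staticContent /+"[fileExtension='.nub',mimeType='application/octet-stream']"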
This post is a long overdue tutorial on how to set up govc. What is govc? It's a CLI utilizing govmomi, a Go library used to interact with the vSphere API. Why use govc? It's fast and much better suited for automation tasks. Installation: Download the release binary here: https://github.com/vmware/govmomi/releases Depending on the platform, decompress it: gzip -d govc_linux_amd64.gz Move it into your PATH and flag it as executable: mv govc_linux_amd64 /usr/local/bin/govc && chmod +x /usr/local/bin/govc
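From there, govc picks up its connection details from environment variables; a minimal sketch (the vCenter URL and credentials below are placeholders):
# Point govc at vCenter; GOVC_INSECURE=1 skips certificate verification for self-signed certs
export GOVC_URL='https://vcenter.example.local'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='ChangeMe123!'
export GOVC_INSECURE=1
# Quick smoke test: show endpoint info and list the inventory root
govc about
govc ls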
Encountered an issue after a power failure which caused all the nodes of the Identity Manager cluster to reboot. Based on the error, I'm getting the following: Error 500: org.hibernate.exception.GenericJDBCException: could not prepare statement. Bookmark this KB as it explains how the internal Postgres cluster works: basically, if all the nodes failed, you need to manually bring up the VIP of the Postgres…
Making a post on this as I experienced it first hand. The issue is that after a reboot, some pods will not run; specifically, vco and pg. This can be verified by executing the following command on one of the nodes: kubectl get pods --all-namespaces You'll notice some pods will be in a CrashLoopBackOff state. To resolve, the following KB will help: https://kb.vmware.com/s/article/78235 For new installs: https://kb.vmware.com/s/
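Alongside the KB, a couple of generic kubectl checks help narrow down which pods are unhealthy and why (the pod name and namespace below are placeholders):
# Show only pods that are not Running or Completed
kubectl get pods --all-namespaces | grep -vE 'Running|Completed'
# Inspect a failing pod's events and its previous container logs
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> --previous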
I was with GSS support today troubleshooting an NSX issue on one of my engagements. He did a packet trace from the ESXi hosts to see if there is traffic leaving/entering the physical NIC, which was really nice. This is helpful in establishing whether or not the issue is somewhere else in the environment. Here's the command. Receive: pktcap-uw --uplink vmnic0 --capture UplinkRcvKernel -o - | tcpdump-uw -nr - Send:
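The excerpt is cut off before the send side; assuming it simply mirrors the receive command with the send-direction capture point (UplinkSndKernel), it would look something like this:
# Capture frames leaving vmnic0 and pipe them straight into tcpdump-uw
pktcap-uw --uplink vmnic0 --capture UplinkSndKernel -o - | tcpdump-uw -nr -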
Unable to validate the provided access credentials: Failed to validate credentials. Error: java.security.cert.CertificateException: No subject alternative DNS name matching <nsx> found. Cloud account: null Task: /provisioning/endpoint-tasks/d3f06b7ab13aec7559c1458d6fa20 Got the above error when trying to add an NSX-V cloud account to vRealize Automation 8. Issue: it's because the self-signed certificate of the…
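A quick way to confirm which DNS names the NSX Manager certificate actually carries is to dump its Subject Alternative Name entries (the hostname below is a placeholder):
# Print the SAN entries presented by the NSX Manager certificate
echo | openssl s_client -connect nsx-manager.example.local:443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'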
When doing an upgrade of vRA from 8.0 to 8.0.1 via LCM, you might encounter this error: Disk space on root partition (/) on VM Disk 1 (/dev/sda4) should have at least 20 GB of free disk space. This is because the initial installation of vRA ships with a disk that is too small for the upgrade. The issue is documented in the Known Issues for vRA 8.0.1 here. To resolve, WITHOUT POWERING OFF your virtual appliance, go to each of the vRA…
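The excerpt ends there, but the general shape of the fix (a sketch based on my reading of the vRA 8.0.1 known-issues guidance; the appliance name, disk size, and the vracli helper should be verified against the official doc) is to grow VM Disk 1 while the appliance stays powered on, then expand the partitions from inside the appliance:
# Grow the first disk from vSphere (govc shown here; the vSphere UI works just as well)
govc vm.disk.change -vm vra-appliance-01 -disk.label "Hard disk 1" -size 80G
# On each vRA appliance, expand the filesystem to use the newly added space
vracli disk-mgr resize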