Encountered an issue after a power failure which caused all the nodes of the Identity Manager cluster to reboot. The error I'm getting is the following: Error 500: org.hibernate.exception.GenericJDBCException: could not prepare statement. Bookmark this KB as it explains how the internal Postgres cluster works: basically, if all the nodes failed, you need to manually bring up the VIP of the Postgres cluster
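As a quick triage sketch (assuming the embedded database is a pgpool-managed Postgres cluster with a floating delegate IP; eth0 and the VIP value below are placeholders), you can check on each Identity Manager node whether the VIP and database processes came back after the reboot:

# run on each Identity Manager node
ip addr show eth0 | grep <delegate-ip>     # does this node currently hold the floating VIP?
ps -ef | grep -E 'pgpool|postgres'         # did the database processes come back up?
ss -lntp | grep 5432                       # is Postgres listening on the default port?

If no node holds the VIP, follow the KB to elect a master and bring the delegate IP up manually.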
Making a post on this as I experienced it first hand. The issue is that after a reboot, some pods will not run, specifically vco and pg. This can be verified by executing the following command on one of the nodes: kubectl get pods --all-namespaces You'll notice some pods will be in a CrashLoopBackOff state. To resolve, the following KB will help: https://kb.vmware.com/s/article/78235 For new installs: https://kb.vmware.com/s/
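Before jumping into the KB, a few kubectl commands help confirm which pods are stuck and why (the prelude namespace and the pod name below are just examples; use whatever namespace your failing pods report):

kubectl get pods --all-namespaces | grep -v Running   # list anything that is not Running
kubectl -n prelude describe pod <pod-name>            # check the Events section for the crash reason
kubectl -n prelude logs <pod-name> --previous         # logs from the last crashed container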
With the recent ncov19 situation, this gave me time to start the long-overdue updates to my site, which include: migrating to a new (beefier) VPS server, end-to-end SSL certificates between the CDN and the VPS, and migrating from docker-compose to Ansible for an easier rebuild of the site. Overall, I'm happy with the results.. and with that said, I'll create more detailed steps on the learnings from this and what you can do with
Unlike NSX-V, logging configuration in NSX-T is done manually. Syslog configurations are not propagated to objects (Edge, Transport Nodes) created from the Manager (this is true as of the current version, NSX-T 2.5.1). Anyway, here are the steps on how to configure it: Manager / Edge Nodes: SSH to the Management IP (I'm using root). Switch to admin to start working with the NSX CLI (cmd: su admin). Issue the command: set-loggin
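For reference, here's a minimal sketch of the syslog exporter commands I'd expect once in the NSX CLI (the syslog server IP, protocol, and level are placeholders; check them against your NSX-T version's CLI reference):

su admin
set logging-server <syslog-server-ip> proto udp level info
get logging-servers     # verify the exporter was added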
I was with GSS Support today troubleshooting an NSX issue on one of my engagements. He did a packet trace from the ESXi hosts to see if there is traffic leaving/entering the physical NIC, which is really nice. This is helpful in establishing whether the issue is in the environment or not. Here's the command. Receive: pktcap-uw --uplink vmnic0 --capture UplinkRcvKernel -o - | tcpdump-uw -nr - Send
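Putting both directions together for reference (the send-side capture point name is my assumption, mirroring the receive-side kernel capture point; pktcap-uw -A lists the capture points available on your build):

# Receive: frames entering the host through vmnic0
pktcap-uw --uplink vmnic0 --capture UplinkRcvKernel -o - | tcpdump-uw -nr -
# Send: frames leaving the host through vmnic0 (assumed capture point name)
pktcap-uw --uplink vmnic0 --capture UplinkSndKernel -o - | tcpdump-uw -nr -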
Unable to validate the provided access credentials: Failed to validate credentials. Error: java.security.cert.CertificateException: No subject alternative DNS name matching <nsx> found. Cloud account: null Task: /provisioning/endpoint-tasks/d3f06b7ab13aec7559c1458d6fa20 Got the above error when trying to add an NSX-V cloud account to vRealize Automation 8. Issue: it's because the self-signed certificate of the
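A quick way to see what the validation is complaining about is to inspect the SAN entries on the certificate the NSX Manager presents (the FQDN below is a placeholder); the name you use in the cloud account needs to appear there:

echo | openssl s_client -connect <nsx-manager-fqdn>:443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'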
When doing an upgrade of vRA from 8.0 to 8.0.1 via LCM, you might encounter this error: Disk space on root partition (/) on VM Disk 1 (/dev/sda4) should have atleast 20 GB of free disk space. This is because the initial installation of vRA provisions a disk that is too small for the upgrade. The issue is documented in the Known Issues for vRA 8.0.1 here. To resolve, WITHOUT POWERING OFF your virtual appliance, go to each of the vRA
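Not the KB procedure itself, just a generic sketch of how I'd confirm the appliance sees a hot-expanded disk while it stays powered on (device names are assumptions based on the /dev/sda4 in the error; the actual resize steps are in the Known Issues doc):

df -h /                                      # how much free space root currently has
echo 1 > /sys/class/block/sda/device/rescan  # ask the kernel to re-read the (hot-expanded) disk size
lsblk /dev/sda                               # confirm the new size is visible before resizing the partition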
Here's a quick how-to on updating solutions managed by VMware Lifecycle Manager. Pre-requisites: the solution must be managed by VMware Lifecycle Manager, your My VMware account is added to LCM, and the binaries are downloaded under Settings -> Binary Mapping. Procedure: Log in to LCM and go to Settings -> Update Product Support Pack. If LCM has an internet connection, initiate Check Support Packs Online. Wait for the process to complete. Go to requ
Ok. This took a while to find, so might as well blog about it. This application rule redirects ALL HTTP traffic to HTTPS using the same URI: redirect scheme https if !{ ssl_fc } To use this, you just need to create the virtual server that serves HTTP, attach the application rule containing the string, and you're all set 🙂 Enjoy!
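To confirm the rule is doing its job, a simple test from any client works (the VIP and URI are placeholders; redirect scheme https returns a 302 by default):

curl -I http://<http-vip>/some/uri
# expect: HTTP/1.1 302 ... with Location: https://<http-vip>/some/uri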
vRealize Automation 8 is out! This is a big release as it marks feature parity with VMware Cloud Automation Service (the SaaS offering). In addition, the new architecture eliminates the need for Windows Servers. For this post, I'll document what's needed for an Enterprise install of vRA 8. High-level diagram: Components: 1 x Lifecycle Manager, 3 x Identity Manager, 3 x vRealize Automation Appliance, 2 x LB to handle IDM and