Ok, so it’s been a while since my last post and I just noticed I have a lot of stuff in my drafts. Anyway, here’s a quick post on how to solve the Supervisor Namespace not showing up in the kubectl CLI when trying to log in to TKGS. The issue: if you run kubectl vsphere login on a machine that already has a kubeconfig configured, changing context won’t work. I’ve been doing a lot of Kubernetes and have collect
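A minimal sketch of the usual workaround, assuming the fix is to isolate the Supervisor cluster’s kubeconfig from the pre-existing one — the paths and the final login command here are illustrative, not taken from the post:

```shell
# Hypothetical workaround sketch: keep the TKGS contexts in their own
# kubeconfig so they are not shadowed by an existing ~/.kube/config.
mkdir -p "$HOME/.kube"

# Back up any pre-existing kubeconfig (skipped if none exists)
if [ -f "$HOME/.kube/config" ]; then
  cp "$HOME/.kube/config" "$HOME/.kube/config.bak"
fi

# Point this CLI session at a dedicated file for the Supervisor cluster
export KUBECONFIG="$HOME/.kube/config-tkgs"
echo "Using KUBECONFIG=$KUBECONFIG"

# Then log in as usual, e.g.:
#   kubectl vsphere login --server=<supervisor-ip> --vsphere-username <user>
#   kubectl config get-contexts
```

With `KUBECONFIG` pointed at a fresh file, the contexts written by the vSphere plugin are the only ones in play, so `kubectl config use-context` behaves as expected.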
I needed to change the configuration of the AVI / NSX ALB that is powering my TKG 1.4 cluster. The change is a downgrade, since I’m seeing weird behavior on the newer version of AVI I was using. In hindsight, I could have easily just recreated everything in my TKG cluster as it’s all scripted/templated already, but doing these steps will help you understand how the integration works. So let’s get started: In the Managem
Last February, I blogged about how to use Inlets, which allows you to expose your on-prem Kubernetes cluster to the big bad internet using an exit node, which is a public VPS. It works as it claims, but it may not be for everyone as you need to pay extra $/mo for the public VPS. Like anything in IT, there are many ways to skin a cat. Now, I’ll be detailing another way to achieve this using Cloudflare Argo Tunnel w
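The general shape of the approach can be sketched with a cloudflared tunnel configuration; the tunnel ID, hostname, and in-cluster service address below are placeholders, not values from the post:

```yaml
# Hypothetical ~/.cloudflared/config.yml for a tunnel into the cluster.
# <TUNNEL-ID>, the hostname, and the service URL are all placeholders.
tunnel: <TUNNEL-ID>
credentials-file: /etc/cloudflared/<TUNNEL-ID>.json
ingress:
  # Route public traffic for this hostname to an in-cluster service
  - hostname: app.example.com
    service: http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:80
  # A catch-all rule is required at the end of the ingress list
  - service: http_status:404
```

Since cloudflared dials out to Cloudflare’s edge, no inbound port or public IP is needed on the home network — that’s the part that replaces the paid VPS exit node.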
NOTE: This post is based on Tanzu Kubernetes Grid. If you are using another Kubernetes release, installing/using the AVI Kubernetes Operator manually should work. With the release of Tanzu Kubernetes Grid (TKG 1.3), the AVI Kubernetes Operator can be pre-configured as part of Kubernetes cluster creation. This helps streamline setting up Type: LoadBalancer services, especially for on-prem Kubernetes installs. This is HUGE as it removes th
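As a sketch, enabling AKO at cluster-creation time is done through variables in the TKG cluster configuration file. The variable names below follow TKG 1.3’s documented AVI settings, but all values are placeholders — verify both names and values against your TKG version’s docs:

```yaml
# Hypothetical excerpt from a TKG 1.3 cluster configuration file.
# All values are placeholders; check variable names against the TKG docs.
AVI_ENABLE: "true"
AVI_CONTROLLER: avi-controller.example.com
AVI_USERNAME: admin
AVI_PASSWORD: <password>
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VIP-Network          # network AVI uses for VIPs
AVI_DATA_NETWORK_CIDR: 10.0.50.0/24
AVI_CA_DATA_B64: <base64-encoded-controller-CA>
```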
For this post, I’ll document how to set up a Harbor registry using Traefik as an ingress controller with a valid certificate from Let’s Encrypt. Documentation around the topic is scattered across different places, and people just assume you’ll figure out the trivial details. So without further ado… let’s start with the quick prerequisites. Prerequisites As an image registry, Harbor needs to have a vali
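Tying the pieces together, a Harbor Ingress routed through Traefik with a cert-manager-issued Let’s Encrypt certificate could look roughly like this — the hostname, backend service name, and issuer name are assumptions (and the Harbor Helm chart can also render an Ingress for you):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: harbor
  annotations:
    # assumes a cert-manager ClusterIssuer named letsencrypt-prod exists
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  rules:
    - host: harbor.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: harbor-portal   # name depends on the Harbor chart
                port:
                  number: 80
  tls:
    - hosts:
        - harbor.example.com
      secretName: harbor-tls          # cert-manager stores the cert here
```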
TKG 1.1.3 is out, and with it comes an exciting change – NFS tools are now included in PhotonOS! This is big, as it opens up out-of-the-box integration with shared storage. Previously, you needed to mess with PhotonOS internals manually to make use of NFS for pods… and yeah, there’s a new K8s version too. Now – time for an upgrade. Before that, let’s do the pre-work: Upload both TKG OVAs (Kubernetes and HAProxy) and mark it
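With the NFS client tooling on the nodes, pods can consume NFS-backed volumes out of the box; a minimal sketch of a PersistentVolume, where the server IP and export path are placeholders:

```yaml
# Hypothetical NFS-backed PersistentVolume; server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # NFS lets many nodes mount the same export
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10     # placeholder NFS server
    path: /export/k8s        # placeholder export path
```

A PersistentVolumeClaim bound to this PV is then mounted into pods the usual way; previously the `mount.nfs` helper missing from the node OS would make that mount fail.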
Since last week, I’ve been running Harbor using a self-signed certificate. This is okay for home-lab purposes but annoying once you start integrating with Kubernetes, because you need to modify each node to trust the self-signed cert to be able to push/pull images – and with TKG providing scale-out K8s installations, this is a headache to integrate. To solve this, we can use Let’s Encrypt to provi
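The Let’s Encrypt side is typically handled by cert-manager; a sketch of an ACME ClusterIssuer follows, where the issuer name and email are placeholders and the solver depends on your ingress setup:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com           # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-account-key    # ACME account key stored here
    solvers:
      - http01:
          ingress:
            class: traefik             # depends on your ingress controller
```

Because the resulting certificate chains to a publicly trusted CA, nodes trust it without any per-node modification — which is the whole point versus the self-signed cert.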
During the weekend, I wanted to try creating a CI/CD pipeline for the web applications I’ve been developing (details in a separate post). Given that the only experience I have with such technologies (CI/CD pipelines) is seeing them on marketing slides, this was an opportunity for me to learn and document my experience. Hello Jenkins When you think CI/CD, you’ll always come across Jenkins. Now, whil
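For flavor, a bare-bones declarative Jenkinsfile for such a pipeline might look like this — the stage names and shell commands are placeholders, not the pipeline from this post:

```groovy
// Hypothetical minimal declarative pipeline; commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm myapp:${BUILD_NUMBER} ./run-tests.sh'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
            }
        }
    }
}
```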
This post documents my experience using the Ansible modules for NSX-T: https://github.com/vmware/ansible-for-nsxt Prerequisites The following steps were undertaken on a control VM where the Ansible playbook will be executed. Install ovftool I’m using Ubuntu 18.04 and have downloaded the ovftool .bundle from VMware. After uploading the file, issue: sh VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle pip3 install --upgrade
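A playbook task using one of the repo’s modules might be sketched like this; the module and parameter names should be verified against the examples shipped in ansible-for-nsxt, and every value below is a placeholder:

```yaml
# Hypothetical playbook sketch; verify module/parameter names against
# the examples in the ansible-for-nsxt repository.
- hosts: localhost
  connection: local
  tasks:
    - name: Create an overlay transport zone
      nsxt_transport_zones:
        hostname: nsx-manager.example.com   # placeholder NSX Manager FQDN
        username: admin
        password: "{{ nsx_password }}"
        validate_certs: false
        display_name: tz-overlay
        transport_type: OVERLAY
        host_switch_name: nvds-1
        state: present
```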
The following post describes how to manually update vRealize Automation from 8.0.x to 8.1. Most of the guides that tackle the topic assume you are directly connected to the internet to pull the upgrade binaries. Unfortunately, this is not the case in some environments where the solution is deployed in an air-gapped setup. With that said, here are the high-level steps on how to bring the solution up to 8