DISCLAIMER: I'm not affiliated with inlets/OpenFaaS. I bought an inlets-pro license on my own as it's a cool piece of tech that addresses limitations in my current home-lab setup. As with anyone running on-premises Kubernetes, exposing services internally is a straightforward process: set up a Service and an Ingress, and voila! Your application is consumable by anyone inside your network. Now, to have it accessible…
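To make that internal baseline concrete, here's a minimal sketch of the Service/Ingress pair (app name, port, and hostname are placeholders, not from the post):

```yaml
# Expose a hypothetical app inside the cluster only.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80          # Service port
      targetPort: 8080  # container port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.lab.local   # internal DNS name (placeholder)
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```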
New year.. new stack (get it?). For this year, I wanted to learn a new configuration management tool to replace the one I've been using at home. Although the Ansible CLI has been good and dependable, AWX has left a bad taste: its high resource utilization and cumbersome Kubernetes installation are good reasons to look at other options. Good thing being with VMware allows us to try the software that's currently in our solutions…
Happy New Year! Before the end of 2020, I managed to pass the recently released Certified Kubernetes Security Specialist (CKS) exam. It took me two tries since the domain is a whole new beast compared to the CKA and CKAD. In addition, studying for it was a challenge since it had only been officially released in November 2020. For people wondering how to prepare, here are some general tips for the exam…
A quick demonstration of how to use tkg-cli and GitLab CI/CD to provision Kubernetes clusters. This allows anyone to provision, scale, or delete a Kubernetes cluster just by committing a cluster definition to git, a common use case when employing GitOps. Overview of how it's done: a Python script gets executed as part of the GitLab CI/CD pipeline and does a git diff to determine the appropriate action to perform. Any files…
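As a rough sketch of what such a pipeline could look like (the script name, directory layout, and image are my assumptions, not the post's actual files):

```yaml
# .gitlab-ci.yml sketch: each commit to main re-evaluates cluster definitions.
stages:
  - provision

provision-clusters:
  stage: provision
  image: python:3.9
  script:
    # List added/modified/deleted cluster definitions since the last commit...
    - git diff --name-status HEAD~1 HEAD -- clusters/
    # ...then let a (hypothetical) script map those changes to tkg actions.
    - python3 reconcile_clusters.py
  only:
    - main
```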
If you're reading this, you're either sweating because you can't recover data from a pod that was using a PV, or looking for ways to safely delete pods without affecting the data stored in a PV. Either way, I came across the same dilemma while migrating my apps to Argo CD. It took a while to find the answer, so I'm documenting it for anyone who needs the solution. If the PV is already released, skip…
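The usual safeguard in this situation (a sketch of the common approach, not necessarily the post's exact steps) is to make sure the PV's reclaim policy is Retain before touching the pod or PVC:

```yaml
# Relevant fragment of a PersistentVolume spec. With Retain, deleting the
# bound PVC only releases the PV; the underlying data is kept.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-data   # placeholder PV name
spec:
  persistentVolumeReclaimPolicy: Retain   # instead of Delete
  # ...capacity, accessModes, and the storage backend config stay as-is.
  # The same change can be applied in place with:
  # kubectl patch pv my-data -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```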
Ever since I started running a home-lab, one service that has been a staple for me is Pi-hole. It's a fast, reliable, low-footprint DNS server that also blocks ads. More information about Pi-hole here: https://pi-hole.net/ I've tried different iterations of installing it: from a bare Linux install to my current setup, running as a container on a stand-alone Docker host. Now, I've started switching…
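For reference, the stand-alone Docker setup can be captured in a compose file like this (timezone, password, and volume paths are placeholders):

```yaml
# docker-compose.yml sketch for Pi-hole on a single Docker host.
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"   # DNS
      - "53:53/udp"   # DNS
      - "80:80/tcp"   # admin UI
    environment:
      TZ: "Asia/Manila"        # placeholder
      WEBPASSWORD: "changeme"  # placeholder admin password
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```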
As a follow-up to my previous Jenkins install, I decided to use GitLab to run my CI/CD pipeline instead, for the following reasons: code and jobs live in one place, which feels more natural since execution is automatically triggered on each code commit; no need to mess with plugins; and it's way easier to set up. Now, to have a real-world experience (or at least close to it) I needed a good use case to apply it to…
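To show how little is needed before commits start triggering runs, here's a minimal sketch of a pipeline file (job contents are illustrative only):

```yaml
# .gitlab-ci.yml sketch: pushing any commit runs this pipeline automatically.
stages:
  - test

unit-tests:
  stage: test
  image: python:3.9
  script:
    - pip install -r requirements.txt   # hypothetical project layout
    - pytest
```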
TKG 1.1.3 is out, and with it comes an exciting change: NFS tools are now included in PhotonOS! This is big, as it opens up out-of-the-box integration with shared storage. Previously, you had to mess with PhotonOS internals manually to make use of NFS for pods… and yeah, there's a new K8s version too. Now, time for an upgrade. Before that, let's do the pre-work: upload both TKG OVAs (Kubernetes and HAProxy) and mark them as templates…
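To illustrate what the bundled NFS tools unlock, nodes can now mount an NFS-backed volume directly; a sketch (server address and export path are placeholders):

```yaml
# A PersistentVolume backed by an NFS export, mountable by pods
# without any manual changes to PhotonOS.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany   # NFS allows many pods to share the volume
  nfs:
    server: 192.168.1.50   # placeholder NFS server
    path: /export/k8s      # placeholder export path
```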
Since last week, I've been running Harbor with a self-signed certificate. This is okay for home-lab purposes but annoying once you start integrating with Kubernetes, because every node has to be modified to trust the self-signed cert before it can push/pull images; and with TKG providing scale-out K8s installations, this is a headache to keep up with. To solve this, we can use Let's Encrypt to provision…
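A common way to wire this up in-cluster is cert-manager with an ACME ClusterIssuer; a minimal sketch, assuming cert-manager is installed and Traefik serves the HTTP-01 challenge (email and names are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com   # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod   # secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: traefik     # assumes Traefik answers the challenge
```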
For this post, I'll be documenting how to run Harbor behind Traefik in a Kubernetes installation. Although the Harbor helm chart can be installed with its bundled nginx ingress controller, I already have an ingress controller running in my cluster and prefer to use that instead. (Also, Traefik is way easier to configure :P) Now, to install, configure the following in the values.yaml of the Harbor helm chart…
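The relevant values look roughly like this (hostname is a placeholder; the key layout follows the Harbor chart as I recall it, so double-check against your chart version):

```yaml
# values.yaml fragment: route Harbor through an existing Traefik ingress.
expose:
  type: ingress
  tls:
    enabled: true
  ingress:
    hosts:
      core: harbor.example.com   # placeholder hostname
    annotations:
      kubernetes.io/ingress.class: traefik
externalURL: https://harbor.example.com
```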