Transcription

LFS258
Kubernetes Fundamentals
Version 2020-04-20

Copyright the Linux Foundation 2020. All rights reserved.

The training materials provided or developed by The Linux Foundation in connection with the training services are protected by copyright and other intellectual property rights.

Open source code incorporated herein may have other copyright holders and is used pursuant to the applicable open source license.

The training materials are provided for individual use by participants in the form in which they are provided. They may not be copied, modified, distributed to non-participants or used to provide training to others without the prior written consent of The Linux Foundation.

No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without express prior written consent.

Published by:

the Linux Foundation
https://www.linuxfoundation.org

No representations or warranties are made with respect to the contents or use of this material, and any express or implied warranties of merchantability or fitness for any particular purpose are specifically disclaimed.

Although third-party application software packages may be referenced herein, this is for demonstration purposes only and shall not constitute an endorsement of any of these software applications.

Linux is a registered trademark of Linus Torvalds. Other trademarks within this course material are the property of their respective owners.

If there are any questions about proper and fair use of the material herein, please contact: [email protected]

Contents

1  Introduction
   1.1  Labs
2  Basics of Kubernetes
   2.1  Labs
3  Installation and Configuration
   3.1  Labs
4  Kubernetes Architecture
   4.1  Labs
5  APIs and Access
   5.1  Labs
6  API Objects
   6.1  Labs
7  Managing State With Deployments
   7.1  Labs
8  Services
   8.1  Labs
9  Volumes and Data
   9.1  Labs
10 Ingress
   10.1 Labs
11 Scheduling
   11.1 Labs
12 Logging and Troubleshooting
   12.1 Labs
13 Custom Resource Definition
   13.1 Labs
14 Helm
   14.1 Labs
15 Security
   15.1 Labs
16 High Availability
   16.1 Labs

List of Figures

3.1  External Access via Browser
10.1 Accessing the API
12.1 External Access via Browser
12.2 External Access via Browser
12.3 External Access via Browser
16.1 Initial HAProxy Status
16.2 Multiple HAProxy Status
16.3 HAProxy Down Status


Chapter 1
Introduction

1.1 Labs

Exercise 1.1: Configuring the System for sudo

It is very dangerous to run a root shell unless absolutely necessary: a single typo or other mistake can cause serious (even fatal) damage.

Thus, the sensible procedure is to configure things such that single commands may be run with superuser privilege, by using the sudo mechanism. With sudo the user only needs to know their own password and never needs to know the root password.

If you are using a distribution such as Ubuntu, you may not need to do this lab to get sudo configured properly for the course. However, you should still make sure you understand the procedure.

To check if your system is already configured to let the user account you are using run sudo, just do a simple command like:

$ sudo ls

You should be prompted for your user password and then the command should execute. If instead, you get an error message you need to execute the following procedure.

Launch a root shell by typing su and then giving the root password, not your user password.

On all recent Linux distributions you should navigate to the /etc/sudoers.d subdirectory and create a file, usually with the name of the user to whom root wishes to grant sudo access. However, this convention is not actually necessary as sudo will scan all files in this directory as needed. The file can simply contain:

student ALL=(ALL) ALL

if the user is student.

An older practice (which certainly still works) is to add such a line at the end of the file /etc/sudoers. It is best to do so using the visudo program, which is careful about making sure you use the right syntax in your edit.

You probably also need to set proper permissions on the file by typing:

$ sudo chmod 440 /etc/sudoers.d/student

(Note some Linux distributions may require 400 instead of 440 for the permissions.)

After you have done these steps, exit the root shell by typing exit and then try to do sudo ls again.

There are many other ways an administrator can configure sudo, including specifying only certain permissions for certain users, limiting searched paths, etc. The /etc/sudoers file is very well self-documented.

However, there is one more setting we highly recommend you do, even if your system already has sudo configured. Most distributions establish a different path for finding executables for normal users as compared to root users. In particular the directories /sbin and /usr/sbin are not searched, since sudo inherits the PATH of the user, not the full root user.

Thus, in this course we would have to be constantly reminding you of the full path to many system administration utilities; any enhancement to security is probably not worth the extra typing and figuring out which directories these programs are in. Consequently, we suggest you add the following line to the .bashrc file in your home directory:

PATH=$PATH:/usr/sbin:/sbin

If you log out and then log in again (you don't have to reboot) this will be fully effective.
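To recap, the entire configuration can be reduced to a few commands. The sketch below assumes the user account is named student; the first three commands run in the root shell (# prompt) and the last two run as the regular user ($ prompt). Adjust the user name to match your own account.

# echo 'student ALL=(ALL) ALL' > /etc/sudoers.d/student
# chmod 440 /etc/sudoers.d/student
# exit
$ sudo ls
$ echo 'PATH=$PATH:/usr/sbin:/sbin' >> ~/.bashrc

After logging out and back in, both sudo and the extended PATH should be in effect.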

Chapter 2
Basics of Kubernetes

2.1 Labs

Exercise 2.1: View Online Resources

Visit kubernetes.io

With such a fast changing project, it is important to keep track of updates. The main place to find documentation of the current version is https://kubernetes.io/.

1. Open a browser and visit the https://kubernetes.io/ website.

2. In the upper right hand corner, use the drop down to view the versions available. It will say something like v1.12.

3. Select the top level link for Documentation. The links on the left of the page can be helpful in navigation.

4. As time permits, navigate around other sub-pages such as SETUP, CONCEPTS, and TASKS to become familiar with the layout.

Track Kubernetes Issues

There are hundreds, perhaps thousands, of people working on Kubernetes every day. With that many people working in parallel there are good resources to see if others are experiencing a similar outage. Both the source code as well as feature and issue tracking are currently on github.com.

1. To view the main page use your browser to visit https://github.com/kubernetes/kubernetes/

2. Click on various sub-directories and view the basic information available.

3. Update your URL to point to https://github.com/kubernetes/kubernetes/issues. You should see a series of issues, feature requests, and support communication.

4. In the search box you probably see some existing text like is:issue is:open, which allows you to filter on the kind of information you would like to see. Append the search string to read is:issue is:open label:kind/bug, then press enter.

5. You should now see bugs in descending date order. Across the top of the issues a menu area allows you to view entries by author, labels, projects, milestones, and assignee as well. Take a moment to view the various other selection criteria.

6. Sometimes you may want to exclude a kind of output. Update the URL again, but precede the label with a minus sign, like is:issue is:open -label:kind/bug. Now you see everything except bug reports. A few example filter strings are collected below for reference.
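The author qualifier and the user name someuser shown here are illustrative additions, not part of the exercise.

is:issue is:open                      (all open issues)
is:issue is:open label:kind/bug       (only open bug reports)
is:issue is:open -label:kind/bug      (everything except bug reports)
is:issue is:open author:someuser      (open issues filed by a particular user)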

Chapter 3
Installation and Configuration

3.1 Labs

Exercise 3.1: Install Kubernetes

Overview

There are several Kubernetes installation tools provided by various vendors. In this lab we will learn to use kubeadm. As a community-supported independent tool, it is planned to become the primary manner to build a Kubernetes cluster.

Platforms: GCP, AWS, VirtualBox, etc

The labs were written using Ubuntu instances running on Google Cloud Platform (GCP). They have been written to be vendor-agnostic so could run on AWS, local hardware, or inside of virtualization to give you the most flexibility and options. Each platform will have different access methods and considerations. As of v1.18.1 the minimum (as in barely works) size for VirtualBox is 3 vCPU/4G memory/5G minimal OS for the master and 1 vCPU/2G memory/5G minimal OS for a worker node.

If using your own equipment you will have to disable swap on every node. There may be other requirements which will be shown as warnings or errors when using the kubeadm command. While most commands are run as a regular user, there are some which require root privilege. Please configure sudo access as shown in a previous lab. If you are accessing the nodes remotely, such as with GCP or AWS, you will need to use an SSH client such as a local terminal or PuTTY if not using Linux or a Mac. You can download PuTTY from www.putty.org. You would also require a .pem or .ppk file to access the nodes. Each cloud provider will have a process to download or create this file. If attending in-person instructor led training the file will be made available during class.

Very Important

Please disable any firewalls while learning Kubernetes. While there is a list of required ports for communication between components, the list may not be as complete as necessary. If using GCP you can add a rule to the project which allows all traffic to all ports.

Should you be using VirtualBox, be aware that inter-VM networking will need to be set to promiscuous mode.

In the following exercise we will install Kubernetes on a single node then grow the cluster, adding more compute resources. Both nodes used are the same size, providing 2 vCPUs and 7.5G of memory. Smaller nodes could be used, but would run slower, and may have strange errors.

YAML files and White Space

Various exercises will use YAML files, which are included in the text. You are encouraged to write the files when possible, as the syntax of YAML has white space indentation requirements that are important to learn. An important note: do not use tabs in your YAML files, white space only. Indentation matters.

If using a PDF, the use of copy and paste often does not paste the single quote correctly. It pastes as a back-quote instead. You will need to modify it by hand. The files have also been made available as a compressed tar file. You can view the resources by navigating to the URL given for the course. To login use user: LFtraining and a password of: Penguin2014.

Once you find the name and link of the current file, which will change as the course updates, use wget to download the file into your node from the command line, then expand it like this:

student@lfs458-node-1a0a:~$ wget <course-resource-URL>/LFS258_V2020-04-20_SOLUTIONS.tar.bz2 \
    --user=LFtraining --password=Penguin2014

student@lfs458-node-1a0a:~$ tar -xvf LFS258_V2020-04-20_SOLUTIONS.tar.bz2

(Note: depending on your PDF viewer, if you are cutting and pasting the above instructions, the underscores may disappear and be replaced by spaces, so you may have to edit the command line by hand!)

Bionic

While Ubuntu 18 bionic has become the typical version to deploy, the Kubernetes repository does not yet have matching binaries at the time of this writing. The xenial binaries can be used until an update is provided.

Install Kubernetes

Log into your nodes. If attending in-person instructor led training the node IP addresses will be provided by the instructor. You will need to use a .pem or .ppk key for access, depending on if you are using ssh from a terminal or PuTTY. The instructor will provide this to you.

1. Open a terminal session on your first node. For example, connect via PuTTY or SSH session to the first GCP node. The user name may be different than the one shown, student. The IP used in the example will be different than the one you will use.

[student@laptop ~]$ ssh -i LFS458.pem student@35.226.100.87
The authenticity of host '54.214.214.156 (35.226.100.87)' can't be established.
ECDSA key fingerprint is SHA256:IPvznbkx93/Wc...
ECDSA key fingerprint is ...2:d3:95:08:08:4a:74:1b:f6:e1:9f.
Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '35.226.100.87' (ECDSA) to the list of known hosts.
<output_omitted>

2. Become root and update and upgrade the system. You may be asked a few questions. Allow restarts and keep the local version currently installed. Which would be a yes then a 2.

student@lfs458-node-1a0a:~$ sudo -i
root@lfs458-node-1a0a:~# apt-get update && apt-get upgrade -y
<output_omitted>
You can choose this option to avoid being prompted; instead,
all necessary restarts will be done for you automatically
so you can avoid being asked questions on each library upgrade.
Restart services during package upgrades without asking? [yes/no] yes
<output_omitted>
A new version (/tmp/fileEbke6q) of configuration file /etc/ssh/sshd_config is
available, but the version installed currently has been locally modified.
  1. install the package maintainer's version
  2. keep the local version currently installed
  3. show the differences between the versions
  4. show a side-by-side difference between the versions
  5. show a 3-way difference between available versions
  6. do a 3-way merge between available versions
  7. start a new shell to examine the situation
What do you want to do about modified configuration file sshd_config? 2
<output_omitted>

3. Install a text editor like nano, vim, or emacs. Any will do; the labs use a popular option, vim.

root@lfs458-node-1a0a:~# apt-get install -y vim
<output_omitted>

4. The main choices for a container environment are Docker and cri-o. We suggest Docker for class, as cri-o is not yet the default when building the cluster with kubeadm on Ubuntu.

The cri-o engine is the default in Red Hat products and is being implemented by others. It has not yet gained wide usage in production, but is included here if you want to work with it. Installing Docker is a single command. At the moment it takes ten steps to install and configure cri-o.

Very Important

If you want extra challenge use cri-o. Otherwise install Docker.

Please note: install Docker OR cri-o. If both are installed the kubeadm init process search pattern will use Docker. Also be aware that if you choose to use cri-o you may encounter different output than shown in the book.

(a) If using Docker:

root@lfs458-node-1a0a:~# apt-get install -y docker.io
<output_omitted>

(b) If using CRI-O:

i. Use the modprobe command to load the overlay and the br_netfilter modules.

root@lfs458-node-1a0a:~# modprobe overlay
root@lfs458-node-1a0a:~# modprobe br_netfilter

ii. Create a sysctl config file to enable IP forwarding and netfilter settings persistently across reboots.

root@lfs458-node-1a0a:~# vim /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1

iii. Use the sysctl command to apply the config file.

root@lfs458-node-1a0a:~# sysctl --system
....
* Applying /etc/sysctl.d/99-kubernetes-cri.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...

iv. Install a dependent software package.

root@lfs458-node-1a0a:~# apt-get install -y software-properties-common
<output_omitted>

v. Add the CRI-O software repository. Press ENTER to continue, then update the metadata.

root@lfs458-node-1a0a:~# add-apt-repository ppa:projectatomic/ppa
....
Press [ENTER] to continue or Ctrl-c to cancel adding it.
....
root@lfs458-node-1a0a:~# apt-get update

vi. We can now install the cri-o software. Be aware the version may lag behind updates to Kubernetes itself.

root@lfs458-node-1a0a:~# apt-get install -y cri-o-1.15
<output_omitted>

vii. There is a hard coded path for the conmon binary which does not match Ubuntu 18.04. Update the crio.conf file to use the correct binary path.

root@lfs458-node-1a0a:~# which conmon
/usr/bin/conmon

viii. Edit the /etc/crio/crio.conf file to use the proper binary path. Also configure registries. Unlike Docker we must declare where to find images other than the core Kubernetes images. Be aware this can be done in a few places.

root@lfs458-node-1a0a:~# vim /etc/crio/crio.conf
....
# Path to the conmon binary, used for monitoring the OCI runtime.
conmon = "/usr/bin/conmon"     # -- Edit this line. Around line 91.
....
registries = [                 # -- Edit and add registries, such as:
        "docker.io",
        "quay.io",
]
....

ix. Enable cri-o and ensure it is running.

root@lfs458-node-1a0a:~# systemctl daemon-reload
root@lfs458-node-1a0a:~# systemctl enable crio
root@lfs458-node-1a0a:~# systemctl start crio
root@lfs458-node-1a0a:~# systemctl status crio

crio.service - Container Runtime Interface for OCI (CRI-O)
   Loaded: loaded (/usr/lib/systemd/system/crio.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-02-03 17:00:34 UTC; 7s ago
     Docs: https://github.com/cri-o/cri-o
....

x. Configure kubelet to understand how to interact with crio. The following would be one long line inside of the file. It is presented here on multiple lines for ease of reading.

root@lfs458-node-1a0a:~# vim /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true"
    --container-runtime=remote
    --cgroup-driver=systemd
    --container-runtime-endpoint='unix:///var/run/crio/crio.sock'
    --runtime-request-timeout=5m

5. Add a new repo for kubernetes. You could also download a tar file or use code from GitHub. Create the file and add an entry for the main repo for your distribution. We are using the Ubuntu 18.04 but the kubernetes-xenial repo of the software, also include the key word main. Note there are four sections to the entry.

root@lfs458-node-1a0a:~# vim /etc/apt/sources.list.d/kubernetes.list
deb  http://apt.kubernetes.io/  kubernetes-xenial  main

6. Add a GPG key for the packages. The command spans three lines. You can omit the backslash when you type. The OK is the expected output, not part of the command.

root@lfs458-node-1a0a:~# curl -s \
    https://packages.cloud.google.com/apt/doc/apt-key.gpg \
    | apt-key add -
OK

7. Update with the new repo declared, which will download updated repo information.

root@lfs458-node-1a0a:~# apt-get update
<output_omitted>

8. Install the software. There are regular releases, the newest of which can be used by omitting the equal sign and version information on the command line. Historically new versions have lots of changes and a good chance of a bug or five. As a result we will hold the software at the recent but stable version we install.

root@lfs458-node-1a0a:~# apt-get install -y \
    kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00
<output_omitted>
root@lfs458-node-1a0a:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

9. Deciding which pod network to use for Container Networking Interface (CNI) should take into account the expected demands on the cluster. There can be only one pod network per cluster, although the CNI-Genie project is trying to change this.

The network must allow container-to-container, pod-to-pod, pod-to-service, and external-to-service communications. As Docker uses host-private networking, using the docker0 virtual bridge and veth interfaces would require being on that host to communicate.

We will use Calico as a network plugin which will allow us to use Network Policies later in the course. Currently Calico does not deploy using CNI by default. Newer versions of Calico have included RBAC in the main file. Once downloaded look for the expected IPV4 range for containers to use in the configuration file.

root@lfs458-node-1a0a:~# wget https://docs.projectcalico.org/manifests/calico.yaml

10. Use less to page through the file. Look for the IPV4 pool assigned to the containers. There are many different configuration settings in this file. Take a moment to view the entire file. The CALICO_IPV4POOL_CIDR must match the value given to kubeadm init in the following step, whatever the value may be. Avoid conflicts with existing IP ranges of the instance.

root@lfs458-node-1a0a:~# less calico.yaml

calico.yaml
....
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within --cluster-cidr.
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
....

11. Find the IP address of the primary interface of the master server. The example below would be the ens4 interface and an IP of 10.128.0.3; yours may be different.

root@lfs458-node-1a0a:~# ip addr show
....
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:80:00:18 brd ff:ff:ff:ff:ff:ff
    inet 10.128.0.3/32 brd 10.128.0.3 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 fe80::4001:aff:fe80:18/64 scope link
       valid_lft forever preferred_lft forever
....

12. Add a local DNS alias for our master server. Edit the /etc/hosts file and add the above IP address and assign a name k8smaster.

root@lfs458-node-1a0a:~# vim /etc/hosts
10.128.0.3 k8smaster       # -- Add this line
127.0.0.1 localhost
....

13. Create a configuration file for the cluster. There are many options we could include, but will only set the control plane endpoint, software version to deploy and podSubnet values. After our cluster is initialized we will view other default values used. Be sure to use the node alias, not the IP so the network certificates will continue to work when we deploy a load balancer in a future lab.

root@lfs458-node-1a0a:~# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.1                 # -- Use the word stable for newest version
controlPlaneEndpoint: "k8smaster:6443"    # -- Use the node alias not the IP
networking:
  podSubnet: 192.168.0.0/16               # -- Match the IP range from the Calico config file
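Before running kubeadm init in the next step it can be worth a quick sanity check of the prerequisites mentioned earlier. The commands below are a minimal sketch, assuming Docker was chosen as the container engine; substitute crio if you installed CRI-O. swapon --show should print nothing when swap is disabled, and getent should return the address you added to /etc/hosts.

student@lfs458-node-1a0a:~$ swapon --show
student@lfs458-node-1a0a:~$ getent hosts k8smaster
10.128.0.3      k8smaster
student@lfs458-node-1a0a:~$ sudo systemctl is-active docker
active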

14. Initialize the master. Read through the output line by line. Expect the output to change as the software matures. At the end are configuration directions to run as a non-root user. The token is mentioned as well. This information can be found later with the kubeadm token list command. The output also directs you to create a pod network to the cluster, which will be our next step. Pass the network settings Calico has in its configuration file, found in the previous step. Please note: the output lists several commands which following exercise steps will use.

root@lfs458-node-1a0a:~# kubeadm init --config=kubeadm-config.yaml --upload-certs \
    | tee kubeadm-init.out             # Save output for future review

Please Note

What follows is output of kubeadm init. Read the next step prior to further typing.

[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the
    Docker cgroup driver. The recommended driver is "systemd".
<output_omitted>

You can now join any number of the control-plane node
running the following command on each as root:

  kubeadm join k8smaster:6443 --token vapzqi.et2p9zbkzk29wwth \
    --discovery-token-ca-cert-hash 3865aab9d0bca8ec9f8cd \
    --control-plane --certificate-key ce1818f642fab8

Please note that the certificate-key gives access to cluster sensitive
data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If
necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following
on each as root:

kubeadm join k8smaster:6443 --token vapzqi.et2p9zbkzk29wwth \
    --discovery-token-ca-cert-hash 3865aab9d0bca8ec9f8cd

15. As suggested in the directions at the end of the previous output we will allow a non-root user admin level access to the cluster. Take a quick look at the configuration file once it has been copied and the permissions changed.

root@lfs458-node-1a0a:~# exit
student@lfs458-node-1a0a:~$ mkdir -p $HOME/.kube
student@lfs458-node-1a0a:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
student@lfs458-node-1a0a:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
student@lfs458-node-1a0a:~$ less .kube/config
apiVersion: v1
clusters:
- cluster:
<output_omitted>
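At this point you can optionally confirm that kubectl works for the non-root user; this check is a sketch, not part of the lab steps. Until the network plugin is applied in the next step, the node is expected to report a NotReady status and the coredns pods will remain Pending.

student@lfs458-node-1a0a:~$ kubectl get node
student@lfs458-node-1a0a:~$ kubectl get pod -n kube-system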

16. Apply the network plugin configuration to your cluster. Remember to copy the file to the current, non-root user directory first.

student@lfs458-node-1a0a:~$ sudo cp /root/calico.yaml .
student@lfs458-node-1a0a:~$ kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
<output_omitted>

17. While many objects have short names, a kubectl command can be a lot to type. We will enable bash auto-completion. Begin by adding the settings to the current shell. Then update the ~/.bashrc file to make it persistent. Ensure the bash-completion package is installed. If it was not installed, log out then back in for the shell completion to work.

student@lfs458-node-1a0a:~$ sudo apt-get install bash-completion -y

<exit and log back in>

student@lfs458-node-1a0a:~$ source <(kubectl completion bash)
student@lfs458-node-1a0a:~$ echo "source <(kubectl completion bash)" >> ~/.bashrc

18. Test by describing the node again. Type the first three letters of the sub-command then type the Tab key. Auto-completion assumes the default namespace. Pass the namespace first to use auto-completion with a different namespace. By pressing Tab multiple times you will see a list of possible values. Continue typing until a unique name is used. First look at the current node (your node name may not start with lfs458-), then look at pods in the kube-system namespace. If you see an error instead, such as -bash: _get_comp_words_by_ref: command not found, revisit the previous step, install the software, log out and back in.

student@lfs458-node-1a0a:~$ kubectl des<Tab> n<Tab><Tab> lfs458-<Tab>
student@lfs458-node-1a0a:~$ kubectl -n kube-s<Tab> g<Tab> po<Tab>

19. View other values we could have included in the kubeadm-config.yaml file when creating the cluster.

student@lfs458-node-1a0a:~$ sudo kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
