CKA https://www.cncf.io/certification/cka/
Kind | Version |
---|---|
Pod | v1 |
Service | v1 |
ReplicaSet | apps/v1 |
DaemonSet | apps/v1 |
Deployment | apps/v1 |
ConfigMap | v1 |
Secret | v1 |
Role | rbac.authorization.k8s.io/v1 |
LimitRange | v1 |
RoleBinding | rbac.authorization.k8s.io/v1 |
NetworkPolicy | networking.k8s.io/v1 |
CertificateSigningRequest | certificates.k8s.io/v1beta1 |
PersistentVolume | v1 |
PersistentVolumeClaim | v1 |
kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only --limit=10 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key"
keys are stored under the /registry prefix, e.g. /registry/pods/<namespace>/<name>, etc.
/etc/systemd/system/kube-apiserver.service
/etc/kubernetes/manifests/kube-controller-manager.yaml
or (non-kube-admin-setup) /etc/systemd/system/kube-controller-manager.service
/etc/kubernetes/manifests/kube-scheduler.yaml
or (non-kube-admin-setup) /etc/systemd/system/kube-scheduler.service
/etc/systemd/system/kubelet.service
There are also multi-container pods: a single pod can run multiple containers (e.g., helper/sidecar containers), and containers in the same pod can reach each other via localhost.
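A minimal sketch of such a pod, assuming a made-up log-helper sidecar alongside nginx:
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper        # hypothetical name
spec:
  containers:
  - name: web
    image: nginx               # serves on port 80
  - name: log-helper           # hypothetical helper container
    image: busybox
    # the helper reaches the nginx container over localhost
    command: ['sh', '-c', 'while true; do wget -q -O - http://localhost:80 > /dev/null; sleep 10; done']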
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: nginx-container
    image: nginx
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rc
  labels:
    app: myapp
    tier: frontend
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      tier: frontend
kubectl replace -f rc.yml
kubectl scale --replicas=6 -f rc.yml
kubectl scale --replicas=6 replicaset myapp-rc
kubectl create namespace name
Service Types | Feature |
---|---|
NodePort | listens on a port on each node and forwards the traffic to the pods behind the service |
ClusterIP | virtual IP inside the cluster, for pod-to-pod communication |
LoadBalancer | provisions a load balancer on the cloud provider, which then distributes traffic to the pods |
spec.nodeName: node02
If spec.nodeName is not set and no scheduler is running, the pod stays in the Pending state; manually create a Binding YAML, convert it to JSON, and send it as the body of a POST request to http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/ (see the sketch below)
kubectl get pods --selector env=dev,tier=frontend
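A minimal sketch of that Binding and the POST request (the node name, $SERVER and $PODNAME are placeholders):
apiVersion: v1
kind: Binding
metadata:
  name: nginx                  # must match the pending pod's name
target:
  apiVersion: v1
  kind: Node
  name: node02                 # node to bind the pod to
# the same Binding converted to JSON and sent as the request body
curl --header "Content-Type: application/json" --request POST \
  --data '{"apiVersion":"v1","kind":"Binding","metadata":{"name":"nginx"},"target":{"apiVersion":"v1","kind":"Node","name":"node02"}}' \
  http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/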
NoSchedule (don't schedule new pods) | PreferNoSchedule (no guarantee) | NoExecute (will also evict existing pods)
kubectl taint node node01 app=blue:NoSchedule
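For a pod to be scheduled onto the tainted node it needs a matching toleration; a minimal sketch mirroring the taint above (pod name is made up):
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod               # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"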
kind: Pod
...
spec.nodeSelector:
  size: Large
kubectl label nodes node01 size=Large
kind: Pod
...
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium
          - key: size
            operator: NotIn
            values:
            - Small
          - key: good
            operator: Exists
spec.containers.resources:
  requests:
    memory: "1Gi"
    cpu: 1
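Expanded into a full manifest that shorthand looks roughly like this (the limits block is an illustrative addition):
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:                  # illustrative upper bounds
        memory: "2Gi"
        cpu: 2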
DaemonSet: makes sure a copy of the pod is running on every node; it uses node affinity under the hood to achieve this.
example
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
Static pods: the kubelet watches the /etc/kubernetes/manifests folder. Configure it either with --pod-manifest-path=/etc/kubernetes/manifests in kubelet.service, or with --config=kubeconfig.yaml, where kubeconfig.yaml contains staticPodPath: /etc/kubernetes/manifests.
Use ps -aux | grep .yaml to find the config file path, then cat that file and grep staticPodPath to find the folder holding those YAML files.
spec:
  containers:
  - command:
    - kube-scheduler
    - --leader-elect=false
    - --port=
    - --scheduler-name=my-scheduler
    - --lock-object-name=my-scheduler
...
then add spec.schedulerName to the pod's yaml
steps to find where to put the custom scheduler's static pod
$ cd /etc/systemd/system/kubelet.service.d
$ cat 10-kubeadm.conf | grep kubelet
$ cat /var/lib/kubelet/config.yaml | grep -i path
# copy and config the file
$ netstat -natulp | grep 10253 # until finding a port that is not being used
# then add `spec.schedulerName` to pods' yaml
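A minimal sketch of a pod that requests the custom scheduler (pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-custom           # hypothetical name
spec:
  schedulerName: my-scheduler  # matches --scheduler-name above
  containers:
  - name: nginx
    image: nginx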
minikube addons enable metrics-server
kubectl top pods
or kubectl top nodes
kubectl logs -f pod-name container-name
kubectl rollout status deployment/myapp-deployment
kubectl rollout history deployment/myapp-deployment
kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1
kubectl rollout undo deployment/myapp-deployment
kubectl create -f myapp-deployment.yml
kubectl get deployment
# upgrade
kubectl apply -f myapp-deployment.yml
kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1
# status
kubectl rollout status deployment/myapp-deployment
kubectl rollout history deployment/myapp-deployment
kubectl rollout undo deployment/myapp-deployment
kubectl create configmap app-config --from-literal=APP_COLOR=blue --from-literal=APP_MODE=prod
kubectl create configmap app-config --from-file=app_config.properties
create:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
inject into pod
--- env
envFrom:
- configMapRef:
    name: app-config
--- single env
env:
- name: APP_COLOR
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: APP_COLOR
--- volume
volumes:
- name: app-config-volume
  configMap:
    name: app-config
kubectl create secret generic secret-name --from-literal=DB_User=root --from-literal=DB_Password=passwooord
kubectl create secret generic secret-name --from-file=app_secret.properties
echo -n passwooord | base64  # encode, then add the encoded values to the Secret yaml
echo -n cGFzc3dvb29yZA== | base64 --decode  # to decode
config Secret yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_User: cm9vdA==
  DB_Password: cGFzc3dvb29yZA==
inject into pod
--- env
envFrom:
- secretRef:
    name: secret-name
--- single env
env:
- name: DB_Password
  valueFrom:
    secretKeyRef:
      name: secret-name
      key: DB_Password
--- volume
volumes:
- name: app-secret-volume
  secret:
    secretName: app-secret # (with a matching volumeMount this creates one file per key, e.g. /opt/app-secret-volumes/DB_User and /opt/app-secret-volumes/DB_Password)
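The files only show up once the volume is mounted; a minimal sketch of the matching volumeMounts entry (the mount path is an assumption):
containers:
- name: app                    # hypothetical container
  image: nginx
  volumeMounts:
  - name: app-secret-volume
    mountPath: /opt/app-secret-volumes   # each key becomes a file under this path
    readOnly: true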
initContainers (docs)
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
kubectl drain node-1    # moves the pods to other nodes and cordons the node
kubectl uncordon node-1 # re-enables (uncordons) scheduling on the node
kubectl cordon node-2   # mark unschedulable without evicting, e.g. if the node is running a critical service
upgrade one cluster process
$ kubectl drain node01
# plan on upgrading cluster
$ kubeadm upgrade plan
# install the target kubeadm version
$ apt install kubeadm=1.17.0-00
# upgrade the control plane
$ kubeadm upgrade apply v1.17.0
# install the target kubelet version
$ apt install kubelet=1.17.0-00
# upgrade the kubelet config for the node
$ kubeadm upgrade node config --kubelet-version v1.17.0
$ systemctl restart kubelet
$ kubectl uncordon node01
kubeadm upgrade node on a worker node is the equivalent of kubeadm upgrade apply on the master.
To upgrade the master node, run:
kubectl drain master --ignore-daemonsets
apt install kubeadm=<version>
kubeadm upgrade apply <version>
apt install kubelet=<version>
kubectl uncordon master
kubectl drain node01 --ignore-daemonsets
ssh node01
node01$ apt install kubeadm=<version>
node01$ apt install kubelet=<version>
node01$ apt-mark hold kubelet
master$ kubectl uncordon node01
kubectl get all --all-namespaces -o yaml > all-deploy-backup.yaml
etcdctl snapshot save snapshot.db
ETCDCTL_API=3 etcdctl \
--endpoints=https://[127.0.0.1]:2379 \ (--listen-client-urls)
--cacert=/etc/kubernetes/pki/etcd/ca.crt \ (--trusted-ca-file)
--cert=/etc/kubernetes/pki/etcd/server.crt \ (--cert-file)
--key=/etc/kubernetes/pki/etcd/server.key \ (--key-file)
snapshot save /tmp/snapshot-pre-boot.db
restore and update /etc/kubernetes/manifests/etcd.yaml
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--name=master \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
--data-dir /var/lib/etcd-from-backup \
--initial-cluster=master=https://127.0.0.1:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls=https://127.0.0.1:2380 \
snapshot restore /tmp/snapshot-pre-boot.db
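What the etcd.yaml update after the restore typically amounts to, assuming kubeadm's default etcd-data volume, is pointing the hostPath at the restored data dir:
volumes:
- hostPath:
    path: /var/lib/etcd-from-backup   # was /var/lib/etcd
    type: DirectoryOrCreate
  name: etcd-data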
/tmp/users/user-details.csv
# User File Contents (format: password,username,uid)
password123,user1,u0001
password123,user2,u0002
password123,user3,u0003
password123,user4,u0004
password123,user5,u0005
edit /etc/kubernetes/manifests/kube-apiserver.yaml with the --authorization-mode and --basic-auth-file flags
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    ...
    - --basic-auth-file=/tmp/users/user-details.csv
kind: Role and kind: RoleBinding yaml files
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# This role binding allows "user1" to read pods in the "default" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: user1 # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
openssl x509 -in /etc/kubernetes/pki/ca.crt -text -noout
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane" -out jane.csr
cat jane.csr | base64 | tr -d '\n'
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  groups:
  - system:authenticated
  usages:
  - digital signature
  - key encipherment
  - server auth
  request: GLKHLJKYHIHFVIFH...YUGHJGJKLHK
openssl x509 -req -in /etc/kubernetes/pki/apiserver-etcd-client.csr -CA /etc/kubernetes/pki/etcd/ca.crt -CAkey /etc/kubernetes/pki/etcd/ca.key -CAcreateserial -out /etc/kubernetes/pki/apiserver-etcd-client.crt
on master:
kubectl get csr
kubectl certificate approve jane
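Once approved, the signed certificate can be pulled out of the CSR object (jane.crt is just an output filename):
kubectl get csr jane -o jsonpath='{.status.certificate}' | base64 --decode > jane.crt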
apiVersion: v1
kind: Config
current-context: aws-user@kubernetes-on-aws
clusters:
- name: production
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://172.17.0.8:6443
- name: development
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://172.17.0.8:6443
- name: kubernetes-on-aws
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://172.17.0.8:6443
- name: test-cluster-1
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://172.17.0.8:6443
contexts:
- name: test-user@development
  context:
    cluster: development
    user: test-user
- name: aws-user@kubernetes-on-aws
  context:
    cluster: kubernetes-on-aws
    user: aws-user
- name: test-user@production
  context:
    cluster: production
    user: test-user
- name: research
  context:
    cluster: test-cluster-1
    user: dev-user
users:
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt
    client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key
- name: aws-user
  user:
    client-certificate: /etc/kubernetes/pki/users/aws-user/aws-user.crt
    client-key: /etc/kubernetes/pki/users/aws-user/aws-user.key
kubectl config view
kubectl config use-context prod-user@production
kubectl config --kubeconfig=/root/my-kube-config use-context research
$ curl https://192.168.64.6:8443 -k --key /Users/andy/.minikube/client.key --cert /Users/andy/.minikube/client.crt --cacert /Users/andy/.minikube/ca.crt
$ kubectl proxy
$ curl http://127.0.0.1:8001  # the proxy forwards the request with the kubeconfig credentials, so no cert flags are needed
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "create", "update", "delete"]
  # resourceNames: ["blue", "orange"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devuser-developer-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
kubectl api-resources --namespaced=true to check which resources are namespaced
create docker-registry secret
$ kubectl create secret docker-registry regcred \
--docker-server= \
--docker-username= \
--docker-password= \
--docker-email=
$ kubectl create secret docker-registry private-reg-cred \
--docker-server=myprivateregistry.com:5000 \
--docker-username=dock_user \
--docker-password=dock_password \
--docker-email=[email protected]
use it in the pod file by adding spec.imagePullSecrets with - name: regcred
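A minimal sketch of a pod using that secret (the pod name and image path are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: private-app            # hypothetical name
spec:
  containers:
  - name: app
    image: myprivateregistry.com:5000/app:latest   # placeholder image
  imagePullSecrets:
  - name: regcred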
capabilities are only supported at the container level (runAsUser can be set on the pod or the container)
spec.containers:
- name: ubuntu
  image: ubuntu
  command: ["sleep", "3600"]
  securityContext:
    runAsUser: 1000
    capabilities:
      add: ["MAC_ADMIN"]
podSelector.matchLabels to assign the network policy to the pods
e.g., api can connect with db: an ingress policy for db, an egress policy for api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: api-pod
    ports:
    - protocol: TCP
      port: 3306
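The egress side for the api pod could look like this sketch (labels mirror the db policy above; the policy name is made up):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy             # hypothetical name
spec:
  podSelector:
    matchLabels:
      name: api-pod
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: db
    ports:
    - protocol: TCP
      port: 3306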
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    hostPath:
      path: /data/html
      type: Directory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mv-vol1
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/data
  # awsElasticBlockStore:
  #   volumeID: <volume-id>
  #   fsType: ext4
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
nslookup
ip link
ip addr
ip addr add 192.168.1.10/24 dev eth0
ip route
ip route add 192.168.1.10/24 via 192.168.2.1
cat /proc/sys/net/ipv4/ip_forward  # 1 means IP forwarding is enabled
cat /etc/hosts
ping
arp
netstat -nplt
ip link add v-net-0 type bridge  # on each node
ip link set dev v-net-0 up       # on each node, to bring the bridge up
ip -n <namespace> addr add ...
ip -n <namespace> route add ...
ip addr add 10.244.1.1/24 dev v-net-0   # on node 1
ip addr add 10.244.2.1/24 dev v-net-0   # on node 2
ip addr add 10.244.3.1/24 dev v-net-0   # on node 3
ip -n <namespace> link set ...
ip route add 10.244.2.2 via 192.168.1.12
ip route add 10.244.3.1 via 192.168.1.13
/etc/cni/net.d/*.conf
iptables -L -t nat | grep db-service
curl http://web-service
curl http://web-service.apps.svc.cluster.local
curl http://10-244-2-5.apps.pod.cluster.local
deploy the nginx-ingress-controller as a pod first
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  backend:
    serviceName: app-service
    servicePort: 8080
version C: sub-domain (rule 2, 3)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
  rules:
  - host: wear.my-online-store.com
    http:
      paths:
      - backend:
          serviceName: wear-service
          servicePort: 8080
  - host: watch.my-online-store.com
    http:
      paths:
      - backend:
          serviceName: watch-service
          servicePort: 8088
default-http-backend:80 for the default/fallback service (404 page, etc.)
rewrite target
metadata:
  name: test-ingress
  namespace: critical-space
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
...
without rewrite: http://<ingress-service>:<ingress-port>/wear --> http://<wear-service>:<port>/wear
with rewrite:    http://<ingress-service>:<ingress-port>/wear --> http://<wear-service>:<port>/
(the /wear prefix is rewritten to /)
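A fuller sketch of such an Ingress, using the same extensions/v1beta1 style as above (the service name and port are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: critical-space
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /wear
        backend:
          serviceName: wear-service   # placeholder service
          servicePort: 8080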