k8s usage

  • Create a kubectl alias
alias k=kubectl
  • Tab completion for kubectl
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash | sed 's/kubectl/k/g')
  • Run an application on Kubernetes
kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1
#--image=luksa/kubia: the image the container runs from
#--port=8080: listen on port 8080
  • Create a service
#create a LoadBalancer service
kubectl expose pod kubia --type=LoadBalancer --name kubia-http
#inspect
kubectl get svc
kubectl scale rc nginx-test --replicas=3
  • Common configuration parameters
--log-backtrace-at traceLocation  emit a stack trace every time logging hits file:line; default 0
--log-dir string  directory to write log files to
--log-flush-frequency duration  interval between log flushes; default 5s
--logtostderr  when true, write logs to stderr instead of to files
--alsologtostderr  when true, write logs to stderr in addition to files
--stderrthreshold severity  copy logs at or above this severity to stderr; default 2
--v Level  log verbosity level
--vmodule moduleSpec  per-module verbosity levels
--version version[=true]  print the version and exit
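
As a hedged illustration (the component chosen and the flag values are assumptions, not from the original), these shared flags appear on a component's command line like this:

kube-controller-manager \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --log-flush-frequency=5s \
  --v=2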
  • kube-apiserver startup parameters
--admission-control strings  run admission control on every request sent to the apiserver; configured as a list of admission controllers:
AlwaysAdmit  admit all requests
AlwaysDeny  deny all requests
AlwaysPullImages  always pull the image before starting a container
DefaultStorageClass  enables dynamic provisioning of shared storage; assigns the default StorageClass to PVCs that specify neither a StorageClass nor a PV
DefaultTolerationSeconds  sets the default toleration time, 5min
DenyEscalatingExec  rejects all exec and attach requests to privileged Pods
DenyExecOnPrivileged  rejects all requests to execute commands in a privileged container
ImagePolicyWebhook  lets a backend webhook make the admission decision
LimitRanger  quota management
NamespaceLifecycle  rejects requests that create resource objects in a nonexistent namespace; deletes all objects when a namespace is deleted
PodPreset  injects settings the application needs when a pod starts
PodSecurityPolicy  enforces security policies on pods
ResourceQuota  quota management
……
--advertise-address ip  the IP address the apiserver advertises to all members of the cluster
--allow-privileged  when true, allows containers in pods to run with system privileges
--anonymous-auth  when true, the apiserver accepts anonymous requests; default true
--apiserver-count  number of apiservers running in the cluster; default 1
--authorization-mode  list of authorization modes, comma-separated
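
A hedged sketch of how a few of these flags might be combined on the kube-apiserver command line (the addresses and the exact controller list are assumptions):

kube-apiserver \
  --advertise-address=192.168.7.150 \
  --allow-privileged=true \
  --apiserver-count=1 \
  --authorization-mode=Node,RBAC \
  --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota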

Pod resource file reference

Property | Type | Required | Description
apiVersion | String | yes | v1
kind | String | yes | Pod
metadata | Object | yes | Metadata
metadata.name | String | yes | Name of the Pod
metadata.namespace | String | yes | Namespace the Pod belongs to
metadata.labels[] | List | no | Custom label list
metadata.annotations[] | List | no | Custom annotation list
spec | Object | yes | Detailed definition of the containers in the Pod
spec.containers[] | List | yes | List of containers in the Pod
spec.containers[].name | String | yes | Container name
spec.containers[].image | String | yes | Container image name
spec.containers[].imagePullPolicy | String | no | Image pull policy
spec.containers[].command[] | List | no | Container startup command list
spec.containers[].args[] | List | no | Startup command argument list
spec.containers[].workingDir | String | no | Container working directory
spec.containers[].volumeMounts[] | List | no | Volume mounts of the container
spec.containers[].volumeMounts[].name | String | no | Name of the shared volume
spec.containers[].volumeMounts[].mountPath | String | no | Absolute path the volume is mounted at inside the container
spec.containers[].volumeMounts[].readOnly | Boolean | no | Mount read-only; defaults to read-write
spec.containers[].ports[] | List | no | List of ports the container exposes
spec.containers[].ports[].name | String | no | Port name
spec.containers[].ports[].containerPort | Int | no | Port the container listens on
spec.containers[].ports[].hostPort | Int | no | Defaults to the same value as containerPort
spec.containers[].ports[].protocol | String | no | Port protocol, TCP or UDP; default TCP
spec.containers[].env[] | List | no | Environment variables the container needs
spec.containers[].env[].name | String | no | Environment variable name
spec.containers[].env[].value | String | no | Environment variable value
spec.containers[].resources | Object | no | Resource limits and requests
spec.containers[].resources.limits | Object | no | Resource limits
spec.containers[].resources.limits.cpu | String | no | CPU limit, in cores
spec.containers[].resources.limits.memory | String | no | Memory limit, in MiB/GiB
spec.containers[].resources.requests | Object | no | Resource requests
spec.containers[].resources.requests.cpu | String | no | CPU request, in cores
spec.containers[].resources.requests.memory | String | no | Memory request, in MiB/GiB
spec.containers[].livenessProbe | Object | no | Health check configuration
spec.containers[].livenessProbe.exec | Object | no | Probe via exec
spec.containers[].livenessProbe.exec.command[] | String | no | Command or script to run
spec.containers[].livenessProbe.httpGet | Object | no | Probe via httpGet (path, port)
spec.containers[].livenessProbe.tcpSocket | Object | no | Probe via tcpSocket
spec.containers[].livenessProbe.initialDelaySeconds | Number | no | Delay before the first probe after startup, in seconds
spec.containers[].livenessProbe.timeoutSeconds | Number | no | Probe timeout; default 1s
spec.containers[].livenessProbe.periodSeconds | Number | no | Probe interval; default 10s
spec.volumes[] | List | no | Shared volumes defined for the Pod
spec.volumes[].name | String | no | Name of the shared volume
spec.volumes[].emptyDir | Object | no | Temporary directory sharing the Pod's lifecycle
spec.volumes[].hostPath | Object | no | Directory on the Pod's host
spec.volumes[].hostPath.path | String | no | Path on the host
spec.volumes[].secret | Object | no | Mounts a predefined secret object into the container
spec.volumes[].configMap | Object | no | Mounts a predefined configMap object into the container
spec.restartPolicy | String | no | Restart policy
spec.nodeSelector | Object | no | Schedule the Pod onto Nodes with matching labels, given as key:value
spec.imagePullSecrets | Object | no | Secret used when pulling images
spec.hostNetwork | Boolean | no | Whether to use host networking
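
A minimal Pod manifest tying several of these fields together (all names and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # metadata.name
  namespace: default          # metadata.namespace
  labels:
    app: demo                 # metadata.labels[]
spec:
  containers:
  - name: demo                # spec.containers[].name
    image: busybox            # spec.containers[].image
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: "100m"           # spec.containers[].resources.requests.cpu
        memory: "32Mi"
  restartPolicy: Always       # spec.restartPolicy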

Service resource file reference

Property | Type | Required | Description
apiVersion | String | yes | v1
kind | String | yes | Service
metadata | Object | yes | Metadata
metadata.name | String | yes | Name of the Service
metadata.namespace | String | yes | Namespace the Service belongs to
metadata.labels[] | List | no | Custom label list
metadata.annotations[] | List | no | Custom annotation list
spec | Object | yes | Detailed definition of the Service
spec.selector[] | List | yes | Selects Pods carrying the specified labels
spec.type | String | yes | Service type; default ClusterIP
spec.clusterIP | String | no | Virtual service IP address
spec.sessionAffinity | String | no | Session affinity; empty by default, optionally ClientIP to pin a client to the same backend Pod
spec.ports[] | List | no | List of ports the service exposes
spec.ports[].name | String | no | Port name
spec.ports[].protocol | String | no | Port protocol, TCP or UDP; default TCP
spec.ports[].port | int | no | Port the service listens on
spec.ports[].targetPort | int | no | Port on the backend Pods that traffic is forwarded to
spec.ports[].nodePort | int | no | Host port to map when type=NodePort
status | Object | no | External load balancer settings when type=LoadBalancer
status.loadBalancer | Object | no | External load balancer
status.loadBalancer.ingress | Object | no | External load balancer ingress
status.loadBalancer.ingress.ip | String | no | IP address of the external load balancer
status.loadBalancer.ingress.hostname | String | no | Hostname of the external load balancer
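
A minimal Service manifest exercising these fields (all names and port values are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: demo-svc              # metadata.name
spec:
  type: NodePort              # spec.type
  selector:                   # spec.selector
    app: demo
  ports:
  - name: http                # spec.ports[].name
    protocol: TCP
    port: 80                  # port the service listens on
    targetPort: 8080          # backend Pod port
    nodePort: 30080           # host port, only valid with type=NodePort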

Exec into a container

kubectl exec -it podname -c containername -n namespace -- shell command
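
For instance (the pod name here is hypothetical):

kubectl exec -it nginx-dep-5bc5485bdb-tk7wm -c nginx -n default -- /bin/sh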

VOLUME

apiVersion: apps/v1  # note the API version
kind: Deployment
metadata:
  name: nginx-dep
spec:
  selector:  # label selector
    matchLabels:
      app: nginx
  replicas: 1  # number of replicas to manage
  template:  # pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:  # mount the shared volume
        - mountPath: "/var/log/nginx"
          name: nginx-vol
      - name: busybox
        image: busybox
        command: ["sh", "-c", "tail -f /logs/access.log"]
        volumeMounts:
        - mountPath: /logs
          name: nginx-vol
      volumes:  # define the shared volume
      - name: nginx-vol
        emptyDir: {}

CONFIGMAP

kubectl create configmap user-config --from-file=./
kubectl create configmap log-config --from-file=./2.txt
#inspect
kubectl get cm/user-config -o yaml

apiVersion: v1  # note the API version
kind: ConfigMap
metadata:
  name: test-configmap
data:
  apploglevel: info
  appdatadir: /var/data
apiVersion: apps/v1  # note the API version
kind: Deployment
metadata:
  name: nginx-dep-configmap
spec:
  selector:  # label selector
    matchLabels:
      app: nginx
  replicas: 1  # number of replicas to manage
  template:  # pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "env | grep APP"]
        env:
        - name: APPLOGLEVEL
          valueFrom:
            configMapKeyRef:
              name: test-configmap
              key: apploglevel
        - name: APPDATADIR
          valueFrom:
            configMapKeyRef:
              name: test-configmap
              key: appdatadir
      restartPolicy: Never  # note: a Deployment's pod template only accepts restartPolicy: Always; use a bare Pod or Job for Never
#envFrom
apiVersion: apps/v1  # note the API version
kind: Deployment
metadata:
  name: nginx-dep-configmap
spec:
  selector:  # label selector
    matchLabels:
      app: nginx
  replicas: 1  # number of replicas to manage
  template:  # pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "env"]
        envFrom:
        - configMapRef:
            name: test-configmap
#configmap hot reload
apiVersion: v1
kind: ConfigMap
metadata:
  name: reload-config
data:
  logLevel: INFO
---
apiVersion: extensions/v1beta1
#apiVersion: apps/v1 requires a selector
kind: Deployment
metadata:
  name: nginx-ig
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: reload-volume
          mountPath: /etc/config
      volumes:
      - name: reload-volume
        configMap:
          name: reload-config

kubectl exec nginx-ig-b898c76f5-2w8ws -it -- cat /etc/config/logLevel
kubectl edit configmaps reload-config
kubectl exec nginx-ig-b898c76f5-2w8ws -it -- cat /etc/config/logLevel
#Note: environment variables populated from a configmap are not updated live; data mounted from a configmap as a volume takes a while (about 10s) to sync
#inject pod information into environment variables
apiVersion: apps/v1  # note the API version
kind: Deployment
metadata:
  name: nginx-dep-configmap
spec:
  selector:  # label selector
    matchLabels:
      app: nginx
  replicas: 1  # number of replicas to manage
  template:  # pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "env"]
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
#inject resource limit information into environment variables
apiVersion: apps/v1  # note the API version
kind: Deployment
metadata:
  name: nginx-dep-configmap
spec:
  selector:  # label selector
    matchLabels:
      app: nginx
  replicas: 1  # number of replicas to manage
  template:  # pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "env"]
        resources:
          requests:
            memory: "32Mi"
            cpu: "125m"
          limits:
            memory: "64Mi"
            cpu: "250m"
        env:
        - name: MY_CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: busybox
              resource: requests.cpu
        - name: MY_MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: busybox
              resource: limits.memory

Pod health checks

#livenessProbe checks whether the container is alive (running)
#1. ExecAction: run a command inside the container; an exit code of 0 means the container is healthy
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      timeoutSeconds: 1
#2. TCPSocketAction
apiVersion: v1
kind: Pod
metadata:
  name: liveness-socket
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 81
      # delay before the first probe
      initialDelaySeconds: 5
      # probe interval in seconds
      periodSeconds: 2
      # failed probes tolerated before restarting
      failureThreshold: 3
#3. HTTPGetAction
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /index1.html
        port: 80
      # delay before the first probe
      initialDelaySeconds: 20
      timeoutSeconds: 1
      # failed probes tolerated before restarting
      failureThreshold: 3

#readinessProbe checks whether the container has finished starting (ready); a sketch follows below
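
Since readinessProbe is only mentioned above, here is a hedged sketch (the probe path and timings are assumptions); the syntax mirrors livenessProbe, but a failing readiness probe removes the Pod from Service endpoints instead of restarting it:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-http
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10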

Scheduler

#fairness
#high resource utilization
#efficiency
#flexibility

#custom scheduler
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: bb-test
spec:
  schedulerName: my-scheduler
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
#node affinity
#requiredDuringSchedulingIgnoredDuringExecution: hard requirement
#preferredDuringSchedulingIgnoredDuringExecution: soft preference

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # hard requirement
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - 192.168.7.152
  containers:
  - name: with-node-affinity
    image: nginx
    imagePullPolicy: "IfNotPresent"
---
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity2
spec:
  affinity:
    nodeAffinity:
      # soft preference
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - 192.168.7.153
  containers:
  - name: with-node-affinity2
    image: nginx
    imagePullPolicy: "IfNotPresent"
#label selector operators
In: the label's value is in the given list
NotIn: the label's value is not in the given list
Gt: the label's value is greater than the given value
Lt: the label's value is less than the given value
Exists: the label exists
DoesNotExist: the label does not exist
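
A hedged matchExpressions fragment using two of the other operators (the label keys are hypothetical); it would sit under a Pod's spec:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # hypothetical label; only has to exist
            operator: Exists
          - key: cpu-count       # hypothetical numeric label
            operator: Gt
            values:
            - "4"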

Pod scheduling

#requiredDuringSchedulingIgnoredDuringExecution: hard requirement
#preferredDuringSchedulingIgnoredDuringExecution: soft preference

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - pod-1
        topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: nginx
    imagePullPolicy: "IfNotPresent"
---
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity2
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - pod-1
        topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: nginx
    imagePullPolicy: "IfNotPresent"

#1. Deployment: fully automatic scheduling
#2. Targeted scheduling with NodeSelector
#label a Node
kubectl label nodes node-01 key=val
#show labels
kubectl get nodes --show-labels
#delete a label
kubectl label nodes node-01 key-
#change a label
kubectl label nodes node-01 key=val2 --overwrite
#example
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
  nodeSelector:
    zone: north

#NodeAffinity: node affinity scheduling
#PodAffinity
#affinity/anti-affinity scheduling policies
Policy | Matches labels on | Operators | Topology-aware | Scheduling target
nodeAffinity | Node | In, NotIn, Exists, DoesNotExist, Gt, Lt | no | a specific host
podAffinity | Pod | In, NotIn, Exists, DoesNotExist | yes | same topology domain as the specified Pods
podAntiAffinity | Pod | In, NotIn, Exists, DoesNotExist | yes | a different topology domain from the specified Pods

Taints and Tolerations

#Taint
key=value:effect
Each taint has a key and a value as its label (the value may be empty); effect describes what the taint does.
Supported effects:
NoSchedule: do not schedule pods onto a Node carrying this taint
PreferNoSchedule: try to avoid scheduling pods onto a Node carrying this taint
NoExecute: do not schedule pods onto the Node, and evict Pods already running on it

Set a taint
kubectl taint nodes 192.168.7.152 check=ropon:NoExecute
Inspect
kubectl describe node 192.168.7.152 | grep Taints
Remove
kubectl taint nodes 192.168.7.152 check:NoExecute-

#Toleration
pod.spec.tolerations

tolerations:
- key: "check"
  operator: "Equal"
  value: "ropon"
  effect: "NoExecute"
  # how long to stay on the tainted node before eviction
  tolerationSeconds: 60

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity2
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - pod-1
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "check"
    operator: "Equal"
    value: "ropon"
    effect: "NoExecute"
    tolerationSeconds: 60
  containers:
  - name: with-pod-affinity
    image: nginx
    imagePullPolicy: "IfNotPresent"

key, value, and effect must match the taint set on the Node
with operator Exists, the value is ignored
tolerationSeconds is how long the Pod keeps running on the Node after being marked for eviction

Omitting key tolerates every taint key:
tolerations:
- operator: "Exists"

Omitting effect tolerates every taint effect:
tolerations:
- key: "key"
  operator: "Exists"

DaemonSet: schedule one Pod on every Node

#daemon processes
#log collection
#monitoring agents

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo
spec:
  selector:
    matchLabels:
      app: testbb
  template:
    metadata:
      labels:
        app: testbb
    spec:
      containers:
      - image: busybox
        command:
        - sleep
        - "3600"
        name: busybox

Job: batch scheduling

#spec.template has the same schema as a Pod
#restartPolicy supports only Never or OnFailure
#with a single Pod, the Job completes once the Pod finishes successfully (the default)
#spec.completions: number of Pods that must finish successfully for the Job to complete; default 1
#spec.parallelism: number of Pods to run in parallel; default 1
#spec.activeDeadlineSeconds: maximum time the Job may stay active before it is terminated
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  template:
    metadata:
      name: job-demo
    spec:
      containers:
      - image: busybox
        command:
        - sleep
        - "40"
        name: busybox
      restartPolicy: Never
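
A hedged variant exercising completions and parallelism (the name and values are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: job-parallel-demo
spec:
  completions: 6        # six Pods must finish successfully
  parallelism: 2        # run two Pods at a time
  template:
    spec:
      containers:
      - image: busybox
        command: ["sh", "-c", "sleep 5"]
        name: busybox
      restartPolicy: Never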

CronJob: time-based Jobs

#spec.schedule: required; the schedule the task runs on
#spec.jobTemplate: required; the Job template to run
#spec.startingDeadlineSeconds: deadline for starting the Job
#spec.concurrencyPolicy: concurrency policy; default Allow, Forbid disallows concurrent runs, Replace cancels the current run and replaces it with a new one
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-demo
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: busybox
            command:
            - sh
            - -c
            - date;echo hello world
            name: busybox
          restartPolicy: OnFailure

Service

#ClusterIP (the default type)
#NodePort
#LoadBalancer
#ExternalName

apiVersion: apps/v1  # note the API version
kind: Deployment
metadata:
  name: myapp-dep
spec:
  selector:  # label selector
    matchLabels:
      app: myapp
  replicas: 3  # number of replicas to manage
  template:  # pod template
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1  # note the API version
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80

Headless Service

apiVersion: v1  # note the API version
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: "None"
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80

#test
dig @172.20.0.23 myapp-headless.default.svc.cluster.local

NodePort

apiVersion: v1  # note the API version
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80

LoadBalancer

In practice it works the same way as NodePort; a sketch follows below.
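
A hedged sketch of a LoadBalancer Service (the name is illustrative); without a cloud provider to provision the balancer, access falls back to the NodePort behavior:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80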

ExternalName

apiVersion: v1  # note the API version
kind: Service
metadata:
  name: myapp-ex1
spec:
  type: ExternalName
  externalName: test.ropon.top

#test
dig @172.20.0.23 myapp-ex1.default.svc.cluster.local

Ingress

kubectl apply -f /etc/ansible/manifests/ingress/nginx-ingress/nginx-ingress.yaml
kubectl apply -f /etc/ansible/manifests/ingress/nginx-ingress/nginx-ingress-svc.yaml
#ingress http
apiVersion: extensions/v1beta1
#apiVersion: apps/v1 requires a selector
kind: Deployment
metadata:
  name: nginx-ig
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  rules:
  - host: test1.ropon.top
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80
#https ingress
kubectl create secret tls ropon-tls --cert ropon.top.crt --key ropon.top.key

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-https
spec:
  tls:
  - hosts:
    - test2.ropon.top
    secretName: ropon-tls
  rules:
  - host: test2.ropon.top
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80
#basicauth
htpasswd -c auth ropon
kubectl create secret generic basic-auth --from-file=auth

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-auth
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - ropon'
spec:
  rules:
  - host: test3.ropon.top
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80
#rewrite
#nginx.ingress.kubernetes.io/rewrite-target: target URL to rewrite to
#nginx.ingress.kubernetes.io/ssl-redirect
#nginx.ingress.kubernetes.io/force-ssl-redirect: force a redirect to https
#nginx.ingress.kubernetes.io/app-root
#nginx.ingress.kubernetes.io/use-regex: enable regex path matching

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: https://test2.ropon.top:23457
spec:
  rules:
  - host: test4.ropon.top
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80

Secret

#Service Account
#used to access the Kubernetes API; created by Kubernetes automatically and mounted into Pods at /run/secrets/kubernetes.io/serviceaccount
#Opaque: a base64-encoded secret, used to store passwords and keys
#kubernetes.io/dockerconfigjson: stores credentials for a private docker registry
#Service Account
kubectl exec nginx-ig-b898c76f5-2w8ws -- ls /run/secrets/kubernetes.io/serviceaccount
#Opaque
echo "ropon" | base64
echo "123456" | base64

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: cm9wb24K
  password: MTIzNDU2Cg==
---
apiVersion: extensions/v1beta1
#apiVersion: apps/v1 requires a selector
kind: Deployment
metadata:
  name: nginx-secret-test
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secrets
          mountPath: "/test"
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: mysecret

#test
kubectl exec nginx-secret-test-5d9f5c4bc-l6jjk -- cat /test/password
#import a secret into environment variables
apiVersion: extensions/v1beta1
#apiVersion: apps/v1 requires a selector
kind: Deployment
metadata:
  name: nginx-secret-test1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: TEST_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username

#test
kubectl exec nginx-secret-test1-5d89cd9486-zzjw4 -- env
#create a docker-registry secret
kubectl create secret docker-registry myregistrykey --docker-server= --docker-username= --docker-password= --docker-email=

apiVersion: extensions/v1beta1
#apiVersion: apps/v1 requires a selector
kind: Deployment
metadata:
  name: nginx-secret-test2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: myregistrykey

Volume

#emptyDir
#scratch space / shared data

apiVersion: extensions/v1beta1
#apiVersion: apps/v1 requires a selector
kind: Deployment
metadata:
  name: nginx-vol-test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: cache-vol
          mountPath: "/cache"
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command:
        - sleep
        - "3600"
        volumeMounts:
        - name: cache-vol
          mountPath: "/test"
      volumes:
      - name: cache-vol
        emptyDir: {}

#test
kubectl exec pod/nginx-vol-test-5bc5485bdb-tk7wm -c nginx -it -- /bin/sh
kubectl exec pod/nginx-vol-test-5bc5485bdb-tk7wm -c busybox -it -- /bin/sh
#hostPath
#mounts a file or directory from the node into the cluster
#"" (default): no checks performed
#DirectoryOrCreate: create an empty directory if the path does not exist; mode 755, same group and ownership as the kubelet
#Directory: a directory must already exist at the path
#FileOrCreate: create an empty file if the path does not exist; mode 644, same group and ownership as the kubelet
#File: a file must already exist at the path
#Socket: a socket must exist at the path
#CharDevice: a character device must exist at the path
#BlockDevice: a block device must exist at the path
mkdir /www
echo "hello" > /www/index.html
date >> /www/index.html

apiVersion: extensions/v1beta1
#apiVersion: apps/v1 requires a selector
kind: Deployment
metadata:
  name: nginx-vol-test1
spec:
  replicas: 3
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-vol
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nginx-vol
        hostPath:
          path: /www
          type: Directory

PV PVC

A PV is a cluster resource, independent of any Pod's lifecycle.
A PVC is a user's request for storage; much like a Pod consumes node resources (CPU and memory), a PVC consumes PV resources.
PV access modes:
ReadWriteOnce (RWO): the volume can be mounted read-write by a single node
ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes
ReadWriteMany (RWX): the volume can be mounted read-write by many nodes

Reclaim policies:
Retain: keep the volume for manual reclamation
Recycle: scrub and make available again
Delete: delete the volume

States:
Available: free, not yet bound
Bound: bound to a claim
Released: the claim was deleted
Failed: reclamation failed
#deploy a PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /home/k8sdata
    server: 172.16.7.151
---
#create the service the workload will use
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
#deploy the statefulset
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: "/usr/share/nginx/html"
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]  # must match the access mode of an available PV
      storageClassName: nfs
      resources:
        requests:
          storage: 2Gi

Adding a Node

#install ansible
yum install -y ansible
#install pip
yum install -y python-pip
#install netaddr
pip install netaddr -i https://mirrors.aliyun.com/pypi/simple/
pip install configparser -i https://mirrors.aliyun.com/pypi/simple/
pip install --upgrade pip -i https://mirrors.aliyun.com/pypi/simple/
pip install zipp -i https://mirrors.aliyun.com/pypi/simple/

Dynamic PV

#GitHub repo:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs/deploy/kubernetes
#create the RBAC grants
cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
#create the StorageClass
cat storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
#create the nfs provisioner deployment
cat deployment-nfs.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      serviceAccount: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: lizhenliang/nfs-client-provisioner:v2.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 172.16.7.151
        - name: NFS_PATH
          value: /home/k8sdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.16.7.151
          path: /home/k8sdata
#use a statefulset to create an nginx service with dynamically provisioned PVs
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
#deploy the statefulset
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: "/usr/share/nginx/html"
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "managed-nfs-storage"  # must match the StorageClass created above
      resources:
        requests:
          storage: 2Gi

StatefulSet

#Pod name: $(statefulset name)-$(ordinal)
#StatefulSet creates a DNS domain name for each Pod replica
#format: $(podname).$(headless service name).$(namespace).svc.cluster.local (Pods communicate via these names, not Pod IPs)
#StatefulSet uses a Headless service to control the Pods' domain
#format: $(servicename).$(namespace).svc.cluster.local
#a PVC is created for each Pod from volumeClaimTemplates
#deleting a Pod does not delete its PVC; deleting the PVC manually releases the PV automatically

#StatefulSet ordering
#ordered deployment: with multiple replicas, Pods are created in order (0 to N-1); before the next Pod runs, all earlier Pods must be Running and Ready
#ordered deletion: Pods are deleted in reverse order (N-1 to 0)
#ordered scaling: before scaling out, all earlier Pods must be Running and Ready

#use cases
persistent storage: a rescheduled Pod still sees the same data, via PVC
stable network identity: PodName and HostName survive rescheduling
ordered deployment and scaling, based on init containers
ordered scale-down
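
A quick way to observe the per-Pod DNS names (the web/nginx names follow the StatefulSet above; the throwaway busybox pod is hypothetical):

kubectl run dns-test -it --rm --image=busybox -- nslookup web-0.nginx.default.svc.cluster.local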

Cluster security

Authentication
RBAC
Role, ClusterRole, RoleBinding, ClusterRoleBinding
k8s does not ship its own user management
The ApiServer treats the client certificate's CN field as the User and the names.O field as the Group

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]

ClusterRole grants permissions just like Role, but at cluster scope:
cluster-scoped resources
non-resource endpoints
resources across all namespaces

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get","watch","list"]

RoleBinding and ClusterRoleBinding
A RoleBinding grants the permissions defined in a role to a list of subjects.
Subjects can be of several kinds (users, groups, service accounts).
A RoleBinding references the Role being bound and grants within a single namespace;
a ClusterRoleBinding grants across the whole cluster.

#grant the pod-reader Role in the default namespace to user ropon
#afterwards ropon has pod-reader permissions within the default namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-reader
  namespace: default
subjects:
- kind: User
  name: ropon
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

A RoleBinding can also reference a ClusterRole to grant users, groups, or ServiceAccounts permissions within the current namespace.
This lets cluster administrators define common ClusterRoles once and reference them from RoleBindings in different namespaces.

#this RoleBinding references a ClusterRole that can access secrets cluster-wide,
#but the bound user ropon can only access secrets in the development namespace (the RoleBinding lives there)
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-secrets
  namespace: development
subjects:
- kind: User
  name: ropon
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: read-secrets
  apiGroup: rbac.authorization.k8s.io

#a ClusterRoleBinding grants access to resources in every namespace of the cluster
#this one grants all users in the manager group access to secrets in all namespaces
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: read-secrets
  apiGroup: rbac.authorization.k8s.io

Resources

Resources in a Kubernetes cluster are generally identified by name strings, which appear in API URLs.
Some resources also have subresources; for example, logs is a subresource of pods:
GET /api/v1/namespaces/{namespace}/pods/{name}/log

#a Role granting access to the logs subresource of pods
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-and-pod-logs-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list"]

RoleBinding and ClusterRoleBinding bind a Role to subjects.
Subjects can be groups, users, or service accounts.
Users are represented as strings: a plain name, an email address,
or a numeric ID in string form; the prefix system is reserved and must not be used.
Groups have the same format as Users, a single string, and likewise must not start with the system prefix.

Hands-on: create a user who can only manage the dev namespace

useradd devuser
passwd devuser
kubectl create namespace dev
cat dev-csr.json
{
  "CN": "devuser",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "XS",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -profile=kubernetes ./dev-csr.json | cfssljson -bare devuser
#set cluster parameters
export KUBE_APISERVER="https://192.168.7.150:6443"
kubectl config set-cluster cluster1 \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=devuser.kubeconfig
#set client credentials (use the devuser cert generated above)
kubectl config set-credentials devuser \
  --client-certificate=./devuser.pem \
  --client-key=./devuser-key.pem \
  --embed-certs=true \
  --kubeconfig=devuser.kubeconfig
#set the context
kubectl config set-context cluster1 \
  --cluster=cluster1 \
  --user=devuser \
  --namespace=dev \
  --kubeconfig=devuser.kubeconfig
#bind the role
kubectl create rolebinding devuser-admin-binding --clusterrole=admin --user=devuser --namespace=dev
cp devuser.kubeconfig /home/devuser/.kube/config
#switch to the devuser user and switch context
cd /home/devuser/.kube
kubectl config use-context cluster1 --kubeconfig=config

Helm

cat helm-rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

kubectl create -f helm-rbac-config.yaml
helm init --service-account tiller --history-max 200 --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts --upgrade
kubectl get pod -n kube-system -l name=tiller
#point helm's stable repo at the Aliyun mirror
helm repo remove stable
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update

cat Chart.yaml
name: hello-world
version: 1.0.0

cat templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: nginx
        ports:
        - containerPort: 80
          protocol: TCP

cat templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-world

#install
helm install .
#list deployed Releases
helm ls
#query the status of a specific Release
helm status XXXXX
#delete all Kubernetes resources tied to a Release
helm delete XXXXX
helm rollback

#Debug: render the k8s manifests from the templates and preview the result
#--dry-run --debug prints the generated manifests without deploying
helm install . --dry-run --debug --set image.tag=latest
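
The --set image.tag=latest flag above assumes the chart reads values; a hedged sketch of a matching values.yaml and the templated image line (both hypothetical, since the deployment template above hardcodes image: nginx):

cat values.yaml
image:
  repository: nginx
  tag: stable

#templates/deployment.yaml would then reference it as:
#  image: {{ .Values.image.repository }}:{{ .Values.image.tag }}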

Prometheus