kubernetes-role
RBAC: Role-Based Access Control.
A regular Role governs namespace-scoped resources.
A ClusterRole governs cluster-wide (global) resources and can be referenced from any namespace.
A Role is a collection of permissions (rules).
A Role belongs to a single namespace and controls access only within that namespace.
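Which resources a Role can cover versus which need a ClusterRole can be listed directly:

[root@master01 ~]# kubectl api-resources --namespaced=true    # namespace-scoped, coverable by a Role
[root@master01 ~]# kubectl api-resources --namespaced=false   # cluster-scoped, needs a ClusterRole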
Common built-in ClusterRoles:
1) cluster-admin: superuser, the highest level of access
2) admin: project administrator
3) edit: editor (read/write on most namespaced resources)
4) view: viewer (read-only)
A Role takes effect only when bound to one or more subjects (users, groups, or service accounts) via a RoleBinding.
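A minimal sketch of such a binding in YAML (all names here are illustrative, not from this cluster):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-binding        # illustrative name
  namespace: default
roleRef:                       # the Role whose rules are granted
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: example-role
subjects:                      # one or more users/groups/service accounts
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice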
Create a regular Role for the emporer user that grants permission to create and delete Deployment, DaemonSet, and Pod resources.
The emporer user was previously bound to the built-in admin ClusterRole:
[root@master01 ~]# kubectl get clusterrolebindings.rbac.authorization.k8s.io |grep emporer
emporer-admin-role ClusterRole/admin 21h
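That binding would have been created with something along these lines:

[root@master01 ~]# kubectl create clusterrolebinding emporer-admin-role --clusterrole=admin --user=emporer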
Verify: right now the user can still list PVC and StatefulSet resources:
[root@mysql-master01 ~]# kubectl get pvc,sts
NAME                                  STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-mysql-0    Bound    slave01-pv    5Gi        RWX            slow           46h
persistentvolumeclaim/data-mysql-1    Bound    slave02-pv    5Gi        RWX            slow           46h
persistentvolumeclaim/data-mysql-2    Bound    master01-pv   5Gi        RWX            slow           46h

NAME                     READY   AGE
statefulset.apps/mysql   3/3     46h
[root@mysql-master01 ~]#
Delete the ClusterRoleBinding:
[root@master01 ~]# kubectl delete clusterrolebindings.rbac.authorization.k8s.io emporer-admin-role
clusterrolebinding.rbac.authorization.k8s.io "emporer-admin-role" deleted
Test: the user can no longer list PVC or StatefulSet resources:
[root@mysql-master01 ~]# kubectl get pvc,sts
Error from server (Forbidden): persistentvolumeclaims is forbidden: User "emporer" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
Error from server (Forbidden): statefulsets.apps is forbidden: User "emporer" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
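Instead of provoking a Forbidden error, kubectl auth can-i checks a permission directly (here run from an admin context with impersonation; per the errors above, both checks should come back negative):

[root@master01 ~]# kubectl auth can-i list persistentvolumeclaims --as=emporer -n default
no
[root@master01 ~]# kubectl auth can-i list statefulsets.apps --as=emporer -n default
no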
Custom Role
Follow the existing pattern: use the built-in admin ClusterRole as a template.
[root@master01 ~]# kubectl describe clusterrole admin
Name: admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
configmaps [] [] [create delete deletecollection patch update get list watch]
events [] [] [create delete deletecollection patch update get list watch]
persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch]
pods [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers [] [] [create delete deletecollection patch update get list watch]
services [] [] [create delete deletecollection patch update get list watch]
daemonsets.apps [] [] [create delete deletecollection patch update get list watch]
deployments.apps/scale [] [] [create delete deletecollection patch update get list watch]
deployments.apps [] [] [create delete deletecollection patch update get list watch]
replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch]
replicasets.apps [] [] [create delete deletecollection patch update get list watch]
statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch]
statefulsets.apps [] [] [create delete deletecollection patch update get list watch]
horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch]
cronjobs.batch [] [] [create delete deletecollection patch update get list watch]
jobs.batch [] [] [create delete deletecollection patch update get list watch]
daemonsets.extensions [] [] [create delete deletecollection patch update get list watch]
deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch]
deployments.extensions [] [] [create delete deletecollection patch update get list watch]
ingresses.extensions [] [] [create delete deletecollection patch update get list watch]
networkpolicies.extensions [] [] [create delete deletecollection patch update get list watch]
replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch]
replicasets.extensions [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch]
ingresses.networking.k8s.io [] [] [create delete deletecollection patch update get list watch]
networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update get list watch]
poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch]
deployments.apps/rollback [] [] [create delete deletecollection patch update]
deployments.extensions/rollback [] [] [create delete deletecollection patch update]
localsubjectaccessreviews.authorization.k8s.io [] [] [create]
pods/attach [] [] [get list watch create delete deletecollection patch update]
pods/exec [] [] [get list watch create delete deletecollection patch update]
pods/portforward [] [] [get list watch create delete deletecollection patch update]
pods/proxy [] [] [get list watch create delete deletecollection patch update]
secrets [] [] [get list watch create delete deletecollection patch update]
replicasets.apps/status [] [] [get list watch]
statefulsets.apps/status [] [] [get list watch]
horizontalpodautoscalers.autoscaling/status [] [] [get list watch]
cronjobs.batch/status [] [] [get list watch]
jobs.batch/status [] [] [get list watch]
endpointslices.discovery.k8s.io [] [] [get list watch]
daemonsets.extensions/status [] [] [get list watch]
deployments.extensions/status [] [] [get list watch]
ingresses.extensions/status [] [] [get list watch]
(output truncated)
[root@master01 ~]# kubectl get clusterrole admin -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.authorization.k8s.io/aggregate-to-admin: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2023-02-09T07:27:20Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: admin
  resourceVersion: "373127"
  uid: ec705964-6d7b-402a-81b6-9c508a7845fb
rules:
- apiGroups:
  - ""
  resources:
  - pods/attach
  - pods/exec
  - pods/portforward
  - pods/proxy
  - secrets
  - services/proxy
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - impersonate
- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/exec
  - pods/portforward
  - pods/proxy
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  - events
  - persistentvolumeclaims
  - replicationcontrollers
  - replicationcontrollers/scale
  - secrets
  - serviceaccounts
  - services
  - services/proxy
  verbs:
(output truncated)
Copy the YAML above into a new file and adapt it:
[root@master01 emporer]# vi emporer-role.yaml
[root@master01 emporer]# cat emporer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    sts-dep-pod: role
  name: emporer-role
  namespace: project   # the namespace this Role applies to
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/exec
  - pods/portforward
  - pods/proxy
  verbs:
  - get
  - list
  - watch
  - create
  - delete
  - deletecollection
  - patch
  - update
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - deployments/rollback
  - deployments/scale
  - replicasets
  - replicasets/scale
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update
  - get
  - list
  - watch
[root@master01 emporer]# kubectl create -f emporer-role.yaml
role.rbac.authorization.k8s.io/emporer-role created
[root@master01 emporer]# kubectl get role -n project
NAME           CREATED AT
emporer-role   2023-03-22T07:10:22Z
Bind the Role to the emporer user; once bound, the user can work with Pods and Deployments in the project namespace:
[root@master01 emporer]# kubectl create rolebinding --namespace=project --role=emporer-role --user=emporer emporer-role
rolebinding.rbac.authorization.k8s.io/emporer-role created
[root@master01 emporer]# kubectl get rolebindings.rbac.authorization.k8s.io -n project
NAME           ROLE                AGE
emporer-role   Role/emporer-role   10s
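Before handing the account over, the binding can be spot-checked with impersonation; given the rules above, creating Deployments should be allowed, while anything the Role omits (StatefulSets, for instance) should not:

[root@master01 emporer]# kubectl auth can-i create deployments.apps --as=emporer -n project
yes
[root@master01 emporer]# kubectl auth can-i get statefulsets.apps --as=emporer -n project
no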
The example above used a user account. Next, I'll create a service account for working with Pods in the project namespace.
User accounts vs. service accounts
Kubernetes distinguishes between the concepts of user accounts and service accounts for several reasons:
User accounts are for humans, while service accounts are for application processes running in Pods; in Kubernetes these processes run in containers, which are part of Pods.
User accounts are global: a username must be unique across all namespaces of a cluster, and the same username refers to the same user no matter which namespace you look at. Service accounts are namespace-scoped: two different namespaces can each contain a ServiceAccount with the same name.
Typically, a cluster's user accounts are synced from a corporate database, where creating a new user requires special privileges and is tied to complex business processes. Service account creation is intentionally lightweight, so cluster users can create service accounts on demand for specific tasks. Separating ServiceAccount creation from the onboarding of new human users makes it easier for workloads to follow the principle of least privilege.
Auditing considerations may differ between humans and service accounts; keeping them separate makes the distinction easier to track.
Configuration bundles for complex systems may include definitions of the various service accounts used by the system's components. Because service accounts have few creation constraints and namespaced names, such configuration stays lightweight.
Source: kubernetes.io
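For context, a workload opts into a service account via the serviceAccountName field of its Pod spec; a minimal sketch (Pod name and image are placeholders), using the account created next:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod               # placeholder name
  namespace: project
spec:
  serviceAccountName: emporer  # the ServiceAccount created below
  containers:
  - name: app
    image: nginx               # placeholder image

The account's token is then mounted into the container under /var/run/secrets/kubernetes.io/serviceaccount/.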
[root@master01 emporer]# kubectl create serviceaccount -n project emporer
[root@master01 emporer]# kubectl get role -n project
NAME           CREATED AT
emporer-role   2023-03-22T07:10:22Z
[root@master01 emporer]# kubectl create rolebinding --namespace=project --role=emporer-role --serviceaccount=project:emporer emporer-rolebinding
rolebinding.rbac.authorization.k8s.io/emporer-rolebinding created
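The command above should be roughly equivalent to this manifest; the only difference from the user-account binding is the subject kind:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: emporer-rolebinding
  namespace: project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: emporer-role
subjects:
- kind: ServiceAccount        # no apiGroup for service accounts
  name: emporer
  namespace: project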
[root@master01 emporer]# kubectl get sa -n project emporer -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2023-03-22T08:48:07Z"
  name: emporer
  namespace: project
  resourceVersion: "674761"
  uid: 359a0ac3-26d2-41ad-8b24-ad0d2b57bb80
secrets:
- name: emporer-token-8m6pg
[root@master01 emporer]# kubectl get secrets -n project emporer-token-8m6pg
NAME                  TYPE                                  DATA   AGE
emporer-token-8m6pg   kubernetes.io/service-account-token   3      2m43s
[root@master01 emporer]# kubectl describe secret -n project emporer-token-8m6pg
Name:         emporer-token-8m6pg
Namespace:    project
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: emporer
              kubernetes.io/service-account.uid: 359a0ac3-26d2-41ad-8b24-ad0d2b57bb80

Type:  kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlpobjJoMU1fSXZ0UFo2TUpZVXNtMnZ0TUptMFo4Q09kMkRpMUxKa21SQncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJwcm9qZWN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImVtcG9yZXItdG9rZW4tOG02cGciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZW1wb3JlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM1OWEwYWMzLTI2ZDItNDFhZC04YjI0LWFkMGQyYjU3YmI4MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpwcm9qZWN0OmVtcG9yZXIifQ.YR1wBmjIhhST1xOmy_adITTTLyykNbMJObg3FiCm6Vt_8_tleTBBTdLBRkUvNUuiMnQxBOAOknE8tl-HDQjhSn7-11fRx1QuguoqfvITA0T34sUl0J-bivgm1mofkiTLVLTdF9VgINriakEjdFzv8ixl5omtgwvY_N8HGucI-LbHlTEMZUkGjcMz1pFzx-cZYXBKFeNIGuKQySOv_dHaXvrxNSYvHrJnUNo_19NUhOpSqgkB5U2-_3dc735oAJYOqsH4ZABrx-3Et5smJxOR6OyQncjxGY-N_iFfMur47Zy7lOxdYORGhMuqMrKb2zeLQPgQivBEb7NCERkptFujrA
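To actually authenticate with that token, present it as a bearer token; for example, against the API server directly (the server address is a placeholder for your environment, and the cluster CA is assumed saved locally as ca.crt):

TOKEN=$(kubectl get secret -n project emporer-token-8m6pg -o jsonpath='{.data.token}' | base64 -d)
curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" \
    https://<apiserver>:6443/api/v1/namespaces/project/pods

Or register it as a kubectl credential: kubectl config set-credentials emporer-sa --token=$TOKEN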
This token-and-ServiceAccount business takes a while to wrap your head around... my brain is fried.