1 Answer

Update (final answer)
Addendum
The OP asked me to update my answer to show configuration for a "fine-grained" or "specific" service account, instead of using cluster-admin..
As far as I can tell, every pod has read access to /healthz by default (on clusters with the default RBAC bootstrap roles, the system:public-info-viewer ClusterRole grants this to authenticated and unauthenticated users alike). For example, the following CronJob works fine without using a ServiceAccount at all:
# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok-no-svc
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          ######### serviceAccountName: health-reader-sa
          containers:
          - name: is-healthz-ok-no-svc
            image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
Original
I went ahead and wrote a proof of concept for this. You can find the full repo here, but the code is below.
main.go
package main

import (
	"errors"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	client, err := newInClusterClient()
	if err != nil {
		panic(err.Error())
	}
	path := "/healthz"
	content, err := client.Discovery().RESTClient().Get().AbsPath(path).DoRaw()
	if err != nil {
		fmt.Printf("ErrorBadRequest : %s\n", err.Error())
		os.Exit(1)
	}
	contentStr := string(content)
	if contentStr != "ok" {
		fmt.Printf("ErrorNotOk : response != 'ok' : %s\n", contentStr)
		os.Exit(1)
	}
	fmt.Printf("Success : ok!\n")
	os.Exit(0)
}

func newInClusterClient() (*kubernetes.Clientset, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, errors.New("failed loading in-cluster client config")
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, errors.New("failed creating clientset")
	}
	return clientset, nil
}
Dockerfile
FROM golang:latest
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]
deployment.yaml
(as a CronJob)
# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: is-healthz-ok
          containers:
          - name: is-healthz-ok
            image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: is-healthz-ok
  namespace: default
---
# cluster role binding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: is-healthz-ok
subjects:
- kind: ServiceAccount
  name: is-healthz-ok
  namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
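Per the comment in the roleRef above, cluster-admin can be swapped for a purpose-built role. A minimal sketch (the name healthz-reader is mine, not from the repo) that only grants GET on the health endpoints via nonResourceURLs:

```yaml
# minimal cluster role: read-only access to health endpoints
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: healthz-reader
rules:
- nonResourceURLs: ["/healthz", "/healthz/*", "/livez", "/readyz"]
  verbs: ["get"]
```

Reference it from the ClusterRoleBinding with name: healthz-reader instead of name: cluster-admin.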
Screenshot
Successful CronJob run
Update 1
The OP asked how to deploy the "in-cluster-client-config", so here is an example deployment (the one I am using)..
You can find the repo here
Example deployment (I am using a CronJob, but it could be anything):
cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: remove-terminating-namespaces-cronjob
spec:
  schedule: "0 */1 * * *" # at minute 0 of each hour aka once per hour
  #successfulJobsHistoryLimit: 0
  #failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: svc-remove-terminating-namespaces
          containers:
          - name: remove-terminating-namespaces
            image: oze4/service.remove-terminating-namespaces:latest
          restartPolicy: OnFailure
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: svc-remove-terminating-namespaces
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crb-namespace-reader-writer
subjects:
- kind: ServiceAccount
  name: svc-remove-terminating-namespaces
  namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
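As above, a scoped alternative to cluster-admin is possible here too. Since this job reads and deletes namespaces, a minimal sketch (the role name and the exact resource/verb list are my assumptions about what the job needs, not from the repo) might be:

```yaml
# minimal cluster role: read and delete namespaces
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-remover
rules:
- apiGroups: [""]
  resources: ["namespaces", "namespaces/finalize"]
  verbs: ["get", "list", "update", "delete"]
```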
Original answer
It sounds like what you are looking for is the "in-cluster-client-config" from client-go.
It is important to remember that when using the "in-cluster-client-config", API calls in your Go code use the service account of "that" pod. Just make sure you are testing with an account that has permission to read "/livez".
I tested the following code and was able to get the "livez" status..
package main

import (
	"errors"
	"flag"
	"fmt"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// I find it easiest to use "out-of-cluster" for testing
	// client, err := newOutOfClusterClient()
	client, err := newInClusterClient()
	if err != nil {
		panic(err.Error())
	}
	livez := "/livez"
	content, err := client.Discovery().RESTClient().Get().AbsPath(livez).DoRaw()
	if err != nil {
		panic(err.Error())
	}
	fmt.Println(string(content))
}

func newInClusterClient() (*kubernetes.Clientset, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, errors.New("failed loading in-cluster client config")
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, errors.New("failed creating clientset")
	}
	return clientset, nil
}

// I find it easiest to use "out-of-cluster" for testing
func newOutOfClusterClient() (*kubernetes.Clientset, error) {
	var kubeconfig *string
	if home := homedir.HomeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()
	// use the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		return nil, err
	}
	// create the clientset
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}
	return client, nil
}