kubebuilder Practice Notes (4) - Writing Simple Business Logic
Today we use kubebuilder to write some simple business logic in a controller.
Requirements:
1) Implement state transitions for the custom object's status (the Status.Phase field) on ats/at-sample: PENDING -> RUNNING -> DONE (a small sketch follows this list).
2) When the current time reaches the specified schedule, the controller notices and creates a Pod.
3) The Pod starts a BusyBox container that runs the echo command from the Command field and prints YAY.
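As a preview of requirement 1, here is a minimal standalone sketch (my own simplification, not code from the book) of the phase constants and the transition rules that the full controller in Step 6 implements:

package main

import "fmt"

// Phases of an At object; the controller moves it PENDING -> RUNNING -> DONE.
const (
	PhasePending = "PENDING"
	PhaseRunning = "RUNNING"
	PhaseDone    = "DONE"
)

// nextPhase sketches the transition rules: PENDING becomes RUNNING once the
// scheduled time has been reached, RUNNING becomes DONE once the pod finished.
func nextPhase(current string, timeReached, podFinished bool) string {
	switch current {
	case PhasePending:
		if timeReached {
			return PhaseRunning
		}
	case PhaseRunning:
		if podFinished {
			return PhaseDone
		}
	}
	return current
}

func main() {
	fmt.Println(nextPhase(PhasePending, true, false)) // RUNNING
	fmt.Println(nextPhase(PhaseRunning, true, true))  // DONE
}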
Disclaimers:
1) Part of the source code in this post comes from the book "Programming Kubernetes"; the copyright belongs to the book's authors.
2) The source code for this post has been pushed to Gitee:
https://gitee.com/wqhn2020/ncat.git
3) For the environment used in this post, see my earlier posts:
kubebuilder Practice Notes (1) - Installing kubebuilder on CentOS 7
kubebuilder Practice Notes (2) - Getting Started
kubebuilder Practice Notes (3) - Modifying Fields in the CRD
4) Regarding requirement 1 above, implementing the object state transitions PENDING -> RUNNING -> DONE: to be honest, I still cannot say I fully understand this requirement, so I cannot completely verify its effect.
Steps:
Step 1: Create the project.
[root@workstation ~]# cd "kubebuilder/projects/cnat"
[root@workstation cnat]# kubebuilder init \
> --domain programming-kubernetes.info \
> --repo my.domain/cnat
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.11.0
Update dependencies:
$ go mod tidy
Next: define a resource with:
$ kubebuilder create api
[root@workstation cnat]#
The generated files are as follows:
[root@workstation cnat]# tree
.
├── config
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   └── rbac
│       ├── auth_proxy_client_clusterrole.yaml
│       ├── auth_proxy_role_binding.yaml
│       ├── auth_proxy_role.yaml
│       ├── auth_proxy_service.yaml
│       ├── kustomization.yaml
│       ├── leader_election_role_binding.yaml
│       ├── leader_election_role.yaml
│       ├── role_binding.yaml
│       └── service_account.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
└── PROJECT

6 directories, 24 files
[root@workstation cnat]#
Step 2: Create the API.
[root@workstation cnat]# kubebuilder create api --group cnat --version v1alpha1 --kind At
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1alpha1/at_types.go
controllers/at_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.8.0
go get: installing executables with 'go get' in module mode is deprecated.
	To adjust and download dependencies of the current module, use 'go get -d'.
	To install using requirements of the current module, use 'go install'.
	To install ignoring the current module, use 'go install' with a version,
	like 'go install example.com/cmd@latest'.
	For more information, see https://golang.org/doc/go-get-install-deprecation
	or run 'go help get' or 'go help install'.
go get: added github.com/fatih/color v1.12.0
go get: added github.com/go-logr/logr v1.2.0
go get: added github.com/gobuffalo/flect v0.2.3
go get: added github.com/gogo/protobuf v1.3.2
go get: added github.com/google/go-cmp v0.5.6
go get: added github.com/google/gofuzz v1.1.0
go get: added github.com/inconshreveable/mousetrap v1.0.0
go get: added github.com/json-iterator/go v1.1.12
go get: added github.com/mattn/go-colorable v0.1.8
go get: added github.com/mattn/go-isatty v0.0.12
go get: added github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go get: added github.com/modern-go/reflect2 v1.0.2
go get: added github.com/spf13/cobra v1.2.1
go get: added github.com/spf13/pflag v1.0.5
go get: added golang.org/x/mod v0.4.2
go get: added golang.org/x/net v0.0.0-20210825183410-e898025ed96a
go get: added golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e
go get: added golang.org/x/text v0.3.7
go get: added golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff
go get: added golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
go get: added gopkg.in/inf.v0 v0.9.1
go get: added gopkg.in/yaml.v2 v2.4.0
go get: added gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
go get: added k8s.io/api v0.23.0
go get: added k8s.io/apiextensions-apiserver v0.23.0
go get: added k8s.io/apimachinery v0.23.0
go get: added k8s.io/klog/v2 v2.30.0
go get: added k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b
go get: added sigs.k8s.io/controller-tools v0.8.0
go get: added sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6
go get: added sigs.k8s.io/structured-merge-diff/v4 v4.1.2
go get: added sigs.k8s.io/yaml v1.3.0
/root/kubebuilder/projects/cnat/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests
[root@workstation cnat]#
Check the generated files again:
[root@workstation cnat]# tree
.
├── api
│   └── v1alpha1
│       ├── at_types.go
│       ├── groupversion_info.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── controller-gen
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_ats.yaml
│   │       └── webhook_in_ats.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── at_editor_role.yaml
│   │   ├── at_viewer_role.yaml
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   └── service_account.yaml
│   └── samples
│       └── cnat_v1alpha1_at.yaml
├── controllers
│   ├── at_controller.go
│   └── suite_test.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
└── PROJECT

13 directories, 37 files
[root@workstation cnat]#
Step 3: Modify api/v1alpha1/at_types.go as follows:
[root@workstation cnat]# cat api/v1alpha1/at_types.go
/*
Copyright 2019 We, the Kube people.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
	PhasePending = "PENDING"
	PhaseRunning = "RUNNING"
	PhaseDone    = "DONE"
)

// AtSpec defines the desired state of At
type AtSpec struct {
	// Schedule is the desired time the command is supposed to be executed.
	// Note: the format used here is UTC time https://www.utctime.net
	Schedule string `json:"schedule,omitempty"`
	// Command is the desired command (executed in a Bash shell) to be executed.
	Command string `json:"command,omitempty"`
	// Important: Run "make" to regenerate code after modifying this file
}

// AtStatus defines the observed state of At
type AtStatus struct {
	// Phase represents the state of the schedule: until the command is executed
	// it is PENDING, afterwards it is DONE.
	Phase string `json:"phase,omitempty"`
	// Important: Run "make" to regenerate code after modifying this file
}

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// At is the Schema for the ats API
// +k8s:openapi-gen=true
// +kubebuilder:subresource:status
type At struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   AtSpec   `json:"spec,omitempty"`
	Status AtStatus `json:"status,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// AtList contains a list of At
type AtList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []At `json:"items"`
}

func init() {
	SchemeBuilder.Register(&At{}, &AtList{})
}
[root@workstation cnat]#
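One marker worth pointing out: +kubebuilder:subresource:status enables the /status subresource on At, so status updates go through a separate endpoint from spec updates. That is why the reconciler in Step 6 persists the phase with the status client rather than a plain update; the relevant pattern looks roughly like this (an excerpt-style sketch, assuming the reconciler r and the fetched instance from Step 6):

// Move the object to the next phase and persist it via the status
// subresource (enabled by +kubebuilder:subresource:status); a plain
// r.Update() would write the spec instead.
instance.Status.Phase = cnatv1alpha1.PhaseRunning
if err := r.Status().Update(ctx, instance); err != nil {
	return ctrl.Result{}, err
}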
Step 4: Regenerate the code.
[root@workstation cnat]# make manifests
/root/kubebuilder/projects/cnat/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
controllers/at_controller.go:39:2: no required module provides package github.com/programming-kubernetes/cnat/cnat-kubebuilder/pkg/apis/cnat/v1alpha1; to add it:
	go get github.com/programming-kubernetes/cnat/cnat-kubebuilder/pkg/apis/cnat/v1alpha1
controllers/at_controller.go:36:2: no required module provides package sigs.k8s.io/controller-runtime/pkg/runtime/log; to add it:
	go get sigs.k8s.io/controller-runtime/pkg/runtime/log
main.go:35:2: found packages at (at_controller.go) and controllers (suite_test.go) in /root/kubebuilder/projects/cnat/controllers
Error: not all generators ran successfully
run `controller-gen rbac:roleName=manager-role crd webhook paths=./... output:crd:artifacts:config=config/crd/bases -w` to see all available markers, or `controller-gen rbac:roleName=manager-role crd webhook paths=./... output:crd:artifacts:config=config/crd/bases -h` for usage
make: *** [manifests] Error 1
[root@workstation cnat]#
This step fails. Following the hint in the output, I tried adding the log dependency with go get:
[root@workstation cnat]# go get sigs.k8s.io/controller-runtime/pkg/runtime/log
go get: module sigs.k8s.io/controller-runtime@upgrade found (v0.11.2), but does not contain package sigs.k8s.io/controller-runtime/pkg/runtime/log
[root@workstation cnat]#
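go get cannot fetch that package because it no longer exists in current controller-runtime releases: the logging helper has moved to sigs.k8s.io/controller-runtime/pkg/log (which is exactly what the final controller code in Step 6 imports), and the old book package path has to be replaced by our own API package. The import change looks roughly like this (a sketch; the full import block is in Step 6):

import (
	// Old imports from the book, which no longer resolve:
	//   logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
	//   cnatv1alpha1 "github.com/programming-kubernetes/cnat/cnat-kubebuilder/pkg/apis/cnat/v1alpha1"

	// Replacements used in this project:
	"sigs.k8s.io/controller-runtime/pkg/log"

	cnatv1alpha1 "my.domain/cnat/api/v1alpha1"
)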
Trying make manifests again, code generation succeeds:
[root@workstation cnat]# make manifests
/root/kubebuilder/projects/cnat/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
[root@workstation cnat]#
Step 5: Install the CRD.
[root@workstation cnat]# make install
/root/kubebuilder/projects/cnat/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/root/kubebuilder/projects/cnat/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/ats.cnat.programming-kubernetes.info configured
[root@workstation cnat]#
Step 6: Modify at_controller.go, adding our business logic to implement the requirements above.
[root@workstation cnat]# cat controllers/at_controller.go
/*
Copyright 2019 We, the Kube people.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package controllers

import (
	"context"
	"fmt"
	"strings"
	"time"

	"k8s.io/klog/v2"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	//"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	//"sigs.k8s.io/controller-runtime/pkg/handler"
	//"sigs.k8s.io/controller-runtime/pkg/manager"
	//"sigs.k8s.io/controller-runtime/pkg/reconcile"
	// logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
	//"sigs.k8s.io/controller-runtime/pkg/source"

	// cnatv1alpha1 "github.com/programming-kubernetes/cnat/cnat-kubebuilder/pkg/apis/cnat/v1alpha1"
	cnatv1alpha1 "my.domain/cnat/api/v1alpha1"
)

// AtReconciler reconciles a At object
type AtReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

//+kubebuilder:rbac:groups=cnat.programming-kubernetes.info,resources=ats,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cnat.programming-kubernetes.info,resources=ats/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cnat.programming-kubernetes.info,resources=ats/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the At object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.11.0/pkg/reconcile
func (r *AtReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)
	// reqLogger := log.WithValues("namespace", request.Namespace, "at", request.Name)
	klog.Infof("=== Reconciling At")

	// Fetch the At instance
	instance := &cnatv1alpha1.At{}
	err := r.Get(context.TODO(), req.NamespacedName, instance)
	if err != nil {
		if errors.IsNotFound(err) {
			// Request object not found, could have been deleted after reconcile request - return and don't requeue:
			return ctrl.Result{}, nil
		}
		// Error reading the object - requeue the request:
		return ctrl.Result{}, err
	}

	// If no phase set, default to pending (the initial phase):
	if instance.Status.Phase == "" {
		instance.Status.Phase = cnatv1alpha1.PhasePending
	}

	// Now let's make the main case distinction: implementing
	// the state diagram PENDING -> RUNNING -> DONE
	switch instance.Status.Phase {
	case cnatv1alpha1.PhasePending:
		klog.Infof("Phase: PENDING")
		// As long as we haven't executed the command yet, we need to check if it's time already to act:
		klog.Infof("Checking schedule", "Target", instance.Spec.Schedule)
		// Check if it's already time to execute the command with a tolerance of 2 seconds:
		d, err := timeUntilSchedule(instance.Spec.Schedule)
		if err != nil {
			//reqLogger.Error(err, "Schedule parsing failure")
			klog.Error(err, "Schedule parsing failure")
			// Error reading the schedule. Wait until it is fixed.
			return ctrl.Result{}, err
		}
		klog.Infof("Schedule parsing done", "Result", fmt.Sprintf("diff=%v", d))
		if d > 0 {
			// Not yet time to execute the command, wait until the scheduled time
			return ctrl.Result{RequeueAfter: d}, nil
		}
		klog.Infof("It's time!", "Ready to execute", instance.Spec.Command)
		instance.Status.Phase = cnatv1alpha1.PhaseRunning
	case cnatv1alpha1.PhaseRunning:
		klog.Infof("Phase: RUNNING")
		pod := newPodForCR(instance)
		// Set At instance as the owner and controller
		if err := controllerutil.SetControllerReference(instance, pod, r.Scheme); err != nil {
			// requeue with error
			return ctrl.Result{}, err
		}
		found := &corev1.Pod{}
		err = r.Get(context.TODO(), types.NamespacedName{Name: pod.Name, Namespace: pod.Namespace}, found)
		// Try to see if the pod already exists and if not
		// (which we expect) then create a one-shot pod as per spec:
		if err != nil && errors.IsNotFound(err) {
			err = r.Create(context.TODO(), pod)
			if err != nil {
				// requeue with error
				return ctrl.Result{}, err
			}
			klog.Infof("Pod launched", "name", pod.Name)
		} else if err != nil {
			// requeue with error
			return ctrl.Result{}, err
		} else if found.Status.Phase == corev1.PodFailed || found.Status.Phase == corev1.PodSucceeded {
			klog.Infof("Container terminated", "reason", found.Status.Reason, "message", found.Status.Message)
			instance.Status.Phase = cnatv1alpha1.PhaseDone
		} else {
			// don't requeue because it will happen automatically when the pod status changes
			return ctrl.Result{}, nil
		}
	case cnatv1alpha1.PhaseDone:
		klog.Infof("Phase: DONE")
		return ctrl.Result{}, nil
	default:
		klog.Infof("NOP")
		return ctrl.Result{}, nil
	}

	// Update the At instance, setting the status to the respective phase:
	err = r.Status().Update(context.TODO(), instance)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Don't requeue. We should be reconciled because either the pod or the CR changes.
	//return reconcile.Result{}, nil
	return ctrl.Result{}, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *AtReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cnatv1alpha1.At{}).
		Complete(r)
}

// newPodForCR returns a busybox pod with the same name/namespace as the cr
func newPodForCR(cr *cnatv1alpha1.At) *corev1.Pod {
	labels := map[string]string{
		"app": cr.Name,
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      cr.Name + "-pod",
			Namespace: cr.Namespace,
			Labels:    labels,
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:    "busybox",
					Image:   "hb.cn/repo/busybox:1.28",
					Command: strings.Split(cr.Spec.Command, " "),
				},
			},
			RestartPolicy: corev1.RestartPolicyOnFailure,
		},
	}
}

// timeUntilSchedule parses the schedule string and returns the time until the schedule.
// When it is overdue, the duration is negative.
func timeUntilSchedule(schedule string) (time.Duration, error) {
	now := time.Now().UTC()
	layout := "2006-01-02T15:04:05Z"
	s, err := time.Parse(layout, schedule)
	if err != nil {
		return time.Duration(0), err
	}
	return s.Sub(now), nil
}
[root@workstation cnat]#
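To make the schedule format concrete, here is a small standalone sketch (my own, not from the book) of the parsing that timeUntilSchedule performs. The layout "2006-01-02T15:04:05Z" expects a UTC timestamp like the one used in the sample CR in Step 8:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Same layout as timeUntilSchedule in the controller: a UTC timestamp.
	layout := "2006-01-02T15:04:05Z"
	schedule := "2022-04-01T07:31:00Z" // value from the sample CR

	s, err := time.Parse(layout, schedule)
	if err != nil {
		panic(err)
	}

	// Positive duration: not yet due, so the controller requeues with RequeueAfter.
	// Zero or negative: the scheduled time has passed and the Pod gets created.
	d := s.Sub(time.Now().UTC())
	fmt.Printf("time until schedule: %v\n", d)
}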
Notes:
The code above watches the At custom resource and reads its spec.schedule field, whose value is a target time. It keeps comparing that target against the current time, and once the target time is reached it deploys a new Pod.
The newPodForCR() function builds that new Pod, using the image hb.cn/repo/busybox:1.28. This is my own private Harbor registry; you can just as well use the busybox image from Docker Hub or any other public registry.
The container's Command field is set to the value of spec.command from the CR, so as soon as the container starts it executes that command, i.e. echo YAY.
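For clarity, the command string from the CR is split on spaces and handed to the container as its command; a tiny standalone illustration of that step (my own, mirroring what newPodForCR does):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// spec.command from the CR, as a single string:
	command := "echo YAY"

	// Same transformation as in newPodForCR: split on spaces to get the
	// container's Command slice, e.g. ["echo", "YAY"].
	fmt.Println(strings.Split(command, " "))
}

Note that because the Pod spec sets Command directly (no shell wrapper), the command is executed directly rather than through a Bash shell, which is fine for a simple echo.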
Step 7: Run the controller.
[root@workstation cnat]# make run
/root/kubebuilder/projects/cnat/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/root/kubebuilder/projects/cnat/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go run ./main.go
1.6487973694384649e+09  INFO    controller-runtime.metrics      Metrics server is starting to listen    {"addr": ":8080"}
1.648797369441649e+09   INFO    setup   starting manager
1.648797369442924e+09   INFO    Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
1.648797369443343e+09   INFO    Starting server {"kind": "health probe", "addr": "[::]:8081"}
1.6487973694439917e+09  INFO    controller.at   Starting EventSource    {"reconciler group": "cnat.programming-kubernetes.info", "reconciler kind": "At", "source": "kind source: *v1alpha1.At"}
1.6487973694440632e+09  INFO    controller.at   Starting Controller     {"reconciler group": "cnat.programming-kubernetes.info", "reconciler kind": "At"}
1.648797369544806e+09   INFO    controller.at   Starting workers        {"reconciler group": "cnat.programming-kubernetes.info", "reconciler kind": "At", "worker count": 1}
Step 8: Prepare a custom resource (CR).
[root@workstation cnat]# cat config/samples/my-cr.yaml
apiVersion: cnat.programming-kubernetes.info/v1alpha1
kind: At
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: at-sample
spec:
  # TODO(user): Add fields here
  schedule: "2022-04-01T07:31:00Z"
  command: "echo YAY"
[root@workstation cnat]#
Note:
In my environment the machine running the controller (workstation; see the setup described in the earlier posts) uses Beijing time, yet the schedule in the YAML above has to be written 8 hours earlier. I had not worked out why at first, but looking at the code it is almost certainly because the schedule is interpreted as UTC: timeUntilSchedule compares it against time.Now().UTC(), and Beijing time is UTC+8.
In short, if the current time is 2022-04-01 15:29:00 (Beijing) and I want the Pod created at 2022-04-01 15:31:00, I have to write "2022-04-01T07:31:00Z", as in the YAML above.
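If you would rather compute a valid schedule value from local (Beijing) time than do the -8h arithmetic by hand, a small helper like this works (a sketch of my own, using the same layout string as the controller):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Schedule the command for two minutes from now, converted to UTC and
	// formatted with the same layout the controller's timeUntilSchedule expects.
	layout := "2006-01-02T15:04:05Z"
	schedule := time.Now().Add(2 * time.Minute).UTC().Format(layout)
	fmt.Println(schedule) // paste this into spec.schedule in the CR
}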
Step 9: Apply the custom resource (CR).
[root@workstation samples]#
[root@workstation samples]# kubectl apply -f my-cr.yaml
at.cnat.programming-kubernetes.info/at-sample created
[root@workstation samples]#
Verification:
Verification 1: Check the controller logs.
You can see that the current phase is PENDING, with 1 minute 47 seconds left before the Pod will be created.
I0401 15:29:12.009964   34399 at_controller.go:69] === Reconciling At
I0401 15:29:12.010004   34399 at_controller.go:91] Phase: PENDING
I0401 15:29:12.010012   34399 at_controller.go:93] Checking schedule%!(EXTRA string=Target, string=2022-04-01T07:31:00Z)
I0401 15:29:12.010025   34399 at_controller.go:103] Schedule parsing done%!(EXTRA string=Result, string=diff=1m47.989981839s)
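A side note on the %!(EXTRA ...) fragments in these lines: they appear because the controller passes key/value pairs to klog.Infof, which only understands printf-style arguments. If you want cleaner output, one option (my suggestion, not part of the book's code) is klog's structured variant, assuming the instance and d variables from the PENDING branch of the reconciler:

// Structured logging: key/value pairs are rendered properly instead of
// being appended as %!(EXTRA ...) by Infof.
klog.InfoS("Checking schedule", "target", instance.Spec.Schedule)
klog.InfoS("Schedule parsing done", "diff", d)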
Verification 2: After waiting the 1 minute 47 seconds, the controller prints the Pod-creation messages.
Note in particular the line: It's time!%!(EXTRA string=Ready to execute, string=echo YAY)
I0401 15:31:00.000813   34399 at_controller.go:69] === Reconciling At
I0401 15:31:00.000927   34399 at_controller.go:91] Phase: PENDING
I0401 15:31:00.000942   34399 at_controller.go:93] Checking schedule%!(EXTRA string=Target, string=2022-04-01T07:31:00Z)
I0401 15:31:00.000987   34399 at_controller.go:103] Schedule parsing done%!(EXTRA string=Result, string=diff=-954.261μs)
I0401 15:31:00.000997   34399 at_controller.go:108] It's time!%!(EXTRA string=Ready to execute, string=echo YAY)
I0401 15:31:00.026744   34399 at_controller.go:69] === Reconciling At
I0401 15:31:00.026961   34399 at_controller.go:111] Phase: RUNNING
I0401 15:31:00.165139   34399 at_controller.go:128] Pod launched%!(EXTRA string=name, string=at-sample-pod)
I0401 16:21:35.180053   34399 at_controller.go:69] === Reconciling At
Verification 3: Check the log printed by the busybox container.
[root@master ~]# k get pods
NAME            READY   STATUS      RESTARTS   AGE
at-sample-pod   0/1     Completed   0          38m

[root@master ~]# k logs pod/at-sample-pod
YAY
[root@master ~]#
The YAY above shows that the controller created the Pod successfully and that the container inside the Pod ran as expected.
Verification 4: Describe ats/at-sample; its Status.Phase field has changed to RUNNING.
[root@master ~]# k describe ats/at-sample
Name:         at-sample
Namespace:    default
Labels:       controller-tools.k8s.io=1.0
Annotations:  <none>
API Version:  cnat.programming-kubernetes.info/v1alpha1
Kind:         At
...
Status:
  Phase:  RUNNING
Events:   <none>
[root@master ~]#
References:
1) The book "Programming Kubernetes" (《Kubernetes编程》)
2) GitHub: https://github.com/programming-kubernetes
Summary
With a few dozen lines of reconciler logic, the At controller now moves a custom resource through PENDING -> RUNNING -> DONE, creates a one-shot busybox Pod at the scheduled (UTC) time, and the Pod prints YAY as required.