Notes on Deploying the Ansible AWX Platform (awx-operator 0.30.0) on K8s with Helm
Preface
- Some notes on deploying AWX on K8s via Helm, shared with fellow readers
- The post covers the deployment process and how the problems encountered were solved
- How to use this post:
  - Familiarity with K8s is required
  - A pre-provisioned K8s + Helm environment is required
  - Network access beyond the GFW (a proxy) is required
- Corrections are welcome wherever my understanding falls short
Well, I hope the pandemic ends soon ^_^
Introduction
A brief introduction to AWX: AWX provides a web-based user interface, a REST API, and a task engine built on Ansible. It is one of the upstream projects of the Red Hat Ansible Automation Platform, i.e. the open-source counterpart of Red Hat's subscription product, Ansible Tower.
On physical machines, AWX can be deployed standalone, standalone with a remote database, or as a highly available cluster. Here we deploy AWX on K8s via awx-operator and, for convenience, we use Helm. The default configuration is single-node: AWX and PostgreSQL run on the same node, which should have at least 4 GB of memory and 20 GB of storage.
To learn more about AWX, see the project page: https://github.com/ansible/awx
The subscription product is Ansible Tower: https://docs.ansible.com/ansible-tower/index.html
To install AWX, see the installation guides:
- AWX installation docs: https://github.com/ansible/awx/blob/devel/INSTALL.md
- awx-operator installation docs: https://github.com/ansible/awx-operator
- Helm-based installation: https://github.com/ansible/awx-operator/blob/devel/.helm/starter/README.md
About awx-operator: an Ansible AWX Operator for Kubernetes, built with the Operator SDK and Ansible. An Operator can be loosely understood as the concrete implementation behind a CustomResourceDefinition, describing how AWX is deployed. Below are the custom resource objects present after AWX is deployed:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl get awxs,awxrestores,awxbackups
NAME                           AGE
awx.awx.ansible.com/awx-demo   14h
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl describe awx awx-demo
Name:         awx-demo
Namespace:    awx
Labels:       app.kubernetes.io/component=awx
              app.kubernetes.io/managed-by=awx-operator
              app.kubernetes.io/name=awx-demo
              app.kubernetes.io/operator-version=0.30.0
              app.kubernetes.io/part-of=awx-demo
Annotations:  <none>
API Version:  awx.ansible.com/v1beta1
Kind:         AWX
Metadata:
  Creation Timestamp:  2022-10-15T02:49:58Z
  Generation:          1
  Managed Fields:
    API Version:  awx.ansible.com/v1beta1
.........................
```

About Helm: it can be loosely understood as something like a role in Ansible, or a package manager such as yum, maven, or npm. Helm is used to define, install, and upgrade complex applications on Kubernetes. It describes an application as a Chart, making it easy to create, version, share, and publish complex software.
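The awx-demo resource above is declared by a small manifest. A sketch of such an AWX custom resource follows; apart from the apiVersion, kind, and name (which match the describe output above), the spec field here is an assumption, so check the awx-operator samples for the authoritative schema.

```yaml
# Hypothetical AWX CR sketch -- see the awx-operator repo for real samples.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: awx
spec:
  # assumed field; matches the NodePort service observed later in this post
  service_type: nodeport
```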
Environment Requirements
A pre-provisioned K8s cluster is required; version 1.22 is used here.

```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE    VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   301d   v1.22.2
vms82.liruilongs.github.io   Ready    <none>                 301d   v1.22.2
vms83.liruilongs.github.io   Ready    <none>                 301d   v1.22.2
```

Helm must be installed:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
```

Node information:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$hostnamectl
   Static hostname: vms81.liruilongs.github.io
         Icon name: computer-vm
           Chassis: vm
        Machine ID: a5d2de32a7d4411d9c12cd390b672d32
           Boot ID: 1fd2c0810f6d4058a224d1ff966c0e09
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.76.1.el7.x86_64
      Architecture: x86-64
```

Helm Deployment
Add the awx-operator Helm repo:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm repo add awx-operator https://ansible.github.io/awx-operator/
"awx-operator" has been added to your repositories
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "liruilong_repo" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "azure" chart repository
...Unable to get an update from the "ali" chart repository (https://apphub.aliyuncs.com):
        failed to fetch https://apphub.aliyuncs.com/index.yaml : 504 Gateway Timeout
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!
```

Search for the awx-operator Chart:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm search repo awx-operator
NAME                        CHART VERSION   APP VERSION   DESCRIPTION
awx-operator/awx-operator   0.30.0          0.30.0        A Helm chart for the AWX Operator
```

To install with custom values: `helm install my-awx-operator awx-operator/awx-operator -n awx --create-namespace -f myvalues.yaml`
If you want a customized installation, enable the corresponding switches in myvalues.yaml; HTTPS, an external PG database, a LoadBalancer service, LDAP authentication, and so on can all be configured there. For a template, pull the chart package and start from the values.yaml inside it. (Readers installing newer versions found they had to change enabled: false to true in the chart; if installation fails with the configuration shown here, try that.)
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator]
└─$cat values.yaml
AWX:
  # enable use of awx-deploy template
  enabled: false
  name: awx
  spec:
    admin_user: admin
...........
```

Here we install with the default configuration, so no values file needs to be specified:
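For reference, a minimal myvalues.yaml for a customized install might look like the sketch below. Only the keys visible in the snippet above come from the chart; verify everything against the chart's bundled values.yaml before use.

```yaml
# Hypothetical myvalues.yaml sketch -- verify against the chart's values.yaml.
AWX:
  # flip the deploy switch that defaults to false in the chart
  enabled: true
  name: awx
  spec:
    admin_user: admin
```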
```bash
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm install -n awx --create-namespace my-awx-operator awx-operator/awx-operator
NAME: my-awx-operator
LAST DEPLOYED: Mon Oct 10 16:29:24 2022
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 0.30.0
```

OK, the installation itself is done, but many of the images must be downloaded from outside the GFW, so some fixing is needed. For convenience, switch the current namespace first:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl config set-context $(kubectl config current-context) --namespace=awx
Context "kubernetes-admin@kubernetes" modified.
```

Check the Pod status:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl get pod -o wide
NAME                                               READY   STATUS         RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
awx-operator-controller-manager-79ff9599d8-mksmc   1/2     ErrImagePull   0          13m   10.244.171.167   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl get pod
NAME                                               READY   STATUS             RESTARTS   AGE
awx-operator-controller-manager-79ff9599d8-mksmc   1/2     ImagePullBackOff   0          13m
```

The image pull failed; let's fix the error.
```bash
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$kubectl describe pod awx-operator-controller-manager-79ff9599d8-mksmc | grep -i event -A 30
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  14m                   default-scheduler  Successfully assigned awx/awx-operator-controller-manager-79ff9599d8-mksmc to vms82.liruilongs.github.io
  Normal   Pulling    14m                   kubelet            Pulling image "quay.io/ansible/awx-operator:0.30.0"
  Normal   Started    13m                   kubelet            Started container awx-manager
  Normal   Pulled     13m                   kubelet            Successfully pulled image "quay.io/ansible/awx-operator:0.30.0" in 20.52788571s
  Normal   Created    13m                   kubelet            Created container awx-manager
  Warning  Failed     13m (x3 over 14m)     kubelet            Failed to pull image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     13m (x3 over 14m)     kubelet            Error: ErrImagePull
  Warning  Failed     12m (x5 over 13m)     kubelet            Error: ImagePullBackOff
  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
  Warning  Failed     11m                   kubelet            Failed to pull image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": dial tcp 74.125.203.82:443: i/o timeout
  Normal   BackOff    4m23s (x35 over 13m)  kubelet            Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
```

The kubelet keeps backing off pulling "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0".
This image must be fetched from outside the GFW: download it, then import it locally. If you have a Google account, you can download it from Google Cloud:
- Click "Run in Cloud Shell", then export the image
- Download the exported image
Upload it to the VM:
```bash
PS C:\Users\山河已無恙\Downloads> scp .\kube-rbac-proxy.tar root@192.168.26.81:~
root@192.168.26.81's password:
kube-rbac-proxy.tar                      100%   58MB 108.7MB/s   00:00
```

Import the image on each node:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a 'dest=/root/ src=../kube-rbac-proxy.tar'
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker load -i /root/kube-rbac-proxy.tar"
```

OK, this Pod is now healthy:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -owide
NAME                                               READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
awx-operator-controller-manager-79ff9599d8-mksmc   2/2     Running   0          19h   10.244.171.167   vms82.liruilongs.github.io   <none>           <none>
```

Check the events to confirm:
```bash
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  Failed   41m (x187 over 19h)     kubelet  Failed to pull image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   Pulling  36m (x214 over 19h)     kubelet  Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
  Normal   BackOff  6m31s (x4861 over 19h)  kubelet  Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0"
  Normal   Pulled   88s
```

Other resources (PG and the rest) have not been created yet. Check the awx-manager container logs in the Pod to troubleshoot:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl logs awx-operator-controller-manager-79ff9599d8-mksmc -c awx-manager
```

The playbook run errors out with "unable to retrieve the complete list of server APIs":
```bash
--------------------------- Ansible Task StdOut -------------------------------
TASK [Verify imagePullSecrets] *************************************************
task path: /opt/ansible/playbooks/awx.yml:10
-------------------------------------------------------------------------------
I1015 11:09:32.772623       8 request.go:601] Waited for 1.048239742s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/autoscaling/v2beta2?timeout=32s
{"level":"error","ts":1665832173.374363,"logger":"proxy","msg":"Unable to determine if virtual resource","gvk":"/v1, Kind=Secret","error":"unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server (\"Internal Server Error: \\\"/apis/metrics.k8s.io/v1beta1?timeout=32s\\\": the server could not find the requested resource\") has prevented the request from succeeding","stacktrace":"github.com/operator-framework/operator-sdk/internal/ansible/proxy.(*cacheResponseHandler).ServeHTTP\n\t/workspace/internal/ansible/proxy/cache_response.go:97\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2916\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1966"}
```

A fix for this problem was found in the following issues:
- https://github.com/kiali/kiali/issues/3239
- https://github.com/helm/helm/issues/6361#issuecomment-538220109
For the concrete steps, see: https://www.cnblogs.com/liruilong/p/16795064.html
After fixing that problem, run helm repo update again and redeploy. This step can usually be skipped; my network is unreliable, so I needed it:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "liruilong_repo" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "azure" chart repository
...Unable to get an update from the "ali" chart repository (https://apphub.aliyuncs.com):
        failed to fetch https://apphub.aliyuncs.com/index.yaml : 504 Gateway Timeout
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!
```

Since the chart was already installed, an upgrade is enough here:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$helm upgrade my-awx-operator awx-operator/awx-operator -n awx --create-namespace
Release "my-awx-operator" has been upgraded. Happy Helming!
NAME: my-awx-operator
LAST DEPLOYED: Sat Oct 15 21:16:28 2022
NAMESPACE: awx
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 0.30.0
```

Check the logs again to confirm; no error means we are fine:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl logs awx-operator-controller-manager-79ff9599d8-2v5fn -c awx-manager
```

Then check the Pod status again:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-postgres-13-0                             0/1     Pending   0          105s
awx-operator-controller-manager-79ff9599d8-2v5fn   2/2     Running   0          128m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get svc
NAME                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
awx-demo-postgres-13                              ClusterIP   None            <none>        5432/TCP   5m48s
awx-operator-controller-manager-metrics-service   ClusterIP   10.107.17.167   <none>        8443/TCP   132m
```

The PG Pod, awx-demo-postgres-13-0, is Pending; check the events:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pods awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  23s (x8 over 7m31s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME                                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-demo-postgres-13-0   Pending                                                     10m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  82s (x42 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
No resources found
```

OK, the Pod is Pending because there is no default StorageClass.
For a stateful application, a default StorageClass (dynamic volume provisioning) must exist before the StatefulSet is created: the SC dynamically handles PV and PVC creation, producing the PV that backs PG's data storage. So we need to create an SC, and before that we need a provisioner; the provisioner determines which backend storage is used when PVs are created dynamically.
For convenience, local storage is used as the backend here. In general a PV is network storage that belongs to no particular node, so NFS-backed setups are more common. The SC names its provisioner in the provisioner field. Once the StorageClass is created, PVCs that do not specify a class are served by the default SC.
Provisioner and SC creation: https://github.com/rancher/local-path-provisioner
The YAML file could not be downloaded directly, so I opened it in a browser, copied the content, and applied it. My cluster had no SC to begin with; if your cluster already has one, just set it as the default.
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$wget https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
--2022-10-15 21:45:02--  https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... failed: Connection refused.
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$vim local-path-storage.yaml
[New] 128L, 2932C written
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc -A
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$mkdir -p /opt/local-path-provisioner
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
```

Confirm it was created successfully:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m6s
```

Set it as the default SC (see https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/):
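For readers who cannot fetch the upstream YAML either, the StorageClass portion of local-path-storage.yaml boils down to roughly the manifest below, with the default-class annotation already applied (the fields mirror the kubectl get sc output above; this is a sketch, and the upstream file, which also contains the namespace, RBAC, Deployment, and ConfigMap objects, is authoritative):

```yaml
# Sketch of the local-path StorageClass, marked as the cluster default.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```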
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-path patched
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-postgres-13-0                             0/1     Pending   0          46m
awx-operator-controller-manager-79ff9599d8-2v5fn   2/2     Running   0          173m
```

Export the PVC's YAML, delete it, and recreate it:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc postgres-13-awx-demo-postgres-13-0 -o yaml > postgres-13-awx-demo-postgres-13-0.yaml
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl delete -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim "postgres-13-awx-demo-postgres-13-0" deleted
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim/postgres-13-awx-demo-postgres-13-0 created
```

Check the PVC status; this takes a moment, and Bound means it has been bound.
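For reference, stripped of server-generated metadata, the re-applied PVC amounts to roughly the sketch below (values inferred from the kubectl get pvc output in this post; the actual exported file is authoritative):

```yaml
# Hypothetical minimal PVC sketch -- the exported YAML is authoritative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-13-awx-demo-postgres-13-0
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  # storageClassName omitted: the new default SC (local-path) is picked up
```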
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME                                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-demo-postgres-13-0   Pending                                      local-path     3s
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
  Type    Reason                 Age   From                                                                                                Message
  ----    ------                 ----  ----                                                                                                -------
  Normal  WaitForPodScheduled    42s   persistentvolume-controller                                                                         waiting for pod awx-demo-postgres-13-0 to be scheduled
  Normal  ExternalProvisioning   41s   persistentvolume-controller                                                                         waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
  Normal  Provisioning           41s   rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8  External provisioner is provisioning volume for claim "awx/postgres-13-awx-demo-postgres-13-0"
  Normal  ProvisioningSucceeded  39s   rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8  Successfully provisioned volume pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-demo-postgres-13-0   Bound    pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3   8Gi        RWO            local-path     53s
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                    STORAGECLASS   REASON   AGE
pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3   8Gi        RWO            Delete           Bound    awx/postgres-13-awx-demo-postgres-13-0   local-path              54s
```

Check the Pods again: the PG Pod is now up, but none of the init containers of awx-demo-65d9bf775b-hc58x have started, most likely because the images cannot be pulled.
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods -o wide
NAME                                               READY   STATUS     RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
awx-demo-65d9bf775b-hc58x                          0/4     Init:0/1   0          4m42s   <none>           vms82.liruilongs.github.io   <none>           <none>
awx-demo-postgres-13-0                             1/1     Running    0          68m     10.244.171.180   vms82.liruilongs.github.io   <none>           <none>
awx-operator-controller-manager-79ff9599d8-m7t8k   2/2     Running    0          7m3s    10.244.171.178   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pod awx-demo-65d9bf775b-hc58x | grep -i -A 10 event
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m47s  default-scheduler  Successfully assigned awx/awx-demo-65d9bf775b-hc58x to vms82.liruilongs.github.io
  Normal  Pulling    4m46s  kubelet            Pulling image "quay.io/ansible/awx-ee:latest"
```

OK, then we import the image the same way as before:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$cd /root/ansible/
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a 'dest=/root/ src=../awx-ee.tar'
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker load -i /root/awx-ee.tar"
```

Check the other images:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl describe pods awx-demo-65d9bf775b-hc58x | grep -i image:
    Image:         quay.io/ansible/awx-ee:latest
    Image:         docker.io/redis:7
    Image:         quay.io/ansible/awx:21.7.0
    Image:         quay.io/ansible/awx:21.7.0
    Image:         quay.io/ansible/awx-ee:latest
```

You can pull the images manually on the worker nodes and confirm that they all pull successfully:
```bash
┌──[root@vms82.liruilongs.github.io]-[~]
└─$docker pull quay.io/ansible/awx:21.7.0
21.7.0: Pulling from ansible/awx
Digest: sha256:bca920f96fc6a77b72c4442088b53a90b22162cfa90503d3dcda4577afee58f8
Status: Image is up to date for quay.io/ansible/awx:21.7.0
quay.io/ansible/awx:21.7.0
┌──[root@vms82.liruilongs.github.io]-[~]
└─$docker pull docker.io/redis:7
7: Pulling from library/redis
Digest: sha256:c95835a74c37b3a784fb55f7b2c211bd20c650d5e55dae422c3caa9c01eb39fa
Status: Image is up to date for redis:7
docker.io/library/redis:7
┌──[root@vms82.liruilongs.github.io]-[~]
└─$docker pull quay.io/ansible/awx-ee:latest
latest: Pulling from ansible/awx-ee
Digest: sha256:a300d6522c9e4292c9f19b04e4544289cbcf7926bde4001131582f254d191494
Status: Image is up to date for quay.io/ansible/awx-ee:latest
quay.io/ansible/awx-ee:latest
```

Wait a while and all the Pods become healthy:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-65d9bf775b-hc58x                          4/4     Running   0          79m
awx-demo-postgres-13-0                             1/1     Running   0          143m
awx-operator-controller-manager-79ff9599d8-m7t8k   2/2     Running   0          81m
```

Check the SVC and test access:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get svc
NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
awx-demo-postgres-13                              ClusterIP   None             <none>        5432/TCP       143m
awx-demo-service                                  NodePort    10.104.176.210   <none>        80:30066/TCP   79m
awx-operator-controller-manager-metrics-service   ClusterIP   10.108.71.67    <none>        8443/TCP       82m
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$curl 192.168.26.82:30066
<!doctype html><html lang="en"><head><script nonce="cw6jhvbF7S5bfKJPsimyabathhaX35F5hIyR7emZNT0=" type="text/javascript">window.....
```

Get the admin password:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get secrets
NAME                                          TYPE                                  DATA   AGE
awx-demo-admin-password                       Opaque                                1      146m
awx-demo-app-credentials                      Opaque                                3      82m
awx-demo-broadcast-websocket                  Opaque                                1      146m
awx-demo-postgres-configuration               Opaque                                6      146m
awx-demo-receptor-ca                          kubernetes.io/tls                     2      82m
awx-demo-receptor-work-signing                Opaque                                2      82m
awx-demo-secret-key                           Opaque                                1      146m
awx-demo-token-sc92t                          kubernetes.io/service-account-token   3      82m
awx-operator-controller-manager-token-tpv2m   kubernetes.io/service-account-token   3      84m
default-token-864fk                           kubernetes.io/service-account-token   3      4h32m
redhat-operators-pull-secret                  Opaque                                1      146m
sh.helm.release.v1.my-awx-operator.v1         helm.sh/release.v1                    1      84m
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$echo $(kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode)
tP59YoIWSS6NgCUJYQUG4cXXJIaIc7ci
```

Access test
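Secret values in Kubernetes are just base64-encoded strings, which is why the one-liner above pipes the jsonpath output through `base64 --decode`. A tiny self-contained illustration of the same round trip, with a made-up password rather than the real one:

```shell
#!/bin/sh
# Simulate what the kubectl one-liner does: a Secret stores the
# password base64-encoded; decoding recovers the original value.
# NOTE: the password below is made up for the demo.
password='demo-admin-password'
encoded=$(printf '%s' "$password" | base64)          # as stored in the Secret
decoded=$(printf '%s' "$encoded" | base64 --decode)  # as printed by the one-liner
echo "$decoded"
```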
The default service type is NodePort, so the UI is reachable from any host on the subnet via a node IP plus the port: http://192.168.26.82:30066/#/login
I did not expect the UI to come up in Chinese; the internationalization is well done…
If you have a dashboard tool, you can take a quick look at the resources involved.
List all resources from the command line:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl api-resources -o name --verbs=list --namespaced | xargs -n 1 kubectl get --show-kind --ignore-not-found -n awx
NAME                                        DATA   AGE
configmap/awx-demo-awx-configmap            5      116m
configmap/awx-operator                      0      5h7m
configmap/awx-operator-awx-manager-config   1      119m
configmap/kube-root-ca.crt                  1      5h7m
NAME                                                        ENDPOINTS             AGE
endpoints/awx-demo-postgres-13                              10.244.171.180:5432   3h
endpoints/awx-demo-service                                  10.244.171.181:8052   116m
endpoints/awx-operator-controller-manager-metrics-service   10.244.171.178:8443   119m
LAST SEEN   TYPE     REASON    OBJECT                          MESSAGE
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Successfully pulled image "quay.io/ansible/awx-ee:latest" in 1h16m36.915786211s
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container init
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container init
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "docker.io/redis:7" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container redis
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container redis
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx:21.7.0" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-web
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-web
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx:21.7.0" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-task
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-task
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Container image "quay.io/ansible/awx-ee:latest" already present on machine
40m         Normal   Created   pod/awx-demo-65d9bf775b-hc58x   Created container awx-demo-ee
40m         Normal   Started   pod/awx-demo-65d9bf775b-hc58x   Started container awx-demo-ee
NAME                                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/postgres-13-awx-demo-postgres-13-0   Bound    pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3   8Gi        RWO            local-path     117m
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/awx-demo-65d9bf775b-hc58x                          4/4     Running   0          116m
pod/awx-demo-postgres-13-0                             1/1     Running   0          3h
pod/awx-operator-controller-manager-79ff9599d8-m7t8k   2/2     Running   0          119m
NAME                                                 TYPE                                  DATA   AGE
secret/awx-demo-admin-password                       Opaque                                1      3h
secret/awx-demo-app-credentials                      Opaque                                3      116m
secret/awx-demo-broadcast-websocket                  Opaque                                1      3h
secret/awx-demo-postgres-configuration               Opaque                                6      3h
secret/awx-demo-receptor-ca                          kubernetes.io/tls                     2      116m
secret/awx-demo-receptor-work-signing                Opaque                                2      116m
secret/awx-demo-secret-key                           Opaque                                1      3h
secret/awx-demo-token-sc92t                          kubernetes.io/service-account-token   3      116m
secret/awx-operator-controller-manager-token-tpv2m   kubernetes.io/service-account-token   3      119m
secret/default-token-864fk                           kubernetes.io/service-account-token   3      5h7m
secret/redhat-operators-pull-secret                  Opaque                                1      3h
secret/sh.helm.release.v1.my-awx-operator.v1         helm.sh/release.v1                    1      119m
NAME                                             SECRETS   AGE
serviceaccount/awx-demo                          1         116m
serviceaccount/awx-operator-controller-manager   1         119m
serviceaccount/default                           1         5h7m
NAME                                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/awx-demo-postgres-13                              ClusterIP   None             <none>        5432/TCP       3h
service/awx-demo-service                                  NodePort    10.104.176.210   <none>        80:30066/TCP   116m
service/awx-operator-controller-manager-metrics-service   ClusterIP   10.108.71.67     <none>        8443/TCP       119m
NAME                                                      CONTROLLER                              REVISION   AGE
controllerrevision.apps/awx-demo-postgres-13-85958bcbcd   statefulset.apps/awx-demo-postgres-13   1          3h
NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/awx-demo                          1/1     1            1           116m
deployment.apps/awx-operator-controller-manager   1/1     1            1           119m
NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/awx-demo-65d9bf775b                          1         1         1       116m
replicaset.apps/awx-operator-controller-manager-79ff9599d8   1         1         1       119m
NAME                                    READY   AGE
statefulset.apps/awx-demo-postgres-13   1/1     3h
NAME                           AGE
awx.awx.ansible.com/awx-demo   13h
NAME                                     HOLDER                                                                                   AGE
lease.coordination.k8s.io/awx-operator   awx-operator-controller-manager-79ff9599d8-m7t8k_7502aa73-eaad-4b61-868e-4af77edaf856   5d7h
NAME                                                                                   ADDRESSTYPE   PORTS   ENDPOINTS        AGE
endpointslice.discovery.k8s.io/awx-demo-postgres-13-4tc87                              IPv4          5432    10.244.171.180   3h
endpointslice.discovery.k8s.io/awx-demo-service-6gs4d                                  IPv4          8052    10.244.171.181   116m
endpointslice.discovery.k8s.io/awx-operator-controller-manager-metrics-service-7wtml   IPv4          8443    10.244.171.178   119m
LAST SEEN   TYPE     REASON    OBJECT                          MESSAGE
40m         Normal   Pulled    pod/awx-demo-65d9bf775b-hc58x   Successfully pulled image "quay.io/ansible/awx-ee:latest" in 1h16m36.915786211s
# ... the same event rows as above, listed again for events.events.k8s.io ...
NAME                                                                             ROLE                                     AGE
rolebinding.rbac.authorization.k8s.io/awx-demo                                   Role/awx-demo                            116m
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding       Role/awx-operator-awx-manager-role       119m
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding   Role/awx-operator-leader-election-role   119m
NAME                                                               CREATED AT
role.rbac.authorization.k8s.io/awx-demo                            2022-10-15T14:19:31Z
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role       2022-10-15T14:17:13Z
role.rbac.authorization.k8s.io/awx-operator-leader-election-role   2022-10-15T14:17:13Z
```

Well, that is everything I have to share about installing AWX with Helm. Keep at it ^_^
References
https://blog.csdn.net/m0_51691302/article/details/126288338
https://zenn.dev/asterisk9101/articles/kubernetes-1
https://www.youtube.com/watch?v=AYfqkTbCDAw
https://www.youtube.com/watch?v=gCqCtAEP6lc
P.S. The kube-rbac-proxy:v0.13.0 image has been uploaded to CSDN (0 points); download it if needed:
https://download.csdn.net/download/sanhewuyang/86765668