Importing RKE into Rancher
1. Setting Up an RKE Cluster
1.1 Overview
RKE is the Rancher Kubernetes Engine. With the rke tool you can quickly and easily set up a Kubernetes cluster.
The key to the setup is the environment: if the environment is configured correctly, everything goes smoothly.
1.2 Environment Preparation
First, the dependency versions used here:
| # | Name | Version | Notes |
| --- | --- | --- | --- |
| 1 | ubuntu | 16.04.1 | |
| 2 | docker | 17.03.2-ce | |
| 3 | rke | 0.1.1 | |
| 4 | k8s | 1.8.7 | |
| 5 | rancher | 2.0.0-alpha16 | |
Next, prepare the environment. Note that two machines are needed: one to run the RKE cluster and one as the control machine that runs the rke tool. The control machine should go through the same environment preparation as the cluster machine; otherwise you may run into hard-to-explain installation failures.
1. The editor bundled with a fresh Ubuntu install is awkward to use, so remove it and install vim:
```
# Not logged in as root yet, so sudo is needed to get root privileges
# First remove the old vi editor:
sudo apt-get remove vim-common
# Then install the new vim:
sudo apt-get install vim
```

2. Ubuntu does not allow root login by default. To avoid unnecessary trouble, enable root in the configuration; all subsequent steps are performed as the root user.
```
# 1. Change "PermitRootLogin prohibit-password" to "PermitRootLogin yes"
# 2. Uncomment "AuthorizedKeysFile %h/.ssh/authorized_keys"
sudo vi /etc/ssh/sshd_config
# 3. Switch to root
sudo su
# 4. Set the root password
sudo passwd root
# 5. Unlock the root password
sudo passwd -u root
# 6. Restart ssh
sudo service ssh restart
```

After making these changes, reopen the client session and log in as root.
3. Change the hostname. It must not contain uppercase letters and should preferably avoid special characters. Reboot the machine after the change.
```
vi /etc/hostname
```

4. Make sure the hosts file contains the following line:
```
127.0.0.1 localhost
```

Ubuntu also adds an entry mapping the hostname to 127.0.0.1 in the hosts file. Since the hostname was changed in step 3, this entry must be updated as well, but note that the IP should be changed to the machine's real IP:
```
172.16.10.99 worker02
```

5. Enable the cgroup memory and swap accounting limits by editing the /etc/default/grub configuration file, changing/adding these two entries:
```
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```

After editing, run:
```
update-grub
```

Then reboot the machine again.
6. Permanently disable the swap partition by editing /etc/fstab directly and commenting out the swap entry:
```
# swap was on /dev/sda6 during installation
#UUID=5e4b0d14-ad10-4d24-8f7c-4a07c4eb4d29 none swap sw 0 0
```

7. Also check the firewall (no rules by default), SELinux (not enabled by default), and IPv4 forwarding (enabled by default), and set up one-way passwordless SSH login from the rke control machine to the RKE cluster machine; a sketch of these checks follows below.
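A minimal sketch of the step-7 checks and the passwordless SSH setup, assuming a single cluster node at 172.16.10.99; adjust the address for your environment.

```
# Verify IPv4 forwarding is enabled (should print net.ipv4.ip_forward = 1)
sysctl net.ipv4.ip_forward

# Verify swap is off after the fstab change and a reboot (the Swap line should show 0)
free -m

# On the control machine: set up one-way passwordless SSH to the cluster node
ssh-keygen -t rsa                # accept the defaults if no key exists yet
ssh-copy-id root@172.16.10.99
ssh root@172.16.10.99 hostname   # should connect without a password prompt
```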
8. Download and install Docker 17.03.2-ce (https://docs.docker.com/install/linux/docker-ce/ubuntu/#upgrade-docker-ce). It is also a good idea to configure a registry mirror, otherwise pulling images will be very slow. Finally, add root to the docker user group.
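A minimal sketch of step 8 on Ubuntu 16.04, assuming Docker's apt repository has already been added as described in the linked docs; the mirror URL is a placeholder, and the exact package version string can be confirmed with `apt-cache madison docker-ce`.

```
# Install the pinned Docker version
apt-get update
apt-get install -y docker-ce=17.03.2~ce-0~ubuntu-xenial

# Optional: configure a registry mirror (replace the URL with one you actually use)
cat > /etc/docker/daemon.json <<'EOF'
{ "registry-mirrors": ["https://your-mirror.example.com"] }
EOF
systemctl restart docker

# Add root to the docker group
usermod -aG docker root
```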
1.3 Installing rke
The rke GitHub repository is https://github.com/rancher/rke. Go to the releases page and download the rke binary. Once downloaded, no other installation is needed; just make the file executable.
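For example, downloading it on the control machine might look like the sketch below; the release asset name and version here are assumptions, so check the releases page for the exact file name.

```
# Download the rke binary from the GitHub releases page (asset name assumed)
wget https://github.com/rancher/rke/releases/download/v0.1.1/rke_linux-amd64 -O rke
```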
```
chmod 777 rke
# Verify the installation
./rke -version
```

1.4 Deploying the RKE Cluster
1. Reboot the machine first. (Why reboot? Unclear; installation occasionally fails and a reboot fixes it, so reboot up front to improve the odds of success.)
2. Create a cluster.yml file with content similar to the following:
```
nodes:
  - address: 172.16.10.99
    user: root
    role: [controlplane,worker,etcd]

services:
  etcd:
    image: quay.io/coreos/etcd:latest
  kube-api:
    image: rancher/k8s:v1.8.3-rancher2
  kube-controller:
    image: rancher/k8s:v1.8.3-rancher2
  scheduler:
    image: rancher/k8s:v1.8.3-rancher2
  kubelet:
    image: rancher/k8s:v1.8.3-rancher2
  kubeproxy:
    image: rancher/k8s:v1.8.3-rancher2
```

Note the etcd image: do not swap it for another etcd image, or startup will fail with a 403 error. Images from quay.io are slow to pull, so you can pull this one in advance, but absolutely do not substitute it.
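Since quay.io is slow, a minimal sketch of pre-pulling the images on the cluster node (172.16.10.99) before running rke:

```
# Pre-pull the images referenced in cluster.yml so rke up does not stall on slow registries
docker pull quay.io/coreos/etcd:latest
docker pull rancher/k8s:v1.8.3-rancher2
```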
3. Run:
```
./rke up --config cluster.yml
```

The output looks like this:
```
root@master:/opt/rke# ./rke up --config cluster-99.yml
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [172.16.10.99]
INFO[0000] [network] Deploying port listener containers
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [172.16.10.99]
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [172.16.10.99]
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [172.16.10.99]
INFO[0002] [network] Port listener containers deployed successfully
INFO[0002] [network] Running all -> etcd port checks
INFO[0003] [network] Successfully started [rke-port-checker] container on host [172.16.10.99]
INFO[0004] [network] Successfully started [rke-port-checker] container on host [172.16.10.99]
INFO[0004] [network] Running control plane -> etcd port checks
INFO[0005] [network] Successfully started [rke-port-checker] container on host [172.16.10.99]
INFO[0005] [network] Running workers -> control plane port checks
INFO[0005] [network] Successfully started [rke-port-checker] container on host [172.16.10.99]
INFO[0006] [network] Checking KubeAPI port Control Plane hosts
INFO[0006] [network] Removing port listener containers
INFO[0006] [remove/rke-etcd-port-listener] Successfully removed container on host [172.16.10.99]
INFO[0007] [remove/rke-cp-port-listener] Successfully removed container on host [172.16.10.99]
INFO[0007] [remove/rke-worker-port-listener] Successfully removed container on host [172.16.10.99]
INFO[0007] [network] Port listener containers removed successfully
INFO[0007] [certificates] Attempting to recover certificates from backup on host [172.16.10.99]
INFO[0007] [certificates] No Certificate backup found on host [172.16.10.99]
INFO[0007] [certificates] Generating kubernetes certificates
INFO[0007] [certificates] Generating CA kubernetes certificates
INFO[0008] [certificates] Generating Kubernetes API server certificates
INFO[0008] [certificates] Generating Kube Controller certificates
INFO[0009] [certificates] Generating Kube Scheduler certificates
INFO[0011] [certificates] Generating Kube Proxy certificates
INFO[0011] [certificates] Generating Node certificate
INFO[0012] [certificates] Generating admin certificates and kubeconfig
INFO[0012] [certificates] Generating etcd-172.16.10.99 certificate and key
INFO[0013] [certificates] Temporarily saving certs to etcd host [172.16.10.99]
INFO[0018] [certificates] Saved certs to etcd host [172.16.10.99]
INFO[0018] [reconcile] Reconciling cluster state
INFO[0018] [reconcile] This is newly generated cluster
INFO[0018] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0024] Successfully Deployed local admin kubeconfig at [./kube_config_cluster-99.yml]
INFO[0024] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0024] Pre-pulling kubernetes images
INFO[0024] Kubernetes images pulled successfully
INFO[0024] [etcd] Building up Etcd Plane..
INFO[0025] [etcd] Successfully started [etcd] container on host [172.16.10.99]
INFO[0025] [etcd] Successfully started Etcd Plane..
INFO[0025] [controlplane] Building up Controller Plane..
INFO[0026] [controlplane] Successfully started [kube-api] container on host [172.16.10.99]
INFO[0026] [healthcheck] Start Healthcheck on service [kube-api] on host [172.16.10.99]
INFO[0036] [healthcheck] service [kube-api] on host [172.16.10.99] is healthy
INFO[0037] [controlplane] Successfully started [kube-controller] container on host [172.16.10.99]
INFO[0037] [healthcheck] Start Healthcheck on service [kube-controller] on host [172.16.10.99]
INFO[0037] [healthcheck] service [kube-controller] on host [172.16.10.99] is healthy
INFO[0038] [controlplane] Successfully started [scheduler] container on host [172.16.10.99]
INFO[0038] [healthcheck] Start Healthcheck on service [scheduler] on host [172.16.10.99]
INFO[0038] [healthcheck] service [scheduler] on host [172.16.10.99] is healthy
INFO[0038] [controlplane] Successfully started Controller Plane..
INFO[0038] [authz] Creating rke-job-deployer ServiceAccount
INFO[0038] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0038] [authz] Creating system:node ClusterRoleBinding
INFO[0038] [authz] system:node ClusterRoleBinding created successfully
INFO[0038] [certificates] Save kubernetes certificates as secrets
INFO[0039] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs]
INFO[0039] [state] Saving cluster state to Kubernetes
INFO[0039] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
INFO[0039] [worker] Building up Worker Plane..
INFO[0039] [sidekick] Sidekick container already created on host [172.16.10.99]
INFO[0040] [worker] Successfully started [kubelet] container on host [172.16.10.99]
INFO[0040] [healthcheck] Start Healthcheck on service [kubelet] on host [172.16.10.99]
INFO[0046] [healthcheck] service [kubelet] on host [172.16.10.99] is healthy
INFO[0046] [worker] Successfully started [kube-proxy] container on host [172.16.10.99]
INFO[0046] [healthcheck] Start Healthcheck on service [kube-proxy] on host [172.16.10.99]
INFO[0052] [healthcheck] service [kube-proxy] on host [172.16.10.99] is healthy
INFO[0052] [sidekick] Sidekick container already created on host [172.16.10.99]
INFO[0052] [healthcheck] Start Healthcheck on service [kubelet] on host [172.16.10.99]
INFO[0052] [healthcheck] service [kubelet] on host [172.16.10.99] is healthy
INFO[0052] [healthcheck] Start Healthcheck on service [kube-proxy] on host [172.16.10.99]
INFO[0052] [healthcheck] service [kube-proxy] on host [172.16.10.99] is healthy
INFO[0052] [sidekick] Sidekick container already created on host [172.16.10.99]
INFO[0052] [healthcheck] Start Healthcheck on service [kubelet] on host [172.16.10.99]
INFO[0052] [healthcheck] service [kubelet] on host [172.16.10.99] is healthy
INFO[0052] [healthcheck] Start Healthcheck on service [kube-proxy] on host [172.16.10.99]
INFO[0052] [healthcheck] service [kube-proxy] on host [172.16.10.99] is healthy
INFO[0052] [worker] Successfully started Worker Plane..
INFO[0052] [network] Setting up network plugin: flannel
INFO[0052] [addons] Saving addon ConfigMap to Kubernetes
INFO[0053] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
INFO[0053] [addons] Executing deploy job..
INFO[0058] [sync] Syncing nodes Labels and Taints
INFO[0058] [sync] Successfully synced nodes Labels and Taints
INFO[0058] [addons] Setting up KubeDNS
INFO[0058] [addons] Saving addon ConfigMap to Kubernetes
INFO[0058] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
INFO[0058] [addons] Executing deploy job..
INFO[0063] [addons] KubeDNS deployed successfully..
INFO[0063] [ingress] Setting up nginx ingress controller
INFO[0063] [addons] Saving addon ConfigMap to Kubernetes
INFO[0063] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller
INFO[0063] [addons] Executing deploy job..
INFO[0068] [ingress] ingress controller nginx is successfully deployed
INFO[0068] [addons] Setting up user addons..
INFO[0068] [addons] No user addons configured..
INFO[0068] Finished building Kubernetes cluster successfully
```

If an error occurs during the run, execute remove first and then up again. If that still fails, run remove, reboot the machine, and then run up once more. This usually succeeds; if it still fails after these steps, look into the specific error message.
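A minimal sketch of the retry sequence described above, assuming the configuration file is named cluster.yml:

```
# Tear down the failed deployment, then try again
./rke remove --config cluster.yml
./rke up --config cluster.yml
# If it still fails: remove, reboot the cluster machine, then run up once more
```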
2. Importing the RKE Cluster into Rancher 2.0 (Preview)
Click Create Cluster; on the far right there is an option to import an existing cluster.
The directory where the rke command was run will contain an automatically generated kube_config_cluster.yml file. Click Import on the page and import that file.
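Before importing, the cluster can be sanity-checked with the generated kubeconfig; this is a minimal sketch that assumes kubectl is installed on the control machine.

```
# Point kubectl at the kubeconfig generated by rke and check the node and system pods
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
kubectl get nodes
kubectl get pods -n kube-system
```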
Note: Rancher 2.0 is currently a development release with all kinds of issues and missing features; it is suitable only for evaluation, not for production use. Reportedly the 2.0 GA release will ship at the end of March.
3. Appendix
RKE installation notes: http://blog.csdn.net/csdn_duomaomao/article/details/79325759
Importing RKE into Rancher: http://blog.csdn.net/csdn_duomaomao/article/details/79325436
Reposted from: https://my.oschina.net/shyloveliyi/blog/1626397