Kubernetes 1.7.2 installation: a step-by-step record
System information

cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

Environment information
| IP address | Hostname |
| ---------- | -------- |
| 10.10.6.11 | master   |
| 10.10.6.12 | node1    |
| 10.10.6.13 | node2    |
Part 1
Base environment setup (required on all three machines; the master is used as the example below)
Set the hostname:
hostnamectl set-hostname master
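Name resolution is not configured anywhere in this post, and the kubeadm init output later warns that the hostname "master" cannot be resolved. Adding the three hosts to /etc/hosts on every machine (an extra step, not part of the original write-up) avoids those warnings:

cat >> /etc/hosts <<EOF
10.10.6.11 master
10.10.6.12 node1
10.10.6.13 node2
EOF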
Disable SELinux and firewalld:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld
systemctl stop firewalld

Set the required kernel parameters (sysctl):
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

Configure the Docker and Kubernetes yum repositories:
cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF

cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
Part 2 (run on all three machines)
Install Docker and kubeadm
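The post goes straight to the registry-mirror configuration; the Docker installation step itself is not shown. Assuming the docker-repo yum repository configured in Part 1, it was presumably something like the following, after which the daemon can be configured:

yum install -y docker-engine   # package name assumed from the docker-repo configured above
mkdir -p /etc/docker           # make sure the directory exists before writing daemon.json

Configure an Aliyun registry mirror for the Docker daemon: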
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://vaflkxbk.mirror.aliyuncs.com"]
}
EOF
Start Docker, then check the Docker information:

docker version
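The start commands themselves are not shown in the post; presumably:

systemctl enable docker
systemctl start docker

docker version should then report both the client and server versions (17.12.0-ce in this setup, according to the kubeadm preflight warning later on).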
Install Kubernetes:
chmod +x /root/kubernetes.sh && sh /root/kubernetes.sh
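The body of /root/kubernetes.sh is not included in the post. Assuming it installs the kubeadm tool chain from the Aliyun Kubernetes repository configured in Part 1, a hypothetical reconstruction looks roughly like this:

#!/bin/bash
# Hypothetical sketch of /root/kubernetes.sh -- the original script is not shown.
# Installs kubelet, kubeadm and kubectl from the repo configured in Part 1.
# Versions may need to be pinned (e.g. kubelet-1.7.2) to match the v1.7.2 images below.
yum install -y kubelet kubeadm kubectl kubernetes-cni
systemctl enable kubelet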
Set the kubelet cgroup driver to cgroupfs:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

The value cgroupfs is taken from the "Cgroup Driver: cgroupfs" line reported by docker info; kubelet must be configured to use the same cgroup driver as Docker.
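To check which driver the local Docker daemon actually reports before editing the file:

docker info 2>/dev/null | grep -i "cgroup driver"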
Start the services:
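The exact commands are not listed in the post; after editing 10-kubeadm.conf, a typical sequence (an assumption, not the author's recorded commands) is:

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

At this stage kubelet will keep restarting until kubeadm init (or kubeadm join) writes its configuration; that is expected.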
Download the images:
cat images.sh
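The body of images.sh is not included in the post. Since gcr.io is usually unreachable from these hosts, a common pattern (assumed here, including the mirror registry name) is to pull each required image from a mirror and retag it to the gcr.io name that kubeadm v1.7.2 expects:

#!/bin/bash
# Hypothetical sketch of images.sh -- the original script body is not shown.
# Pulls each image from an assumed mirror and retags it to gcr.io/google_containers.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers   # assumed mirror registry
images=(
  kube-apiserver-amd64:v1.7.2
  kube-controller-manager-amd64:v1.7.2
  kube-scheduler-amd64:v1.7.2
  kube-proxy-amd64:v1.7.2
  etcd-amd64:3.0.17
  pause-amd64:3.0
  k8s-dns-kube-dns-amd64:1.14.4
  k8s-dns-dnsmasq-nanny-amd64:1.14.4
  k8s-dns-sidecar-amd64:1.14.4
  kubernetes-dashboard-amd64:v1.6.0
  flannel:v0.8.0-amd64
)
for img in "${images[@]}"; do
  docker pull "${MIRROR}/${img}"
  docker tag  "${MIRROR}/${img}" "gcr.io/google_containers/${img}"
  docker rmi  "${MIRROR}/${img}"
done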
Check that the downloaded images are correct:
docker images
REPOSITORY                                                TAG            IMAGE ID       CREATED         SIZE
gcr.io/google_containers/kube-apiserver-amd64             v1.7.2         4935105a20b1   6 months ago    186MB
gcr.io/google_containers/kube-proxy-amd64                 v1.7.2         13a7af96c7e8   6 months ago    115MB
gcr.io/google_containers/kube-controller-manager-amd64    v1.7.2         2790e95830f6   6 months ago    138MB
gcr.io/google_containers/kube-scheduler-amd64             v1.7.2         5db1f9874ae0   6 months ago    77.2MB
gcr.io/google_containers/flannel                          v0.8.0-amd64   9db3bab8c19e   6 months ago    50.7MB
gcr.io/google_containers/k8s-dns-sidecar-amd64            1.14.4         38bac66034a6   7 months ago    41.8MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64      1.14.4         f7f45b9cb733   7 months ago    41.4MB
gcr.io/google_containers/kubernetes-dashboard-amd64       v1.6.0         8b3d11182363   10 months ago   109MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64           1.14.4         f8363dbf447b   11 months ago   52.4MB
gcr.io/google_containers/etcd-amd64                       3.0.17         243830dae7dd   11 months ago   169MB
gcr.io/google_containers/pause-amd64                      3.0            99e59f495ffa   21 months ago   747kB
Part 3
Run the initialization on the master (10.10.6.11).
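The kubeadm init command itself is not preserved in the post. Judging from the output below (Kubernetes v1.7.2, preflight checks running) and from the 10.244.0.0/16 pod network used by flannel later, it was presumably something like:

kubeadm init --kubernetes-version=v1.7.2 --pod-network-cidr=10.244.0.0/16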
Output:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "master" could not be reached
[preflight] WARNING: hostname "master" lookup master on 114.114.114.114:53: no such host
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.6.11]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 31.001278 seconds
[token] Using token: 863f67.19babbff7bfe8543
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 863f67.19babbff7bfe8543 10.10.6.11:6443
Set the KUBECONFIG environment variable; here it is placed in /etc/profile:
export KUBECONFIG=/etc/kubernetes/admin.conf
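One way to persist the variable and apply it to the current shell (assuming a root shell on the master):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
source /etc/profile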
Download kube-flannel-rbac.yml and kube-flannel.yml:
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml

The flannel image referenced in kube-flannel.yml must match the flannel image downloaded above (gcr.io/google_containers/flannel:v0.8.0-amd64).

vi kube-flannel-rbac.yml
# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system

vi kube-flannel.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: gcr.io/google_containers/flannel:v0.8.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: gcr.io/google_containers/flannel:v0.8.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Run the following commands:
kubectl --namespace kube-system apply -f kube-flannel-rbac.yml
kubectl --namespace kube-system apply -f kube-flannel.yml

On the two node machines, run:
kubeadm join --token 863f67.19babbff7bfe8543 10.10.6.11:6443 --skip-preflight-checks

Output:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "10.10.6.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.6.11:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://10.10.6.11:6443"
[discovery] Successfully established connection with API Server "10.10.6.11:6443"
[bootstrap] Detected server version: v1.7.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

On the master, check the cluster state:
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                       1/1       Running   0          2h
kube-system   kube-apiserver-master             1/1       Running   0          2h
kube-system   kube-controller-manager-master    1/1       Running   0          2h
kube-system   kube-dns-2425271678-glrxd         3/3       Running   0          2h
kube-system   kube-flannel-ds-7tb2x             2/2       Running   0          2h
kube-system   kube-flannel-ds-pvwfv             2/2       Running   0          2h
kube-system   kube-flannel-ds-t5b3t             2/2       Running   1          2h
kube-system   kube-proxy-2k10j                  1/1       Running   0          2h
kube-system   kube-proxy-6tdhl                  1/1       Running   0          2h
kube-system   kube-proxy-dgfrb                  1/1       Running   0          2h
kube-system   kube-scheduler-master             1/1       Running   0          2h

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                              READY     STATUS    RESTARTS   AGE       IP           NODE
etcd-master                       1/1       Running   0          2h        10.10.6.11   master
kube-apiserver-master             1/1       Running   0          2h        10.10.6.11   master
kube-controller-manager-master    1/1       Running   0          2h        10.10.6.11   master
kube-dns-2425271678-glrxd         3/3       Running   0          2h        10.244.0.3   master
kube-flannel-ds-7tb2x             2/2       Running   0          2h        10.10.6.13   node2
kube-flannel-ds-pvwfv             2/2       Running   0          2h        10.10.6.11   master
kube-flannel-ds-t5b3t             2/2       Running   1          2h        10.10.6.12   node1
kube-proxy-2k10j                  1/1       Running   0          2h        10.10.6.13   node2
kube-proxy-6tdhl                  1/1       Running   0          2h        10.10.6.12   node1
kube-proxy-dgfrb                  1/1       Running   0          2h        10.10.6.11   master
kube-scheduler-master             1/1       Running   0          2h        10.10.6.11   master
[root@master ~]#

Make sure every pod is in the Running state.
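Optionally (not shown in the original post), also confirm that all three nodes have joined and report Ready:

kubectl get nodes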
Reposted from: https://www.cnblogs.com/sxwen/p/8422972.html