2017年2月21日 星期二

MikroTik RouterOS L2TP over IPSec connection setup

Writing every step out in full gets tedious, so the screenshots carry most of the detail.

L2TP settings:
IP >> Pool: add an IP pool for the L2TP clients (addresses used: 192.168.125.100~192.168.125.200)
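For reference, roughly the same step from the RouterOS terminal (the pool name is an assumption; the range matches the addresses above):
/ip pool add name=l2tp-pool ranges=192.168.125.100-192.168.125.200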

PPP >> Profiles: add a new L2TP profile
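A rough terminal equivalent; the profile name and local address are assumptions, since the screenshot values are not shown here:
/ppp profile add name=l2tp-profile local-address=192.168.125.1 remote-address=l2tp-pool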

PPP >> Interface: enable the L2TP server
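Terminal equivalent as a sketch; the default profile name and authentication method are assumptions:
/interface l2tp-server server set enabled=yes default-profile=l2tp-profile authentication=mschap2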

IPSec settings:
IP >> IPSec >> Peer: add a new peer entry

IP >> IPSec >> Proposals: add a new proposal
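A sketch of the peer and proposal from the terminal on RouterOS 6.x; the pre-shared key placeholder, exchange mode and algorithms are assumptions, so match them to your clients:
/ip ipsec peer add address=0.0.0.0/0 auth-method=pre-shared-key secret=<pre-shared-key> exchange-mode=main-l2tp passive=yes generate-policy=port-override
/ip ipsec proposal set [ find default=yes ] auth-algorithms=sha1 enc-algorithms=aes-128-cbc,3des pfs-group=none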


Add a user account:
PPP >> Secrets: add a user account
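Terminal equivalent; the username and password here are placeholders:
/ppp secret add name=vpnuser password=<password> service=l2tp profile=l2tp-profile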

Firewall settings:
IP >> Firewall >> NAT: add a rule that lets VPN clients reach the Internet
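A sketch of a matching NAT rule; the outgoing interface name is an assumption, and the source subnet covers the pool above:
/ip firewall nat add chain=srcnat src-address=192.168.125.0/24 out-interface=ether1-wan action=masquerade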

IP >> Firewall >> Mangle: add a rule so VPN clients can reach the internal network
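The screenshot for this rule is not included, so the exact mangle rule used here is unclear; one rule commonly added for L2TP/IPsec clients is TCP MSS clamping, roughly:
/ip firewall mangle add chain=forward src-address=192.168.125.0/24 protocol=tcp tcp-flags=syn tcp-mss=1361-65535 action=change-mss new-mss=1360 passthrough=yes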

With these settings in place, both the internal network and the Internet are reachable over the VPN.




2017年2月18日 星期六

Samba DC: changing the GPO password policy from the command line

    A Samba DC can be managed over RSAT to edit group policy, but applying the <Password Policy> kept failing.
Checking directly on the Samba DC from the command line, the values reported after editing the <Password Policy> through RSAT do not match what RSAT shows (not sure whether I'm the only one hitting this???)
The screenshot below shows the original default <Password Policy>:
even though I had already changed it through an RSAT session:
So the RSAT changes were apparently never applied on the Samba DC.

The only option was to change it directly on the Samba DC with samba-tool:
samba-tool domain passwordsettings set --complexity=off    # enable/disable password complexity
samba-tool domain passwordsettings set --history-length=0   # number of old passwords remembered
samba-tool domain passwordsettings set --min-pwd-age=0    # minimum password age in days
samba-tool domain passwordsettings set --max-pwd-age=0   # maximum password age in days
samba-tool domain passwordsettings set --min-pwd-length=6   # minimum password length in characters
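To verify the values actually changed on the DC, print the current policy:
samba-tool domain passwordsettings show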


2017年2月15日 星期三

Kubernetes SSL connection setup

Host environment:
vi /etc/hosts
192.168.60.153  c7-k8s01 c7-k8s01.tw.company  #master & docker
192.168.60.154  c7-k8s02 c7-k8s02.tw.company
192.168.60.155  c7-k8s03 c7-k8s03.tw.company
For testing, I added forward and reverse records on the DNS server and let the DHCP server assign the IPs, so I also added the following to ifcfg-eth0:
DOMAIN=tw.company
SEARCH=tw.company
so the host-to-IP mappings in the hosts file are not needed.
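As a quick sanity check (assuming the DHCP-assigned DNS server really serves tw.company), the search domain should show up in the resolver config after a network restart:
systemctl restart network
cat /etc/resolv.conf
# expected to contain something like:
# search tw.company
# nameserver <dns-server-ip>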

Steps run on the master host:
Create a temporary folder for the key files:
mkdir kube-key
cd kube-key

1. Create the Kubernetes cluster root CA certificate:
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

2. Create an OpenSSL configuration file, openssl.cnf:
vi openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = c7-k8s01.tw.company
IP.1 = 192.168.60.153

3. Generate the apiserver key and CSR:
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf

4. Sign apiserver.csr with the root CA key and certificate to produce apiserver.pem:
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf

5. On the master host, create an OpenSSL config file for every node and generate its key and CSR:
c7-k8s01 openssl-c7-k8s01.cnf (c7-k8s01 runs the master and the docker service on the same host, so it needs these steps too):
[root@c7-k8s01 kube-key]# mkdir c7-k8s01 c7-k8s02 c7-k8s03
[root@c7-k8s01 kube-key]# vi c7-k8s01/openssl-c7-k8s01.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = c7-k8s01.tw.company
IP.1 = 192.168.60.153

c7-k8s01 key & csr:
[root@c7-k8s01 kube-key]# openssl genrsa -out c7-k8s01/c7-k8s01-key.pem 2048
[root@c7-k8s01 kube-key]# openssl req -new -key c7-k8s01/c7-k8s01-key.pem -out c7-k8s01/c7-k8s01.csr -subj "/CN=c7-k8s01.tw.company" -config c7-k8s01/openssl-c7-k8s01.cnf
[root@c7-k8s01 kube-key]# ls c7-k8s01
c7-k8s01.csr  c7-k8s01-key.pem  openssl-c7-k8s01.cnf

Sign the c7-k8s01 CSR with the root CA to produce its certificate:
[root@c7-k8s01 kube-key]# openssl x509 -req -in c7-k8s01/c7-k8s01.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out c7-k8s01/c7-k8s01.pem -days 365 -extensions v3_req -extfile c7-k8s01/openssl-c7-k8s01.cnf
[root@c7-k8s01 kube-key]# ls c7-k8s01
c7-k8s01.csr  c7-k8s01-key.pem  c7-k8s01.pem  openssl-c7-k8s01.cnf
(Repeat step 5 for the other nodes to produce their certificate files; a scripted version is sketched below.)
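A sketch of scripting the same thing for the remaining nodes; the host names and IPs are taken from /etc/hosts above, and it is meant to run inside kube-key as root:
declare -A NODES=( [c7-k8s02]=192.168.60.154 [c7-k8s03]=192.168.60.155 )
for NODE in "${!NODES[@]}"; do
  IP=${NODES[$NODE]}
  mkdir -p "$NODE"
  # write the per-node OpenSSL config with its own SAN entries
  cat > "$NODE/openssl-$NODE.cnf" <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $NODE.tw.company
IP.1 = $IP
EOF
  # key, CSR and CA-signed certificate, same as step 5
  openssl genrsa -out "$NODE/$NODE-key.pem" 2048
  openssl req -new -key "$NODE/$NODE-key.pem" -out "$NODE/$NODE.csr" -subj "/CN=$NODE.tw.company" -config "$NODE/openssl-$NODE.cnf"
  openssl x509 -req -in "$NODE/$NODE.csr" -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out "$NODE/$NODE.pem" -days 365 -extensions v3_req -extfile "$NODE/openssl-$NODE.cnf"
done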

6. At this point the Kubernetes cluster master should have the following SSL certificate files:
[root@c7-k8s01 kube-key]# ls -lR
-rw-r--r--. 1 root root 1045 Nov 15 22:04 apiserver.csr
-rw-r--r--. 1 root root 1679 Nov 15 22:04 apiserver-key.pem
-rw-r--r--. 1 root root 1143 Nov 15 22:05 apiserver.pem
-rw-r--r--. 1 root root 1679 Nov 15 21:53 ca-key.pem
-rw-r--r--. 1 root root 1147 Nov 15 21:53 ca.pem
-rw-r--r--. 1 root root 17 Nov 15 22:39 ca.srl
drwxr-xr-x. 2 root root 98 Nov 15 22:34 c7-k8s02
drwxr-xr-x. 2 root root 98 Nov 15 22:37 c7-k8s03
-rw-r--r--. 1 root root 319 Nov 15 21:56 openssl.cnf

./c7-k8s01:
-rw-r--r--. 1 root root 261 Nov 15 22:32 openssl-c7-k8s01.cnf
-rw-r--r--. 1 root root 997 Nov 15 22:33 c7-k8s01.csr
-rw-r--r--. 1 root root 1679 Nov 15 22:33 c7-k8s01-key.pem
-rw-r--r--. 1 root root 1094 Nov 15 22:34 c7-k8s01.pem

./c7-k8s02:
-rw-r--r--. 1 root root 261 Nov 15 22:32 openssl-c7-k8s02.cnf
-rw-r--r--. 1 root root 997 Nov 15 22:33 c7-k8s02.csr
-rw-r--r--. 1 root root 1679 Nov 15 22:33 c7-k8s02-key.pem
-rw-r--r--. 1 root root 1094 Nov 15 22:34 c7-k8s02.pem

./c7-k8s03:
-rw-r--r--. 1 root root 260 Nov 15 22:36 openssl-c7-k8s03.cnf
-rw-r--r--. 1 root root 997 Nov 15 22:37 c7-k8s03.csr
-rw-r--r--. 1 root root 1679 Nov 15 22:36 c7-k8s03-key.pem
-rw-r--r--. 1 root root 1094 Nov 15 22:37 c7-k8s03.pem

Configure the master node and the other nodes to use the PEM SSL certificates:
Master Node (c7-k8s01):
7. Copy the generated PEM files into the /etc/kubernetes/ssl folder:
[root@c7-k8s01 kube-key]# mkdir -p /etc/kubernetes/ssl
[root@c7-k8s01 kube-key]# cd /etc/kubernetes/ssl
[root@c7-k8s01 ssl]# cp /root/kube-key/ca.pem .
[root@c7-k8s01 ssl]# cp /root/kube-key/apiserver.pem .
[root@c7-k8s01 ssl]# cp /root/kube-key/apiserver-key.pem .
[root@c7-k8s01 ssl]# ls -l
-rw-r--r--. 1 root root 1679 Nov 15 22:46 apiserver-key.pem
-rw-r--r--. 1 root root 1143 Nov 15 22:46 apiserver.pem
-rw-r--r--. 1 root root 1147 Nov 15 22:46 ca.pem

8. Create a kubeconfig file in YAML format:
[root@c7-k8s01 ~]# vi /etc/kubernetes/cm-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/apiserver.pem
    client-key: /etc/kubernetes/ssl/apiserver-key.pem
contexts:
- context:
    cluster: local
    user: controllermanager
  name: kubelet-context
current-context: kubelet-context

9. Modify the controller-manager, scheduler and apiserver service settings on the master node:
(the controller-manager, scheduler and apiserver all run on the same host, so they keep using the insecure 8080 port between themselves)
[root@c7-k8s01 ssl]# vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=https://192.168.60.153:6443"

[root@c7-k8s01 ssl]# vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--secure-port=6443 --insecure-port=8080"
#KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.60.153:2379"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.60.153:2379,http://192.168.60.154:2379,http://192.168.60.155:2379"    #ETCD cluster
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/ssl/ca.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem"

[root@c7-k8s01 ssl]# vi /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --master=http://192.168.60.153:8080 --kubeconfig=/etc/kubernetes/cm-kubeconfig.yaml"

[root@c7-k8s01 ssl]# vi /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--master=http://192.168.60.153:8080 --kubeconfig=/etc/kubernetes/cm-kubeconfig.yaml"

Restart the related services:
for SERVICE in kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICE
systemctl is-active $SERVICE
done

10. Check the status of all the services:
for SERVICE in kube-controller-manager kube-scheduler kube-apiserver; do
systemctl status $SERVICE -l
done

11. Confirm the SSL certificates on the master node are configured correctly:
[root@c7-k8s01 ssl]# curl https://192.168.60.153:6443/api/v1/nodes --cert /etc/kubernetes/ssl/apiserver.pem --key /etc/kubernetes/ssl/apiserver-key.pem --cacert /etc/kubernetes/ssl/ca.pem

The following steps run on the worker nodes: c7-k8s02, c7-k8s03, and also c7-k8s01 (the master and docker run on the same host, so it needs them too):
12. Copy the generated PEM files into the /etc/kubernetes/ssl folder:
On the master node (which also runs the docker service):
[root@c7-k8s01 ~]# cd /etc/kubernetes/ssl
[root@c7-k8s01 ssl]# cp /root/kube-key/c7-k8s01/*.pem .
[root@c7-k8s01 ssl]# ln -s c7-k8s01.pem worker.pem
[root@c7-k8s01 ssl]# ln -s c7-k8s01-key.pem worker-key.pem
[root@c7-k8s01 ssl]# ls
apiserver-key.pem  apiserver.pem  c7-k8s01-key.pem  c7-k8s01.pem  ca.pem  worker-key.pem  worker.pem

slave node:
[root@c7-k8s02 ~]# mkdir -p /etc/kubernetes/ssl/
[root@c7-k8s02 ~]# cd /etc/kubernetes/ssl/
[root@c7-k8s02 ssl]# scp root@c7-k8s01:/root/kube-key/ca.pem .
root@c7-k8s01's password:
ca.pem 100% 1147 1.1KB/s 00:00

[root@c7-k8s02 ssl]# scp root@c7-k8s01:/root/kube-key/c7-k8s02/*.pem .    # note: use the folder matching this node
root@c7-k8s01's password:
c7-k8s02-key.pem 100% 1679 1.6KB/s 00:00
c7-k8s02.pem 100% 1094 1.1KB/s 00:00

[root@c7-k8s02 ssl]# ls
c7-k8s02-key.pem  c7-k8s02.pem  ca.pem

13. Create the symlinks:
[root@c7-k8s02 ssl]# ln -s c7-k8s02.pem worker.pem
[root@c7-k8s02 ssl]# ln -s c7-k8s02-key.pem worker-key.pem

14. Create a worker-kubeconfig file in YAML format:
[root@c7-k8s02 ssl]# vi /etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

15. Modify the docker, kubelet and kube-proxy service settings on each worker node:
[root@c7-k8s02 ssl]# vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=https://192.168.60.153:6443"

[root@c7-k8s02 ssl]# vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=c7-k8s02"
KUBELET_API_SERVER="--api-servers=https://192.168.60.153:6443"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--cluster-dns=192.168.60.153 --cluster-domain=tw.company --tls-cert-file=/etc/kubernetes/ssl/worker.pem --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"

[root@c7-k8s02 ssl]# vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"

Restart the related services:
[root@c7-k8s02 ssl]# systemctl restart docker kubelet kube-proxy
[root@c7-k8s02 ssl]# systemctl status docker kubelet kube-proxy -l

16. Confirm the SSL certificates on the worker nodes are configured correctly:
[root@c7-k8s02 ssl]# curl https://192.168.60.153:6443/api/v1/nodes --cert /etc/kubernetes/ssl/worker.pem --key /etc/kubernetes/ssl/worker-key.pem --cacert /etc/kubernetes/ssl/ca.pem

Create pods and services to check everything works (sample files can be downloaded from https://hub.docker.com/u/kubeguide/ for testing):
redis-frontend-controller.yaml
redis-frontend-service.yaml
redis-master-controller.yaml
redis-master-service.yaml
redis-slave-controller.yaml
redis-slave-service.yaml

kubectl create -f redis-xxxxxxxxx.yaml
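A generic way to confirm the sample objects came up (plain kubectl checks, not specific to these samples; the pod name below is a placeholder):
kubectl get rc,pods,services -o wide
kubectl describe pod <pod-name>    # if a pod stays in Pending or Error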

After deploying them, browsing to http://kube_host_ip:30001 should show the Hello World page with an input box, which means Kubernetes is working.

To monitor host resource usage in Kubernetes, install:
kube-ui-rc.yaml
kube-ui-svc.yaml

After installing it, browse to
http://kube_master_host_ip:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/
to see the resource usage of each node.


Pull an nginx image to try it out. First create two YAML files:
vi nginx-rc.yaml
Contents:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 1
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

vi nginx-svc.yaml
Contents:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30002    # NodePort is limited to the 30000~32767 range; creating the Service fails outside it
  selector:
    name: nginx

Run:
kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml

Check how the nginx objects came up:
[root@c7-k8s01 ~]# kubectl get rc nginx -o wide
NAME      DESIRED   CURRENT   READY     AGE       CONTAINER(S)   IMAGE(S)   SELECTOR
nginx     3         3         3         30m       nginx          nginx      name=nginx

[root@c7-k8s01 ~]# kubectl get service nginx -o wide
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE       SELECTOR
nginx     10.254.147.10   <nodes>       80/TCP    25m       name=nginx

Scale the RC up or down:
kubectl scale rc nginx --replicas=3    # scale the nginx RC to three replicas

Try connecting to
http://192.168.60.153:30002

When nginx is running properly, the welcome page appears.




2017年2月9日 星期四

Kubernetes etcd cluster setup

Kubernetes host environment:
Add to /etc/hosts:
192.168.60.153 c7-k8s01    #master
192.168.60.154 c7-k8s02
192.168.60.155 c7-k8s03
192.168.60.156 c7-k8s04

etcd cluster setup
Install the etcd package on every Kubernetes node:
yum install -y etcd

First cluster member:
/etc/etcd/etcd.conf
ETCD_NAME=etcd01
ETCD_DATA_DIR="/var/lib/etcd/etcd01"
ETCD_LISTEN_PEER_URLS="http://192.168.60.153:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.60.153:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.60.153:2380"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.60.153:2380,etcd02=http://192.168.60.154:2380,etcd03=http://192.168.60.155:2380"    # initial cluster member list
ETCD_INITIAL_CLUSTER_STATE="new"    # initial cluster state; new = bootstrap a new cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"    # cluster token (name)
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.60.153:2379"

Second cluster member:
ETCD_NAME=etcd02
ETCD_DATA_DIR="/var/lib/etcd/etcd02"
ETCD_LISTEN_PEER_URLS="http://192.168.60.154:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.60.154:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.60.154:2380"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.60.153:2380,etcd02=http://192.168.60.154:2380,etcd03=http://192.168.60.155:2380"    # initial cluster member list
ETCD_INITIAL_CLUSTER_STATE="new"    # initial cluster state; new = bootstrap a new cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"    # cluster token (name)
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.60.154:2379"

Third cluster member:
ETCD_NAME=etcd03
ETCD_DATA_DIR="/var/lib/etcd/etcd03"
ETCD_LISTEN_PEER_URLS="http://192.168.60.155:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.60.155:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.60.155:2380"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.60.153:2380,etcd02=http://192.168.60.154:2380,etcd03=http://192.168.60.155:2380"    # initial cluster member list
ETCD_INITIAL_CLUSTER_STATE="new"    # initial cluster state; new = bootstrap a new cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"    # cluster token (name)
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.60.155:2379"

Edit the systemd unit file:
/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd \
--name=\"${ETCD_NAME}\" \
--data-dir=\"${ETCD_DATA_DIR}\" \
--listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" \
--advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" \
--initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" \
--initial-cluster=\"${ETCD_INITIAL_CLUSTER}\"  \
--initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" \
--listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the etcd service:
    systemctl restart etcd.service
    systemctl enable etcd.service
    systemctl status etcd.service

Check the etcd cluster status:
# etcdctl member list
4be996007148b4a5: name=etcd01 peerURLs=http://192.168.60.153:2380 clientURLs=http://192.168.60.153:2379 isLeader=true
4f79c66abde833f4: name=etcd02 peerURLs=http://192.168.60.154:2380 clientURLs=http://192.168.60.154:2379 isLeader=false
5b938308c73bbf65: name=etcd03 peerURLs=http://192.168.60.155:2380 clientURLs=http://192.168.60.155:2379 isLeader=false

# etcdctl cluster-health
member 4be996007148b4a5 is healthy: got healthy result from http://192.168.60.153:2379
member 4f79c66abde833f4 is healthy: got healthy result from http://192.168.60.154:2379
member 5b938308c73bbf65 is healthy: got healthy result from http://192.168.60.155:2379

Point the master apiserver at the etcd cluster members:
vi /etc/kubernetes/apiserver
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.60.153:2379,http://192.168.60.154:2379,http://192.168.60.155:2379"


Adding a new etcd cluster node: etcd04
The original members were all initialized with ETCD_INITIAL_CLUSTER_STATE=new; a member added later updates its etcd.conf with the values printed by
etcdctl member add <ETCD_NAME> http://new_etcd_cluster:2380

Register the new node etcd04 in the existing cluster:
etcdctl member add etcd04 http://192.168.60.156:2380

Added member named etcd04 with ID 4054n89e532b87bc to cluster

ETCD_NAME="etcd04"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.60.153:2380,etcd02=http://192.168.60.154:2380,etcd03=http://192.168.60.155:2380,etcd04=http://192.168.60.156:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

The member add command prints ETCD_NAME, ETCD_INITIAL_CLUSTER and ETCD_INITIAL_CLUSTER_STATE values for the new etcd04; use them to edit its configuration.
Edit /etc/etcd/etcd.conf:
ETCD_NAME=etcd04
ETCD_DATA_DIR="/var/lib/etcd/etcd04"
ETCD_LISTEN_PEER_URLS="http://192.168.60.156:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.60.156:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.60.156:2380"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.60.153:2380,etcd02=http://192.168.60.154:2380,etcd03=http://192.168.60.155:2380,etcd04=http://192.168.60.156:2380"    # etcd.conf on every member must include the new member
ETCD_INITIAL_CLUSTER_STATE="existing"    # existing = join an already running cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.60.156:2379"

After starting the etcd service, the cluster should show the new node etcd04:
systemctl restart etcd.service
systemctl enable etcd.service

Check that the newly added cluster member appears:
etcdctl member list
4be996007148b4a5: name=etcd01
4f79c66abde833f4: name=etcd02
5b938308c73bbf65: name=etcd03
a153e6ec785245d6: name=etcd04

Confirm the etcd cluster health:
etcdctl cluster-health
member 4be996007148b4a5 is healthy
member 4f79c66abde833f4 is healthy
member 5b938308c73bbf65 is healthy
member a153e6ec785245d6 is healthy

Deploying Kubernetes on CentOS 7

Kubernetes host environment:
Add to /etc/hosts:
192.168.60.153 c7-k8s01    #master
192.168.60.154 c7-k8s02
192.168.60.155 c7-k8s03
192.168.60.156 c7-k8s04

Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

Install the ntp package:
yum install ntp
systemctl start ntpd
systemctl enable ntpd

Set up the Kubernetes master:
Install the Kubernetes packages:
yum install -y etcd kubernetes flannel

Edit /etc/etcd/etcd.conf:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.60.153:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.60.153:2379"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.60.153:2379"

Edit /etc/sysconfig/docker:
OPTIONS='--selinux-enabled=false --insecure-registry gcr.io --log-driver=journald'

Edit /etc/kubernetes/config:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.60.153:8080"

Edit /etc/kubernetes/apiserver:
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_MASTER="--master=http://192.168.60.153:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.60.153:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""

Edit /etc/kubernetes/kubelet:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=c7-k8s01"
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

Start the etcd, Docker and Kubernetes services:
for SERVICES in etcd docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Edit /etc/sysconfig/flanneld:
FLANNEL_ETCD_ENDPOINTS="http://192.168.60.153:2379"

Write the flannel network configuration into etcd:
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
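To confirm the key was written, read it back with the etcd v2 API:
etcdctl get /atomic.io/network/config
# {"Network":"172.17.0.0/16"}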

Then restart the etcd, flanneld, Docker and Kubernetes services:
for SERVICES in etcd flanneld docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Check that the Kubernetes master is working:
kubectl get nodes      # the Kubernetes master node name should appear

Problem encountered:
/var/log/messages keeps logging:
the cluster IP 172.16.0.1 for service kubernetes/default is not within the service CIDR 10.254.0.0/16; please recreate
kubectl get services --all-namespaces shows a cluster IP outside the configured service range,
so remove the old service with kubectl delete services/kubernetes; Kubernetes recreates it automatically.
Check again with kubectl get services --all-namespaces;
/var/log/messages no longer shows the error.
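For reference, the commands used above, in order:
kubectl get services --all-namespaces     # shows the kubernetes service with the out-of-range cluster IP
kubectl delete services/kubernetes        # Kubernetes recreates it inside 10.254.0.0/16
kubectl get services --all-namespaces     # verify the new cluster IP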


All of the following installation and configuration steps run on every Kubernetes node (steps tied to a specific node are noted):
Configure the Kubernetes nodes:
Install flannel and kubernetes:
yum install -y flannel kubernetes

Edit /etc/sysconfig/flanneld:
FLANNEL_ETCD_ENDPOINTS="http://192.168.60.153:2379"      # newer versions use the FLANNEL_ETCD_ENDPOINTS parameter

Edit /etc/kubernetes/config:
KUBE_MASTER="--master=http://192.168.60.153:8080"      # the Kubernetes master host IP

Edit /etc/kubernetes/kubelet    # on c7-k8s02
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=c7-k8s02"
KUBELET_API_SERVER="--api_servers=http://192.168.60.153:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

Edit /etc/kubernetes/kubelet    # on c7-k8s03
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=c7-k8s03"
KUBELET_API_SERVER="--api_servers=http://192.168.60.153:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

Enable and start kubelet, kube-proxy, docker and flanneld:
systemctl daemon-reload
for SERVICES in kubelet kube-proxy docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
or, to restart and check without re-enabling:
systemctl daemon-reload
for SERVICES in kubelet kube-proxy docker flanneld; do
    systemctl restart $SERVICES
    systemctl status $SERVICES
done

Problem encountered:
/var/log/messages keeps logging:
unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt

Solution: docker was already installed and configured before flannel was set up, which can produce the error above; in that case delete the old docker bridge interface:
systemctl stop docker
ip link delete docker0
systemctl start docker
ip -4 a|grep inet


Kubernetes' built-in host status monitoring page:
https://host_ip:4194

An Ubuntu VM on a Nutanix platform, migrated to VMware with a Veeam backup, throws errors when installing packages or running system updates

 mount: /var/lib/grub/esp: special device /dev/disk/by-id/scsi-SNUTANIX_VDISK_NFS_4_0_7672_2d41cbaa_025e_4fac_849c_9e620eff5bff-part1 does n...