

          Quickly Install a Pre-Production OpenShift 4 Cluster


          OpenShift is a PaaS developed by Red Hat that requires a paid subscription; its community edition is OKD. The two install almost identically, differing only in the operating system and parts of the upper application stack. This article covers installing OKD.

          Cluster environment

          Cluster host configuration

          Notes:

          The cluster hosts are ordinary physical PCs, real machines rather than virtual machines;

          Give each role no less than 16 GB of RAM, especially the machines acting as worker hosts: deploying "openshift-logging" alone consumes a fair amount of memory, so the more RAM the better;

          If you plan to deploy storage (such as Ceph) on the worker hosts, install the required non-system disks before building the cluster, or you will have to shut the machines down later to add them;

          Building the cluster downloads a great many images from Quay.io. On a slow network the installation will take a very long time, so set up a registry mirror or otherwise improve your connectivity.

          The installation walkthrough below is in six parts:

          Part 1: DHCP and DNS

          Before installing the cluster, DHCP and DNS must be configured.

          1. DHCP

          The cluster hosts install their operating system over PXE and obtain their network configuration from DHCP. Two things matter in the DHCP setup (I used the DHCP service that ships with Windows Server 2008):

          (1) Reserve each host's IP against its MAC address, which makes the DNS configuration easier.

          (2) Configure the PXE-related options.

          DHCP configuration

          The bootfile used here is "lpxelinux.0"; the reason is explained later.
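As a point of reference, the two DHCP requirements can be sketched in ISC dhcpd syntax (the original setup used the Windows Server 2008 DHCP console; the subnet, router, and MAC values here are assumptions, not taken from the original environment):

```
# Minimal ISC dhcpd sketch of the two points above (values are examples)
subnet 10.1.99.0 netmask 255.255.255.0 {
  option routers 10.1.99.1;               # example gateway
  option domain-name-servers 10.1.95.9;   # the CoreDNS server configured below
  next-server 10.1.95.10;                 # TFTP server
  filename "lpxelinux.0";                 # lpxelinux.0, so HTTP URLs work during PXE
}

# Point (1): reserve the IP against the MAC so DNS stays consistent
host master-1 {
  hardware ethernet 52:54:00:00:00:01;    # example MAC
  fixed-address 10.1.99.11;               # matches the master-1 A record
}
```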

          2. DNS

          I had planned to use the DNS service that ships with Windows Server 2008 as well, but I happened to have a CoreDNS container handy, so I used that instead. The zone file:

          $ORIGIN okd-infra.wumi.ai.     ; designates the start of this zone file in the namespace
          $TTL 1h                  ; default expiration time of all resource records without their own TTL value
          okd-infra.wumi.ai.  IN  SOA   ns.okd-infra.wumi.ai. host-1.example.xyz. ( 2007120710 1d 2h 4w 1h )
          okd-infra.wumi.ai.  IN  NS    ns                    ; ns.example.com is a nameserver for example.com
          okd-infra.wumi.ai.  IN  A     10.1.95.9             ; IPv4 address for example.com
          ns            IN  A     10.1.95.9             ; IPv4 address for ns.example.com
          
          bootstrap     IN  A     10.1.99.7 
          master-1      IN  A     10.1.99.11           
          master-2      IN  A     10.1.99.3           
          master-3      IN  A     10.1.99.8           
          worker-1      IN  A     10.1.99.14           
          worker-2      IN  A     10.1.99.15           
          worker-3      IN  A     10.1.99.16           
          
          etcd-0        IN  A     10.1.99.11          
          etcd-1        IN  A     10.1.99.3          
          etcd-2        IN  A     10.1.99.8          
          _etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-0 
          _etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-1 
          _etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-2 
          
          api              IN  A     10.1.95.9          ; host-1 haproxy
          api-int        IN  A     10.1.95.9          ; host-1 haproxy
          *.apps        IN  A     10.1.95.9          ; host-1 haproxy

          Part 2: HAProxy

          HAProxy load-balances access to the API server and the Ingress routers. The configuration file:

          /etc/haproxy/haproxy.cfg

          defaults
              mode                    tcp
              option                  dontlognull
              timeout connect      10s
              timeout client          1m
              timeout server          1m
              
              
          #---------------------------------------------------------------------
          frontend openshift-api-server
              bind	10.1.95.9:6443
              default_backend	api-backend
              mode tcp
          #---------------------------------------------------------------------
          backend	api-backend
              balance	source
              mode        tcp
          #    server	bootstrap	10.1.99.7:6443  check port 6443
              server	master-1	10.1.99.11:6443  check port 6443
              server	master-2	10.1.99.3:6443  check port 6443
              server	master-3	10.1.99.8:6443  check port 6443
          
          
          #---------------------------------------------------------------------
          frontend machine-config-server
              bind	10.1.95.9:22623
              default_backend	machine-config-server
              mode tcp
          #---------------------------------------------------------------------
          backend	machine-config-server
              balance	source
              mode        tcp
          #    server	bootstrap	10.1.99.7:22623  check port 22623
              server	master-1	10.1.99.11:22623  check port 22623
              server	master-2	10.1.99.3:22623  check port 22623
              server	master-3	10.1.99.8:22623  check port 22623
          
          
          #---------------------------------------------------------------------
          frontend ingress-http
              bind	10.1.95.9:80
              default_backend	ingress-http
              mode tcp
          #---------------------------------------------------------------------
          backend	ingress-http
              balance	source
              mode        tcp
              server	worker-1	10.1.99.14:80  check port 80
              server	worker-2	10.1.99.15:80  check port 80
              server	worker-3	10.1.99.16:80  check port 80
          
          
          #---------------------------------------------------------------------
          frontend ingress-https
              bind	10.1.95.9:443
              default_backend	ingress-https
              mode tcp
          #---------------------------------------------------------------------
          backend	ingress-https
              balance	source
              mode        tcp
              server	worker-1	10.1.99.14:443  check port 443
              server	worker-2	10.1.99.15:443  check port 443
              server	worker-3	10.1.99.16:443  check port 443
          
          
          #---------------------------------------------------------------------
          listen  admin_stats  # web stats page
              bind 0.0.0.0:8081  
              mode http
              log 127.0.0.1 local0 err
              stats refresh 10s
              stats uri /haproxy
              stats realm welcome login\ Haproxy
              stats hide-version
              stats admin if TRUE

          In the initial configuration, do not comment out the bootstrap entries in "backend api-backend" and "backend machine-config-server". During installation, comment them out only after "./openshift-install --dir=<installation_directory> wait-for bootstrap-complete --log-level=info" reports that the bootstrap can be removed.

          Part 3: Download the required software and prepare the install configuration

          1. Download the required software

          (1) Download the cluster installer, openshift-install, from "https://github.com/openshift/okd/releases". It drives OpenShift 4 cluster deployments on public clouds and on your own infrastructure.

          (2) Download the latest cluster management tool, oc, from https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/. oc connects to and manages the cluster from the command line.

          2. Customize the install configuration file

          Installing OpenShift 4 is completely different from OpenShift 3: before installing, you write an install configuration file. A sample follows (the file must be named install-config.yaml):

          apiVersion: v1
          baseDomain: wumi.ai
          compute:
          - hyperthreading: Enabled
            name: worker
            replicas: 0   # must be 0 when deploying onto self-managed physical machines
          controlPlane:
            hyperthreading: Enabled
            name: master
            replicas: 3
          metadata:
            name: okd-infra
          networking:
            clusterNetwork:
            - cidr: 10.128.0.0/14
              hostPrefix: 23
            networkType: OpenShiftSDN
            serviceNetwork:
            - 172.30.0.0/16
          platform:
            none: {}
          fips: false
          pullSecret: 'pullsecret obtained from redhat'
          sshKey: 'sshkey that is created by ssh-keygen command'

          Three of the hosts serve as masters, mainly running the apiserver, the etcd cluster, and so on.

          "pullSecret": obtained from the Red Hat site. The images needed to deploy the cluster live on Quay.io, and this key authenticates the pulls.

          "sshKey": an SSH public key, generated with "ssh-keygen". From any host holding the matching private key, you can ssh straight into the cluster machines without a password, which is convenient for debugging the cluster.
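A quick sketch of producing both values (the key path here is an example, not the author's):

```shell
# Generate a passphrase-less RSA key pair at an example path
ssh-keygen -t rsa -b 4096 -N '' -f ./okd_id_rsa -q
# The contents of the public key go into the sshKey field of install-config.yaml
cat ./okd_id_rsa.pub
```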

          Part 4: Generate the Kubernetes manifests and Ignition configs

          1. Generate the Kubernetes manifests

          Create a directory "config-install", copy the "install-config.yaml" written in the previous step into it, and run:

          ./openshift-install create manifests --dir=config-install

          The installer writes the manifest files into "config-install" (install-config.yaml is consumed in the process).

          We do not want user pods running on the masters, so edit "config-install/manifests/cluster-scheduler-02-config.yml", set "mastersSchedulable" to "false", save, and exit.
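After the edit, the relevant portion of cluster-scheduler-02-config.yml should look roughly like this (fields other than mastersSchedulable omitted):

```
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false   # keep user pods off the control plane
```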

          2. Generate the Ignition configs, which customize CoreOS (every host in an OpenShift 4 cluster must run CoreOS):

           ./openshift-install create ignition-configs --dir=config-install

          The installer writes the Ignition files into "config-install" (the manifest files are consumed in the process).

          Part 5: Set up the PXE install environment

          With the Ignition files and the OS images in hand, configure the PXE environment. The Ignition files, kernel, and initrd are downloaded by the cluster hosts over HTTP, so an HTTP server and a TFTP server both need to be configured.

          1. TFTP server

          The TFTP configuration has two parts:

          The PXE bootfile must be "lpxelinux.0"; that is what enables downloads over HTTP;

          The pxelinux.cfg configuration:

          # D-I config version 2.0
          # search path for the c32 support libraries (libcom32, libutil etc.)
          path debian-installer/amd64/boot-screens/
          include debian-installer/amd64/boot-screens/menu.cfg
          default debian-installer/amd64/boot-screens/vesamenu.c32
          prompt 0
          timeout 0
          
          label fedora-coreos-bootstrap
            KERNEL http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-kernel-x86_64
            APPEND ip=dhcp initrd=http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-initramfs.x86_64.img \
            console=tty0 console=ttyS0 coreos.inst.install_dev=/dev/sda \
            coreos.inst.ignition_url=http://10.1.95.10:8000/bootstrap.ign \
            coreos.live.rootfs_url=http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-rootfs.x86_64.img

          label fedora-coreos-master
            KERNEL http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-kernel-x86_64
            APPEND ip=dhcp initrd=http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-initramfs.x86_64.img \
            console=tty0 console=ttyS0 coreos.inst.install_dev=/dev/sda \
            coreos.inst.ignition_url=http://10.1.95.10:8000/master.ign \
            coreos.live.rootfs_url=http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-rootfs.x86_64.img

          label fedora-coreos-worker
            KERNEL http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-kernel-x86_64
            APPEND ip=dhcp initrd=http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-initramfs.x86_64.img \
            console=tty0 console=ttyS0 coreos.inst.install_dev=/dev/sda \
            coreos.inst.ignition_url=http://10.1.95.10:8000/worker.ign \
            coreos.live.rootfs_url=http://10.1.95.10:8000/fedora-coreos-32.20200923.3.0-live-rootfs.x86_64.img

          2. HTTP server

          I used nginx for the HTTP service. There is nothing notable in the nginx configuration itself: just put the required files in "/var/www/html", and the cluster hosts will request them during the PXE install:

          aneirin@vm-1:/var/www/html$ ls -lh
          total 732M
          -rwxrwxrwx 1 root root 297K Oct 16 15:32 bootstrap.ign
          -rwxrwxrwx 1 root root  70M Oct 15 10:44 fedora-coreos-32.20200923.3.0-live-initramfs.x86_64.img
          -rwxrwxrwx 1 root root  12M Oct 15 10:44 fedora-coreos-32.20200923.3.0-live-kernel-x86_64
          -rwxrwxrwx 1 root root 651M Oct 15 10:45 fedora-coreos-32.20200923.3.0-live-rootfs.x86_64.img
          -rwxrwxrwx 1 root root  11K Sep  5  2019 index.html              # ships with nginx
          -rwxrwxrwx 1 root root  612 Apr 22 11:50 index.nginx-debian.html # ships with nginx
          -rwxrwxrwx 1 root root 1.9K Oct 16 15:32 master.ign
          -rwxrwxrwx 1 root root 1.9K Oct 16 15:32 worker.ign
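For reference, a minimal nginx sketch that would serve this directory on port 8000, matching the http://10.1.95.10:8000/... URLs in the pxelinux.cfg above (the actual nginx config was not shown in the original, so this is an assumption):

```
# Minimal sketch: serve the PXE images and ignition files over HTTP on port 8000
server {
    listen 8000;
    root /var/www/html;
    autoindex on;    # handy for eyeballing what is being served
}
```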

          Part 6: Install the cluster

          With PXE ready, install the operating system on the cluster hosts. Install the seven machines in sequence (bootstrap, then masters, then workers), but there is no need to wait for one to finish before starting the next; installing them at the same time works fine.

          Watch the bootstrap phase with "./openshift-install --dir=<installation_directory> wait-for bootstrap-complete --log-level=info". When it prompts you to remove the bootstrap, delete the bootstrap entries from the HAProxy configuration; the bootstrap host's job is then done.

          1. Set up login credentials

          export KUBECONFIG=<installation_directory>/auth/kubeconfig

          This line can go straight into "~/.bashrc", so the KUBECONFIG environment variable is set on every new shell login.
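For example, using the installation directory shown in the install output later in this article (substitute your own path):

```shell
# Persist the kubeconfig path across logins; the path is the example
# from this walkthrough, not something your system will have by default
echo 'export KUBECONFIG=$HOME/okd4/config-install/auth/kubeconfig' >> ~/.bashrc
```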

          2. Connect to the cluster and approve CSRs

          With the previous step done, you can connect to the cluster as "system:admin", a user with full administrative power over the cluster (disable this account once installation finishes; it is a security liability). A number of objects generate CSRs that must be approved before component installation can continue:

          oc get csr                               #list the CSRs awaiting approval
          oc adm certificate approve <csr_name>    #approve a specific CSR

          When that is done, the output looks like this:

          approving CSR

          3. Wait for the clusteroperators to finish installing

          The OKD cluster infrastructure leans heavily on the various "clusteroperators"; wait until the "AVAILABLE" column shows "True" for every one of them.

          clusteroperators

          4. Configure storage for the image-registry

          When OKD 4 is deployed outside the public clouds, the image-registry has no ready-made storage to use. Outside production, "emptyDir" can serve as temporary storage (images are lost whenever the registry restarts, so never use it in production); this is enough to enable the in-cluster registry:

          oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

          Done:

          aneirin@host-1:~$ ./openshift-install --dir=config-install wait-for install-complete
          INFO Waiting up to 30m0s for the cluster at https://api.okd-infra.wumi.ai:6443 to initialize... 
          INFO Waiting up to 10m0s for the openshift-console route to be created... 
          INFO Install complete!                            
          INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/aneirin/okd4/config-install/auth/kubeconfig' 
          INFO Access the OpenShift web-console here: https://console-openshift-console.apps.okd-infra.wumi.ai 
          INFO Login to the console with user: "kubeadmin", and password: "CaEJY-myzAi-R7Wtj-XXXX" 
          INFO Time elapsed: 1s

          This article is only the first step of a long march with OpenShift 4. Plenty of work remains, such as monitoring, logging, and storage. Stay tuned!


          This article follows Red Hat's official documentation for installing OpenShift 4.3 on bare metal. I only had one PC with 64 GB of RAM, running the free edition of VMware vSphere 6.7 for this test, so I tried installing with half the minimum memory that the OCP documentation requires. Notes below.


          It turns out Toutiao does not support markdown, which is unfriendly to programmers: code pasted in loses all of its formatting. Jianshu does better on this point:

          https://www.jianshu.com/p/7c0c2affadb8

          1. The OCP installation process

          Red Hat's official documentation describes the installation flow as follows:

          1. The bootstrap machine boots and prepares the resources the masters need
          2. The masters fetch those resources from the bootstrap and finish booting
          3. The masters build an etcd cluster through the bootstrap
          4. The bootstrap starts a temporary Kubernetes control plane backed by that etcd cluster
          5. The temporary control plane starts the production control plane on the master nodes
          6. The temporary control plane shuts down and hands control to the production control plane
          7. The bootstrap injects the OCP components into the production control plane
          8. The installer shuts down the bootstrap
          9. The control plane deploys the compute nodes
          10. The control plane installs the remaining services as operators

          2. Prepare the servers

          The server plan:

          • 3 control plane nodes, running etcd, the control plane components, and the infra components; resources are tight, so there is no dedicated DNS server and names are resolved through hosts files;
          • 2 compute nodes, running the actual workloads;
          • 1 bootstrap node, which drives the installation;
          • 1 misc/lb node, used to stage installation resources, start the bootstrap, and serve as the load balancer.

          Hostname   vCPU  RAM  HDD   IP              FQDN
          misc/lb    4     8g   120g  192.168.128.30  misc.ocptest.ipincloud.com / lb.ocptest.ipincloud.com
          bootstrap  4     8g   120g  192.168.128.31  bootstrap.ocptest.ipincloud.com
          master1    4     8g   120g  192.168.128.32  master1.ocptest.ipincloud.com
          master2    4     8g   120g  192.168.128.33  master2.ocptest.ipincloud.com
          master3    4     8g   120g  192.168.128.34  master3.ocptest.ipincloud.com
          worker1    2     4g   120g  192.168.128.35  worker1.ocptest.ipincloud.com
          worker2    2     4g   120g  192.168.128.36  worker2.ocptest.ipincloud.com

          3. Prepare the network

          The API server and the Ingress share one load balancer, the misc/lb node, and DNS records must be created for it. "ocptest" is the cluster name and "ipincloud.com" is the base domain. These settings go into the tasks/ templates of the ansible playbook; see https://github.com/scwang18/ocp4-upi-helpernode.git

          • DNS records

          Kubernetes API  api.ocptest.ipincloud.com                    Points to the load balancer for the control plane nodes. Must be resolvable from outside the cluster and from every node in it.
          Kubernetes API  api-int.ocptest.ipincloud.com                Points to the load balancer for the control plane nodes. Must be resolvable from every node in the cluster.
          Routes          *.apps.ocptest.ipincloud.com                 Wildcard record pointing to the ingress load balancer. Must be resolvable from outside the cluster and from every node in it.
          etcd            etcd-<index>.ocptest.ipincloud.com           Points to each etcd node. Must be resolvable from every node in the cluster.
          etcd            _etcd-server-ssl._tcp.ocptest.ipincloud.com  etcd serves on port 2380, so each etcd node needs an SRV record with priority 0, weight 10, and port 2380, as in the table below.

          • etcd SRV records

          # The records below are required; the bootstrap uses them to configure etcd service discovery automatically

          #_service._proto.name.                             TTL   class SRV priority weight port target.
          _etcd-server-ssl._tcp.<cluster_name>.<base_domain> 86400 IN SRV 0 10 2380 etcd-0.<cluster_name>.<base_domain>.
          _etcd-server-ssl._tcp.<cluster_name>.<base_domain> 86400 IN SRV 0 10 2380 etcd-1.<cluster_name>.<base_domain>.
          _etcd-server-ssl._tcp.<cluster_name>.<base_domain> 86400 IN SRV 0 10 2380 etcd-2.<cluster_name>.<base_domain>.

          • Create an SSH private key and add it to the ssh agent

          With a passwordless SSH private key, you can log in to the master nodes as the core user, for debugging the installation or for disaster recovery on the cluster.

          (1) On the misc node, run the following command to create the key pair:

          ssh-keygen -t rsa -b 4096 -N '' 
          

          The command above creates id_rsa and id_rsa.pub in the ~/.ssh/ directory.

          (2) Start the ssh agent and add the passwordless private key to it:

          eval "$(ssh-agent -s)"
          ssh-add ~/.ssh/id_rsa
          
          

          In the next step, installing OCP requires handing this SSH public key to the installer through the configuration file.

          Because the machines are provisioned by hand, the public key also needs to be copied to every cluster node, so this machine can log in to them without a password:

          #Copy the id_rsa.pub just generated under ~/.ssh to the ~/.ssh directory of each cluster node you want to log in to
          scp ~/.ssh/id_rsa.pub root@192.168.128.31:~/.ssh/
          #Then run the following on the cluster node to append the public key to ~/.ssh/authorized_keys
          cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
          

          4. Get the installers

          A Red Hat account is needed to download the trial installers; the sign-up details are omitted here. https://cloud.redhat.com/openshift/install/metal/user-provisioned

          • Download the installers
          rm -rf /data/pkg
          mkdir -p /data/pkg
          cd /data/pkg
          
          #OCP installer
          #wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux-4.3.0.tar.gz
          
          #OCP client
          #wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux-4.3.0.tar.gz
          
          #RHCOS installer ISO
          wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/rhcos-4.3.0-x86_64-installer.iso
          
          #RHCOS BIOS raw image
          wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/rhcos-4.3.0-x86_64-metal.raw.gz
          
          #If you install from the ISO file, the two files below are not needed
          
          #RHCOS installer kernel, for iPXE-based installs
          wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/rhcos-4.3.0-x86_64-installer-kernel
          
          #RHCOS initramfs image, for iPXE-based installs
          wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/rhcos-4.3.0-x86_64-installer-initramfs.img
          

          5. Prepare the misc helper machine

          A helper setup adapted from Wang Zheng's scripts makes it easy to run the LB, DHCP, PXE, DNS, and HTTP services on the helper machine. (1) Install ansible and git:

          yum -y install ansible git
          

          (2) Pull the playbook from github:

          cd /data/pkg
          git clone https://github.com/scwang18/ocp4-upi-helpernode.git
          

          (3) Edit the playbook's variable file to match your own network plan:

          [root@centos75 pkg]# cd /data/pkg/ocp4-upi-helpernode/
          [root@centos75 ocp4-upi-helpernode]# cat vars-static.yaml
          [root@misc pkg]# cat vars-static.yaml
          ---
          staticips: true
          named: true
          helper:
            name: "helper"
            ipaddr: "192.168.128.30"
            networkifacename: "ens192"
          dns:
            domain: "ipincloud.com"
            clusterid: "ocptest"
            forwarder1: "192.168.128.30"
            forwarder2: "192.168.128.30"
            registry:
              name: "registry"
              ipaddr: "192.168.128.30"
            yum:
              name: "yum"
              ipaddr: "192.168.128.30"
          bootstrap:
            name: "bootstrap"
            ipaddr: "192.168.128.31"
          masters:
            - name: "master1"
              ipaddr: "192.168.128.32"
            - name: "master2"
              ipaddr: "192.168.128.33"
            - name: "master3"
              ipaddr: "192.168.128.34"
          workers:
            - name: "worker1"
              ipaddr: "192.168.128.35"
            - name: "worker2"
              ipaddr: "192.168.128.36"
          force_ocp_download: false
          
          ocp_bios: "file:///data/pkg/rhcos-4.3.0-x86_64-metal.raw.gz"
          ocp_initramfs: "file:///data/pkg/rhcos-4.3.0-x86_64-installer-initramfs.img"
          ocp_install_kernel: "file:///data/pkg/rhcos-4.3.0-x86_64-installer-kernel"
          ocp_client: "file:///data/pkg/openshift-client-linux-4.3.0.tar.gz"
          ocp_installer: "file:///data/pkg/openshift-install-linux-4.3.0.tar.gz"
          ocp_filetranspiler: "file:///data/pkg/filetranspiler-master.zip"
          registry_server: "registry.ipincloud.com:8443"
          [root@misc pkg]#
          
          

          (4) Run the ansible playbook:

          ansible-playbook -e @vars-static.yaml tasks/main.yml
          

          6. Prepare the docker environment

          # On a machine with unrestricted internet access, package up the image files we need
          
          #rm -rf /data/ocp4
          mkdir -p /data/ocp4
          cd /data/ocp4
          
          # The upstream script is awkward; skip it and use my modified version below
          # wget https://raw.githubusercontent.com/wangzheng422/docker_env/dev/redhat/ocp4/4.3/scripts/build.dist.sh
          
          yum -y install podman docker-distribution pigz skopeo docker buildah jq python3-pip 
          
          pip3 install yq
          
          # https://blog.csdn.net/ffzhihua/article/details/85237411
          wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
          rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
          
          systemctl start docker
          
          docker login -u wuliangye2019 -p Red@123! registry.redhat.io
          docker login -u wuliangye2019 -p Red@123! registry.access.redhat.com
          docker login -u wuliangye2019 -p Red@123! registry.connect.redhat.com
          
          podman login -u wuliangye2019 -p Red@123! registry.redhat.io
          podman login -u wuliangye2019 -p Red@123! registry.access.redhat.com
          podman login -u wuliangye2019 -p Red@123! registry.connect.redhat.com
          
          # to download the pull-secret.json, open following link
          # https://cloud.redhat.com/openshift/install/metal/user-provisioned
          cat << 'EOF' > /data/pull-secret.json
          {"auths":{"cloud.openshift.com":{"auth":"xxxxxxxxxxx"}}}
          EOF
          
          

          Create the build.dist.sh file:

          #!/usr/bin/env bash
          
          set -e
          set -x
          
          var_date=$(date '+%Y-%m-%d')
          echo $var_date
          #The following block does not need to run every time
          #cat << EOF >>  /etc/hosts
          #127.0.0.1 registry.ipincloud.com
          #EOF
          
          
          #mkdir -p /etc/crts/
          #cd /etc/crts
          #openssl req \
          #   -newkey rsa:2048 -nodes -keyout ipincloud.com.key \
          #   -x509 -days 3650 -out ipincloud.com.crt -subj \
          #   "/C=CN/ST=GD/L=SZ/O=Global Security/OU=IT Department/CN=*.ipincloud.com"
          
          #cp /etc/crts/ipincloud.com.crt /etc/pki/ca-trust/source/anchors/
          #update-ca-trust extract
          
          systemctl stop docker-distribution
          
          rm -rf /data/registry
          mkdir -p /data/registry
          cat << EOF > /etc/docker-distribution/registry/config.yml
          version: 0.1
          log:
            fields:
              service: registry
          storage:
              cache:
                  layerinfo: inmemory
              filesystem:
                  rootdirectory: /data/registry
              delete:
                  enabled: true
          http:
              addr: :8443
              tls:
                 certificate: /etc/crts/ipincloud.com.crt
                 key: /etc/crts/ipincloud.com.key
          EOF
          systemctl restart docker
          systemctl enable docker-distribution
          
          systemctl restart docker-distribution
          
          build_number_list=$(cat << EOF
          4.3.0
          EOF
          )
          mkdir -p /data/ocp4
          cd /data/ocp4
          
          install_build() {
              BUILDNUMBER=$1
              echo ${BUILDNUMBER}
              
              mkdir -p /data/ocp4/${BUILDNUMBER}
              cd /data/ocp4/${BUILDNUMBER}
          
              #Download and install the openshift client and installer. Needed only on the first run; the helper machine's ansible setup has already done this
              #wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${BUILDNUMBER}/release.txt
          
              #wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${BUILDNUMBER}/openshift-client-linux-${BUILDNUMBER}.tar.gz
              #wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${BUILDNUMBER}/openshift-install-linux-${BUILDNUMBER}.tar.gz
          
              #Extract the installer and client to a directory on the PATH. Needed only on the first run
              #tar -xzf openshift-client-linux-${BUILDNUMBER}.tar.gz -C /usr/local/bin/
              #tar -xzf openshift-install-linux-${BUILDNUMBER}.tar.gz -C /usr/local/bin/
              
              export OCP_RELEASE=${BUILDNUMBER}
              export LOCAL_REG='registry.ipincloud.com:8443'
              export LOCAL_REPO='ocp4/openshift4'
              export UPSTREAM_REPO='openshift-release-dev'
              export LOCAL_SECRET_JSON="/data/pull-secret.json"
              export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=${LOCAL_REG}/${LOCAL_REPO}:${OCP_RELEASE}
              export RELEASE_NAME="ocp-release"
          
              oc adm release mirror -a ${LOCAL_SECRET_JSON} \
              --from=quay.io/${UPSTREAM_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-x86_64 \
              --to-release-image=${LOCAL_REG}/${LOCAL_REPO}:${OCP_RELEASE} \
              --to=${LOCAL_REG}/${LOCAL_REPO}
          
          }
          
          while read -r line; do
              install_build $line
          done <<< "$build_number_list"
          
          cd /data/ocp4
          
          #wget -O ocp4-upi-helpernode-master.zip https://github.com/wangzheng422/ocp4-upi-helpernode/archive/master.zip
          
          #Commented out because the registry image in quay.io/wangzheng422 is a v1 registry and cannot coexist with v2
          #podman pull quay.io/wangzheng422/filetranspiler
          #podman save quay.io/wangzheng422/filetranspiler | pigz -c > filetranspiler.tgz
          
          #podman pull docker.io/library/registry:2
          #podman save docker.io/library/registry:2 | pigz -c > registry.tgz
          
          systemctl start docker
          
          docker login -u wuliangye2019 -p Red@123! registry.redhat.io
          docker login -u wuliangye2019 -p Red@123! registry.access.redhat.com
          docker login -u wuliangye2019 -p Red@123! registry.connect.redhat.com
          
          podman login -u wuliangye2019 -p Red@123! registry.redhat.io
          podman login -u wuliangye2019 -p Red@123! registry.access.redhat.com
          podman login -u wuliangye2019 -p Red@123! registry.connect.redhat.com
          
          # The commands below run for 2-3 hours; be patient...
          
          # build operator catalog
          podman login registry.ipincloud.com:8443 -u root -p Scwang18
          oc adm catalog build \
              --appregistry-endpoint https://quay.io/cnr \
              --appregistry-org redhat-operators \
              --to=${LOCAL_REG}/ocp4-operator/redhat-operators:v1
              
          oc adm catalog mirror \
              ${LOCAL_REG}/ocp4-operator/redhat-operators:v1 \
              ${LOCAL_REG}/operator
          
          #cd /data
          #tar cf - registry/ | pigz -c > registry.tgz
          
          #cd /data
          #tar cf - ocp4/ | pigz -c > ocp4.tgz
          
          

          Run the build.dist.sh script:

          One huge pitfall here: pulling the release images from quay.io fetches more than 5 GB and usually fails at least once partway through. After each failure, rerunning build.dist.sh used to delete the existing local registry and start from scratch, wasting a great deal of time. There is actually no need to delete anything: oc adm release mirror automatically skips images that already exist. A lesson paid for in blood and tears.

          bash build.dist.sh
          
          
          
          After oc adm release mirror completes, the local mirror has been built from the official registry. Record the information it prints, in particular the imageContentSources section, which will go into install-config.yaml later:

          
          Success
          Update image:  registry.ipincloud.com:8443/ocp4/openshift4:4.3.0
          Mirror prefix: registry.ipincloud.com:8443/ocp4/openshift4
          
          To use the new mirrored repository to install, add the following section to the install-config.yaml:
          
          imageContentSources:
          - mirrors:
            - registry.ipincloud.com:8443/ocp4/openshift4
            source: quay.io/openshift-release-dev/ocp-release
          - mirrors:
            - registry.ipincloud.com:8443/ocp4/openshift4
            source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
          
          
          To use the new mirrored repository for upgrades, use the following to create an ImageContentSourcePolicy:
          
          apiVersion: operator.openshift.io/v1alpha1
          kind: ImageContentSourcePolicy
          metadata:
            name: example
          spec:
            repositoryDigestMirrors:
            - mirrors:
              - registry.ipincloud.com:8443/ocp4/openshift4
              source: quay.io/openshift-release-dev/ocp-release
            - mirrors:
              - registry.ipincloud.com:8443/ocp4/openshift4
              source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
          

          The commands below do not need to be run manually; build.dist.sh has already run them:

          oc adm release mirror -a /data/pull-secret.json --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 --to-release-image=registry.ipincloud.com:8443/ocp4/openshift4:4.3.0 --to=registry.ipincloud.com:8443/ocp4/openshift4    
          
          podman login registry.ipincloud.com:8443 -u root -p Scwang18
          oc adm catalog build \
              --appregistry-endpoint https://quay.io/cnr \
              --appregistry-org redhat-operators \
              --to=registry.ipincloud.com:8443/ocp4-operator/redhat-operators:v1
              
          oc adm catalog mirror \
              registry.ipincloud.com:8443/ocp4-operator/redhat-operators:v1 \
              registry.ipincloud.com:8443/operator
          
          #If oc adm catalog mirror fails, it generates a mapping.txt file; remove the lines that failed and re-run the remainder like this:
          oc image mirror -a /data/pull-secret.json -f /data/mapping-ok.txt
          
          
          oc image mirror quay.io/external_storage/nfs-client-provisioner:latest registry.ipincloud.com:8443/ocp4/openshift4/nfs-client-provisioner:latest
          
          oc image mirror quay.io/external_storage/nfs-client-provisioner:latest registry.ipincloud.com:8443/quay.io/external_storage/nfs-client-provisioner:latest
          
          #查看鏡像的sha
          curl -v --silent -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X GET  https://registry.ipincloud.com:8443/v2/ocp4/openshift4/nfs-client-provisioner/manifests/latest 2>&1 | grep Docker-Content-Digest | awk '{print ($3)}'
          
          #刪除鏡像摘要
          curl -v --silent -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X DELETE https://registry.ipincloud.com:8443/v2/ocp4/openshift4/nfs-client-provisioner/manifests/sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
          
          #回收鏡像空間
          podman exec -it  mirror-registry /bin/registry garbage-collect  /etc/docker/registry/config.yml
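After mirroring, it helps to spot-check that a tag actually landed in the local registry through the Docker Registry v2 API, the same endpoint the digest queries above use. A small sketch (manifest_url is a hypothetical helper; the registry name and repository are the ones used in this document):

```shell
# Build the Registry v2 manifest URL for a repository/tag (hypothetical helper).
manifest_url() {  # usage: manifest_url <registry> <repo> <tag>
  printf 'https://%s/v2/%s/manifests/%s\n' "$1" "$2" "$3"
}

# Example probe against the mirror registry (run where its CA is trusted):
# curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
#   -o /dev/null -w '%{http_code}\n' \
#   "$(manifest_url registry.ipincloud.com:8443 ocp4/openshift4 4.3.0)"
```

An HTTP 200 means the manifest is present; 404 means that tag was not mirrored.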
          
          

          7. Create the installer configuration files

          (1) Create the installer directory

          rm -rf /data/install
          mkdir -p /data/install
          cd /data/install
          

          (2) Customize the install-config.yaml file

          • Fill in pullSecret
          [root@misc data]# cat /data/pull-secret.json
          {"auths":{"cloud.openshift.com":{"auth":"omitted"}}}
          
          • Add sshKey (the contents of the public key created in step 3.1)
          cat ~/.ssh/id_rsa.pub
          
          • additionalTrustBundle (the certificate generated when the mirror registry was created)
          [root@misc crts]# cat /etc/crts/ipincloud.com.crt
          -----BEGIN CERTIFICATE-----
          xxx omitted
          -----END CERTIFICATE-----
          
          • Add a proxy

          In production the cluster does not need a direct Internet connection; instead, a proxy can be configured for the cluster in the install-config.yaml file.

          For this test, to speed up downloads from the Internet, I set up a v2ray server on AWS beforehand, with the misc server acting as the v2ray client; the setup is described in a separate article.

          • When retrying the installation repeatedly, you must remove the whole installer directory (rm -rf install), not just its contents (rm -rf install/*): the latter leaves behind the hidden file .openshift_install_state.json, which can cause: x509: certificate has expired or is not yet valid.
          • In the docs and blog examples, the cidr in install-config.yaml is a 10.x network. Not having read the docs carefully, I took it to be the node network segment, which caused the most baffling error of the whole process: no matches for kind MachineConfig.
          • The final file contents are as follows:
          [root@centos75 install]# vi install-config.yaml
          apiVersion: v1
          baseDomain: ipincloud.com
          proxy:
            httpProxy: http://192.168.128.30:8001
            httpsProxy: http://192.168.128.30:8001
          compute:
          - hyperthreading: Enabled
            name: worker
            replicas: 0
          controlPlane:
            hyperthreading: Enabled
            name: master
            replicas: 3
          metadata:
            name: ocptest
          networking:
            clusterNetwork:
            - cidr: 10.128.0.0/14
              hostPrefix: 23
            networkType: OpenShiftSDN
            serviceNetwork:
            - 172.30.0.0/16
          platform:
            none: {}
          fips: false
          pullSecret: '{"auths":{"omitted'
          additionalTrustBundle: |
            -----BEGIN CERTIFICATE-----
            omitted; note these lines must be indented two spaces
            -----END CERTIFICATE-----
          imageContentSources:
          - mirrors:
            - registry.ipincloud.com:8443/ocp4/openshift4
            source: quay.io/openshift-release-dev/ocp-release
          - mirrors:
            - registry.ipincloud.com:8443/ocp4/openshift4
            source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
          
          

          (3) Back up the customized install-config.yaml file so it can be reused later

          cd /data/install
          cp install-config.yaml  ../install-config.yaml.20200205
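Since the installer consumes install-config.yaml and leaves hidden state behind, retries go more smoothly when the whole directory is rebuilt from the backup. A sketch (reset_install_dir is a hypothetical helper; wiping the directory rather than just its visible contents also removes the hidden .openshift_install_state.json mentioned earlier):

```shell
# Rebuild a clean install dir from a backed-up install-config.yaml.
reset_install_dir() {  # usage: reset_install_dir <backup-file> <install-dir>
  local backup="$1" dir="$2"
  rm -rf "$dir"            # removes hidden state files too
  mkdir -p "$dir"
  cp "$backup" "$dir/install-config.yaml"
}

# Example: reset_install_dir /data/install-config.yaml.20200205 /data/install
```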
          

          8. Create the Kubernetes manifests and Ignition config files

          (1) Generate the Kubernetes manifests

          openshift-install create manifests --dir=/data/install
          

          Note: when specifying the directory that contains install-config.yaml, use an absolute path.

          (2) Modify manifests/cluster-scheduler-02-config.yml to prevent pods from being scheduled onto control-plane nodes

          Red Hat's official installation docs note that Kubernetes does not support using the ingress load balancer to reach pods on control-plane nodes.

          a. Open manifests/cluster-scheduler-02-config.yml
          b. Find the mastersSchedulable parameter and set it to false
          c. Save and exit.
          
          vi /data/install/manifests/cluster-scheduler-02-config.yml
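The edit can also be done non-interactively. A sketch using sed (set_masters_unschedulable is a hypothetical helper; the mastersSchedulable field name comes from the scheduler manifest described above):

```shell
# Flip mastersSchedulable from true to false in the scheduler manifest.
set_masters_unschedulable() {  # usage: set_masters_unschedulable <file>
  sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' "$1"
}

# Example: set_masters_unschedulable /data/install/manifests/cluster-scheduler-02-config.yml
```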
          
          

          (3) Create the Ignition config files

          Note: generating the Ignition configs deletes the install-config.yaml file, so be sure to back it up first.

          openshift-install create ignition-configs --dir=/data/install
          

          (4) Copy the Ignition config files into the HTTP server directory for use during installation

          cd /data/install
          \cp -f bootstrap.ign /var/www/html/ignition/bootstrap.ign
          \cp -f master.ign /var/www/html/ignition/master1.ign
          \cp -f master.ign /var/www/html/ignition/master2.ign
          \cp -f master.ign /var/www/html/ignition/master3.ign
          \cp -f worker.ign /var/www/html/ignition/worker1.ign
          \cp -f worker.ign /var/www/html/ignition/worker2.ign
          
          cd /var/www/html/ignition/
          chmod 755 *.ign
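Before booting any node, it is worth confirming the ignition files are actually reachable over HTTP. A sketch (ign_url is a hypothetical helper; the base URL and node names match the ones used throughout this document):

```shell
# Compose the URL a node will fetch its ignition config from.
ign_url() {  # usage: ign_url <base-url> <node>
  printf '%s/ignition/%s.ign\n' "$1" "$2"
}

# Example probe for every node:
# for n in bootstrap master1 master2 master3 worker1 worker2; do
#   curl -sfo /dev/null "$(ign_url http://192.168.128.30:8080 "$n")" && echo "$n ok"
# done
```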
          
          

          With the required configuration files in place, the next step is creating the nodes.

          9. Customize the RHCOS ISO

          The boot parameters must be modified at install time. Typing them in by hand on every machine is tedious and error-prone, so we use genisoimage to build a customized installation image for each machine.

          #Install the image-building tools
          yum -y install genisoimage libguestfs-tools
          systemctl start libvirtd
          
          #Set environment variables
          export NGINX_DIRECTORY=/data/pkg
          export RHCOSVERSION=4.3.0
          export VOLID=$(isoinfo -d -i ${NGINX_DIRECTORY}/rhcos-${RHCOSVERSION}-x86_64-installer.iso | awk '/Volume id/ { print $3 }')
          #Create a temporary directory for intermediate files
          TEMPDIR=$(mktemp -d)
          echo $VOLID
          echo $TEMPDIR
          
          
          cd ${TEMPDIR}
          # Extract the ISO content using guestfish (to avoid sudo mount)
          guestfish -a ${NGINX_DIRECTORY}/rhcos-${RHCOSVERSION}-x86_64-installer.iso \
            -m /dev/sda tar-out / - | tar xvf -
          
          #Define the function that rewrites the boot config files
          modify_cfg(){
            for file in "EFI/redhat/grub.cfg" "isolinux/isolinux.cfg"; do
            # Append the appropriate image and ignition URLs
              sed -e '/coreos.inst=yes/s|$| coreos.inst.install_dev=sda coreos.inst.image_url='"${URL}"'\/install\/'"${BIOSMODE}"'.raw.gz coreos.inst.ignition_url='"${URL}"'\/ignition\/'"${NODE}"'.ign ip='"${IP}"'::'"${GATEWAY}"':'"${NETMASK}"':'"${FQDN}"':'"${NET_INTERFACE}"':none:'"${DNS}"' nameserver='"${DNS}"'|' ${file} > $(pwd)/${NODE}_${file##*/}
            # Shorten the boot menu wait time
              sed -i -e 's/default vesamenu.c32/default linux/g' -e 's/timeout 600/timeout 10/g' $(pwd)/${NODE}_${file##*/}
            done
          }
          
          #Common ISO boot parameters: URL, gateway, netmask, DNS
          URL="http://192.168.128.30:8080"
          GATEWAY="192.168.128.254"
          NETMASK="255.255.255.0"
          DNS="192.168.128.30"
          
          #Variables for the bootstrap node
          NODE="bootstrap"
          IP="192.168.128.31"
          FQDN="bootstrap"
          BIOSMODE="bios"
          NET_INTERFACE="ens192"
          modify_cfg
          
          #Variables for the master1 node
          NODE="master1"
          IP="192.168.128.32"
          FQDN="master1"
          BIOSMODE="bios"
          NET_INTERFACE="ens192"
          modify_cfg
          
          #Variables for the master2 node
          NODE="master2"
          IP="192.168.128.33"
          FQDN="master2"
          BIOSMODE="bios"
          NET_INTERFACE="ens192"
          modify_cfg
          
          #Variables for the master3 node
          NODE="master3"
          IP="192.168.128.34"
          FQDN="master3"
          BIOSMODE="bios"
          NET_INTERFACE="ens192"
          modify_cfg
          
          #Variables for the worker1 node
          NODE="worker1"
          IP="192.168.128.35"
          FQDN="worker1"
          BIOSMODE="bios"
          NET_INTERFACE="ens192"
          modify_cfg
          
          #Variables for the worker2 node
          NODE="worker2"
          IP="192.168.128.36"
          FQDN="worker2"
          BIOSMODE="bios"
          NET_INTERFACE="ens192"
          modify_cfg
          
          
          # Build a separate installation ISO for each node
          # https://github.com/coreos/coreos-assembler/blob/master/src/cmd-buildextend-installer#L97-L103
          for node in bootstrap master1 master2 master3 worker1 worker2; do
            # Copy in that node's grub.cfg and isolinux.cfg files
            for file in "EFI/redhat/grub.cfg" "isolinux/isolinux.cfg"; do
              /bin/cp -f $(pwd)/${node}_${file##*/} ${file}
            done
            # Build the ISO image
            genisoimage -verbose -rock -J -joliet-long -volset ${VOLID} \
              -eltorito-boot isolinux/isolinux.bin -eltorito-catalog isolinux/boot.cat \
              -no-emul-boot -boot-load-size 4 -boot-info-table \
              -eltorito-alt-boot -efi-boot images/efiboot.img -no-emul-boot \
              -o ${NGINX_DIRECTORY}/${node}.iso .
          done
          
          # Clean up intermediate files
          cd
          rm -Rf ${TEMPDIR}
          
          cd ${NGINX_DIRECTORY}
          
          

          10. Install RHCOS on the node machines

          (1) Copy the customized ISO files to the VMware ESXi host in preparation for installing the nodes

          [root@misc pkg]# scp bootstrap.iso root@192.168.128.200:/vmfs/volumes/hdd/iso
          [root@misc pkg]# scp m*.iso root@192.168.128.200:/vmfs/volumes/hdd/iso
          [root@misc pkg]# scp w*.iso root@192.168.128.200:/vmfs/volumes/hdd/iso
          
          

          (2) Create the master machines per the plan and set them to boot from the ISO

          • After the boot menu appears, simply start the installation; the system automatically downloads the BIOS-mode raw image and config files and completes the install
          • After installation, eject the ISO so the machine does not boot back into the installer
          • Install in the order bootstrap, master1, master2, master3; only after the masters are installed and up should the workers be installed
          • Progress can be watched through the proxy at http://registry.ipincloud.com:9000/
          • Detailed bootstrap progress can be followed on the misc node.

          openshift-install --dir=/data/install wait-for bootstrap-complete --log-level debug
          

          Notes:

          • Make sure each ignition file is matched with the correct ISO
          • During my install, master1 reported etcdmain: member ab84b6a6e4a3cc9a has already been bootstrapped, which took a long time to analyze and resolve. When master1 first finished installing, its etcd component was installed and registered as a member automatically. After I reinstalled master1 from the ISO, the automatic etcd registration detected that this member already existed in the cluster and could not register again, so etcd on that node could never start. The fix:

          Manually edit the etcd yaml on the master1 node and append --initial-cluster-state=existing to the end of the exec etcd command, then delete the problem pod; the system reinstalls the etcd pod automatically and recovers. Once it starts normally, revert this change, otherwise machine-config will never complete.

          #
          [root@master1 /]# vi /etc/kubernetes/manifests/etcd-member.yaml
          
                exec etcd \
                  --initial-advertise-peer-urls=https://${ETCD_IPV4_ADDRESS}:2380 \
                  --cert-file=/etc/ssl/etcd/system:etcd-server:${ETCD_DNS_NAME}.crt \
                  --key-file=/etc/ssl/etcd/system:etcd-server:${ETCD_DNS_NAME}.key \
                  --trusted-ca-file=/etc/ssl/etcd/ca.crt \
                  --client-cert-auth=true \
                  --peer-cert-file=/etc/ssl/etcd/system:etcd-peer:${ETCD_DNS_NAME}.crt \
                  --peer-key-file=/etc/ssl/etcd/system:etcd-peer:${ETCD_DNS_NAME}.key \
                  --peer-trusted-ca-file=/etc/ssl/etcd/ca.crt \
                  --peer-client-cert-auth=true \
                  --advertise-client-urls=https://${ETCD_IPV4_ADDRESS}:2379 \
                  --listen-client-urls=https://0.0.0.0:2379 \
                  --listen-peer-urls=https://0.0.0.0:2380 \
                  --listen-metrics-urls=https://0.0.0.0:9978 \
                  --initial-cluster-state=existing
                  
          [root@master1 /]# crictl pods
          POD ID              CREATED             STATE               NAME                                                     NAMESPACE                                ATTEMPT
          c4686dc3e5f4f       38 minutes ago      Ready               etcd-member-master1.ocptest.ipincloud.com                openshift-etcd                           5        
          [root@master1 /]# crictl rmp xxx
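The manual vi edit can also be scripted. This sketch appends --initial-cluster-state=existing after the last etcd flag in a static pod manifest (add_existing_state_flag is a hypothetical helper; as noted above, remember to revert the change once etcd is healthy):

```shell
# Append --initial-cluster-state=existing after the --listen-metrics-urls flag.
add_existing_state_flag() {  # usage: add_existing_state_flag <manifest-file>
  sed -i 's|--listen-metrics-urls=https://0.0.0.0:9978|& \\\n          --initial-cluster-state=existing|' "$1"
}

# Example: add_existing_state_flag /etc/kubernetes/manifests/etcd-member.yaml
```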
          
          
          • Check whether installation is complete
            When INFO It is now safe to remove the bootstrap resources appears, the master nodes are installed and the control plane has moved to the master cluster.
          
          [root@misc install]# openshift-install --dir=/data/install wait-for bootstrap-complete --log-level debug
          DEBUG OpenShift Installer v4.3.0
          DEBUG Built from commit 2055609f95b19322ee6cfdd0bea73399297c4a3e
          INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocptest.ipincloud.com:6443...
          INFO API v1.16.2 up
          INFO Waiting up to 30m0s for bootstrapping to complete...
          DEBUG Bootstrap status: complete
          INFO It is now safe to remove the bootstrap resources
          [root@misc install]#
          

          (3) Install the workers

          • After the boot menu appears, simply start the installation; the system automatically downloads the BIOS-mode raw image and config files and completes the install
          • After installation, eject the ISO so the machine does not boot back into the installer
          • Progress can be watched through the proxy at http://registry.ipincloud.com:9000/
          • Detailed installation progress can also be followed on the misc node
          [root@misc redhat-operators-manifests]#  openshift-install --dir=/data/install wait-for install-complete --log-level debug
          DEBUG OpenShift Installer v4.3.0
          DEBUG Built from commit 2055609f95b19322ee6cfdd0bea73399297c4a3e
          INFO Waiting up to 30m0s for the cluster at https://api.ocptest.ipincloud.com:6443 to initialize...
          DEBUG Cluster is initialized
          INFO Waiting up to 10m0s for the openshift-console route to be created...
          DEBUG Route found in openshift-console namespace: console
          DEBUG Route found in openshift-console namespace: downloads
          DEBUG OpenShift console route is created
          INFO Install complete!
          INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/data/install/auth/kubeconfig'
          INFO Access the OpenShift web-console here:
          https://console-openshift-console.apps.ocptest.ipincloud.com
          INFO Login to the console with user: kubeadmin, password: pubmD-8Baaq-IX36r-WIWWf
          
          
          
          • The worker nodes' join requests must be approved

          List the CSRs awaiting approval

          [root@misc ~]# oc get csr
          NAME        AGE   REQUESTOR                                                                   CONDITION
          csr-7lln5   70m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
          csr-d48xk   69m   system:node:master1.ocptest.ipincloud.com                                   Approved,Issued
          csr-f2g7r   69m   system:node:master2.ocptest.ipincloud.com                                   Approved,Issued
          csr-gbn2n   69m   system:node:master3.ocptest.ipincloud.com                                   Approved,Issued
          csr-hwxwx   13m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
          csr-ppgxx   13m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
          csr-wg874   70m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
          csr-zkp79   70m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
          [root@misc ~]#
          

          Approve the pending CSRs

          oc get csr -ojson | jq -r '.items[] | select(.status=={} ) | .metadata.name' | xargs oc adm certificate approve
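The jq filter above selects CSRs whose status is still empty. As a sketch, it can be wrapped and exercised offline against saved `oc get csr -ojson` output (pending_csr_names is a hypothetical helper; it requires jq, which is already used above):

```shell
# Print the names of CSRs that have an empty status (i.e. still pending).
pending_csr_names() {  # reads `oc get csr -ojson` output on stdin
  jq -r '.items[] | select(.status == {}) | .metadata.name'
}

# Example: oc get csr -ojson | pending_csr_names | xargs -r oc adm certificate approve
```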
          

          (4) Start NFS on misc

          bash /data/pkg/ocp4-upi-helpernode/files/nfs-provisioner-setup.sh
          #Check the status
          oc get pods -n nfs-provisioner

          (5) Use NFS as storage for the internal OCP registry

          oc patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"storage":{"pvc":{"claim":""}}}}' --type=merge
          
          oc get clusteroperator image-registry
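The patch above sets an empty claim on the registry's PVC storage, which tells the image registry operator to create and bind a PVC itself (served here by the NFS provisioner). For reference, a minimal sketch of the resulting spec excerpt of the imageregistry config (field names per the operator API; values illustrative):

```yaml
# configs.imageregistry.operator.openshift.io/cluster (spec excerpt)
spec:
  managementState: Managed
  storage:
    pvc:
      claim: ""   # empty claim: the operator creates and binds a PVC itself
```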
          

          11. Configure login

          (1) Configure a regular administrator account

          Red Hat also offers developers a quick OpenShift development environment: CRC (CodeReady Containers). It can stand up an OpenShift environment in a few minutes, letting you easily try out OpenShift's features. Because it is implemented with virtualization, it needs a fairly powerful machine; the recommended configuration is 8 vCPUs and 32 GB of RAM (the more memory the better). With this environment you can quickly try CI/CD, S2I, serverless via Knative, and more.

          CRC (CodeReady Containers) installation steps


          1. Register a Red Hat developer account

          https://developers.redhat.com/about

          If you already have a Red Hat account, you can log in directly and then click Join; if not, clicking Join Now takes you into the Red Hat account creation flow.


          2. Download CRC


          https://cloud.redhat.com/openshift/create/local



          For the installation steps, see the documentation. On macOS there are only two: run the downloaded crc setup, then paste in the pull secret above, which is used to pull images from the Red Hat registries. This file also works for OKD.


          When startup completes normally, the username and password are printed.


          Installation complete

          Installation is complete; we can now explore the powerful features of Red Hat OpenShift.

