Tag: how to write copy

  • BYD electric buses enter the Japanese market, and some Japanese netizens scoff that they will "fall apart after a few runs"


    Recently, BYD's K9 electric bus made its debut in the Kyoto area of Japan, making BYD the first Chinese car brand to enter the Japanese market. According to local Japanese media, five BYD electric buses have been introduced in Kyoto, each certified to carry 69 passengers, and they will be operated by Kyoto Kyuko Bus Co., Ltd.

    Public information obtained by Huanqiu.com shows that this single-deck electric bus can travel about 250 km on a single charge. BYD also builds the K9 in the United States, where it sells for roughly US$800,000. Since BYD reached a pure-electric bus partnership with Movia, Denmark's largest public transport operator, in March 2011, the K9 has entered cities on five continents, from Frankfurt in Germany to Washington in the United States to London in the UK, and BYD has won a string of new-energy bus orders.

    Japanese reactions to the bus's debut have been sharply divided. On one hand, Japan's Ministry of Land, Infrastructure, Transport and Tourism gave the Chinese brand a "thumbs-up". At the launch ceremony held at the Kyoto National Museum, ministry official Sakabe Mitsuo said: "We are very pleased to welcome BYD's pure electric buses to Kyoto. This is the first time Kyoto's public transport system has adopted zero-emission pure electric buses, and operating them is deeply significant for the region's environment."

    But some Japanese netizens were far less calm and reacted strongly. One wrote online: "Who knows what it will look like after a year... you can just picture it falling apart as it runs! Either way, I absolutely refuse to ride it." Others questioned the electric bus's range: "How many hours can it actually run at a time? They won't say how many days a full charge lasts, or how long it can operate on one charge... I get the feeling the accident-handling and passenger-compensation costs for this bus will be very high!"

    Some went even further: "Rather than buying this cheap thing, they should put the trams back on the road!" Others objected to the Japanese media coverage: "This news just annoys me. The report even ends with the line 'from now on, most Chinese tourists visiting Kyoto will get to see BYD's imposing presence on Japanese streets.'"

    Quite a few people began to worry about Chinese products flooding into Japan, exclaiming: "(Made-in-China goods keep pouring into the Japanese market!) From TVs to computers to smartphones, and now it is cars' turn!"

    Some older Japanese netizens marveled at the rapid rise of Chinese manufacturing: "Japanese carmakers are building hybrid (HV) buses, but it seems no one here is making pure electric buses yet. The BYD K9 is presumably cheap, but I still hope safety comes first. In 1985 Japan donated two buses to Beijing; thirty years later, Chinese buses are entering Japan. Amazing! But first, let's see how this bus actually performs." Source: Huanqiu.com

    Site notice: this content is sourced from EnergyTrend (https://www.energytrend.com.tw/ev/). If there is any infringement, please contact us and we will handle it promptly.


  • Delta Electronics' EV traction motor enters the Volkswagen supply chain

    Power-supply makers are pushing hard into the automotive market. Delta Electronics' electric-vehicle traction motor, after three years of development, has entered the Volkswagen Group supply chain and begins shipping this year; Zippy Technology's (新巨) micro switches have won orders from Germany's Mercedes-Benz, BMW, Audi and Volkswagen, US-based Ford, and Japanese carmakers; and AcBel Polytech (康舒) is in talks with several international EV makers over its power converters.

    Delta chairman Yancey Hai said that with countries actively developing electric vehicles and the supporting infrastructure starting to move, Delta has recently made good progress in EV components, including shipping charging piles and converters to a well-known US electric-vehicle maker, and Delta's products have passed certification against the EV standards of Europe, the US, Japan and China.

    Hai revealed that Delta's EV traction motor, after three years of R&D, has been certified by a European carmaker.

    Site notice: this content is sourced from EnergyTrend (https://www.energytrend.com.tw/ev/). If there is any infringement, please contact us and we will handle it promptly.


  • Lei Jun: Xiaomi will not make electric cars within three to five years

    Today Lei Jun attended the 15th annual meeting of the Yabuli China Entrepreneurs Forum and gave a speech. Addressing rumors that Xiaomi would enter the electric-vehicle business, Lei said Xiaomi will not touch it within the next three to five years; its main goal is to do its existing products well.

    He made clear that Xiaomi's product layout is already complete, covering three categories and five kinds of products, including phones, tablets, TVs, TV boxes and routers, mainly aimed at the smart home. "(Not doing electric cars) has nothing to do with how good or bad the market is. Focusing on doing our current handful of products well is already hard enough. It's not that we aren't optimistic about it; we simply don't have the bandwidth."

    Site notice: this content is sourced from EnergyTrend (https://www.energytrend.com.tw/ev/). If there is any infringement, please contact us and we will handle it promptly.


  • Real-time log processing with Flink and Drools


    1. Background

    The logging system ingests many kinds of logs in complex and varied formats; the main ones are:

    • Text logs collected by filebeat, in a variety of formats
    • Operating-system logs collected by winlogbeat
    • syslog logs reported by devices to logstash
    • Business logs ingested into kafka

    Logs arriving through these different channels present two main problems:

    • Formats are inconsistent, irregular and insufficiently standardized
    • How to extract the metrics users care about from the various logs and mine more business value

    To solve these two problems, we built a real-time log processing service based on Flink and the Drools rule engine.

    2. System architecture

    The architecture is fairly simple:

     

    All logs are funneled through kafka, which acts as the log relay.

    Flink consumes the data from kafka and pulls rules from the Drools rule engine via an API call; after parsing the logs, it writes the parsed data to Elasticsearch, where it serves log search, analysis and other needs.

    To monitor the parsing pipeline in real time, Flink also writes processing statistics to Redis, such as the number of logs handled per minute and the per-source-IP volume of each log type, for monitoring and reporting.

    3. Modules

    The project is named eagle.

    eagle-api: a Spring Boot based API service for writing and reading Drools rules.

    eagle-common: shared utility classes.

    eagle-log: the Flink-based log processing service.

    Let's focus on eagle-log:

    Integration with kafka, ES and Redis

    Kafka and ES integration is straightforward using the official connectors (flink-connector-kafka-0.10 and flink-connector-elasticsearch6); see the code for details.

    For Redis we started with the redis connector provided by org.apache.bahir, but it turned out not to be flexible enough, so we switched to Jedis (a bare-bones sink sketch is shown below).
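
    As an illustration only, a simple Jedis sink could look like the sketch below; the Redis address, the key layout and the getters on LogStatWindowResult (getIndex, getWindowStart, getCount) are assumptions rather than the project's actual code.

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
    import redis.clients.jedis.Jedis;
    
    // Sketch only: write one aggregated window result to Redis per invocation.
    public class LogStatRedisSink extends RichSinkFunction<LogStatWindowResult> {
    
        private transient Jedis jedis; // created in open() because Jedis is not serializable
    
        @Override
        public void open(Configuration parameters) {
            jedis = new Jedis("127.0.0.1", 6379); // assumed address; read it from configuration in practice
        }
    
        @Override
        public void invoke(LogStatWindowResult stat, Context context) {
            // assumed key layout: one hash per log index, one field per window start
            jedis.hset("log:stat:" + stat.getIndex(),
                    String.valueOf(stat.getWindowStart()),
                    String.valueOf(stat.getCount()));
        }
    
        @Override
        public void close() {
            if (jedis != null) {
                jedis.close();
            }
        }
    }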

    When writing the statistics to Redis, the first version grouped the data with keyBy, cached the grouped records, and did the aggregation in the sink before writing. Reference code:

            String name = "redis-agg-log";
            DataStream<Tuple2<String, List<LogEntry>>> keyedStream = dataSource.keyBy((KeySelector<LogEntry, String>) log -> log.getIndex())
                    .timeWindow(Time.seconds(windowTime)).trigger(new CountTriggerWithTimeout<>(windowCount, TimeCharacteristic.ProcessingTime))
                    .process(new ProcessWindowFunction<LogEntry, Tuple2<String, List<LogEntry>>, String, TimeWindow>() {
                        @Override
                        public void process(String s, Context context, Iterable<LogEntry> iterable, Collector<Tuple2<String, List<LogEntry>>> collector) {
                            ArrayList<LogEntry> logs = Lists.newArrayList(iterable);
                            if (logs.size() > 0) {
                                collector.collect(new Tuple2(s, logs));
                            }
                        }
                    }).setParallelism(redisSinkParallelism).name(name).uid(name);

    We later found this consumed a lot of memory. There is actually no need to cache the raw data of the whole group; a running statistic is enough. After optimization:

            String name = "redis-agg-log";
            DataStream<LogStatWindowResult> keyedStream = dataSource.keyBy((KeySelector<LogEntry, String>) log -> log.getIndex())
                    .timeWindow(Time.seconds(windowTime))
                    .trigger(new CountTriggerWithTimeout<>(windowCount, TimeCharacteristic.ProcessingTime))
                    .aggregate(new LogStatAggregateFunction(), new LogStatWindowFunction())
                    .setParallelism(redisSinkParallelism).name(name).uid(name);

    Here Flink's aggregate function and accumulator are used; doing the statistics through Flink's agg operation relieves the memory pressure. A rough sketch of such an aggregate function is shown below.
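
    For illustration, a minimal aggregate function in this style might look like the following; the accumulator shape and the LogEntry getter (getIp) are assumptions, and the real LogStatAggregateFunction / LogStatWindowFunction live in the eagle repository.

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.flink.api.common.functions.AggregateFunction;
    
    // Sketch only: incrementally count logs per window and per source IP instead of caching raw records.
    public class LogStatAggregateFunction implements
            AggregateFunction<LogEntry, LogStatAggregateFunction.LogStatAccumulator, LogStatAggregateFunction.LogStatAccumulator> {
    
        public static class LogStatAccumulator {
            public long total;                                      // logs seen in this window
            public Map<String, Long> countPerIp = new HashMap<>(); // logs per source IP (assumed field)
        }
    
        @Override
        public LogStatAccumulator createAccumulator() {
            return new LogStatAccumulator();
        }
    
        @Override
        public LogStatAccumulator add(LogEntry log, LogStatAccumulator acc) {
            acc.total++;
            acc.countPerIp.merge(log.getIp(), 1L, Long::sum);       // LogEntry#getIp() is assumed
            return acc;
        }
    
        @Override
        public LogStatAccumulator getResult(LogStatAccumulator acc) {
            return acc;                                             // the window function wraps this into the final result
        }
    
        @Override
        public LogStatAccumulator merge(LogStatAccumulator a, LogStatAccumulator b) {
            a.total += b.total;
            b.countPerIp.forEach((ip, n) -> a.countPerIp.merge(ip, n, Long::sum));
            return a;
        }
    }

    Because only the small accumulator is kept in window state, memory use no longer grows with the number of raw records in each window.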

    Broadcasting the Drools rules with broadcast state

    1. The Drools rule stream is broadcast via broadcast map state.

    2. The kafka data stream is connected to the rule stream to process the logs.

    //broadcast the rule stream
    env.addSource(new RuleSourceFunction(ruleUrl)).name(ruleName).uid(ruleName).setParallelism(1)
                    .broadcast(ruleStateDescriptor);
    
    //kafka data stream
    FlinkKafkaConsumer010<LogEntry> source = new FlinkKafkaConsumer010<>(kafkaTopic, new LogSchema(), properties);
    env.addSource(source).name(kafkaTopic).uid(kafkaTopic).setParallelism(kafkaParallelism);
    
    //connect the data stream with the rule stream to process the logs
    BroadcastConnectedStream<LogEntry, RuleBase> connectedStreams = dataSource.connect(ruleSource);
    connectedStreams.process(new LogProcessFunction(ruleStateDescriptor, ruleBase)).setParallelism(processParallelism).name(name).uid(name);

    The full details are in the open-source code; a simplified sketch of the broadcast process function follows.
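
    For readers who do not want to open the repository right away, here is what a broadcast process function like LogProcessFunction could look like; the state key and the applyRules helper are assumptions, not the project's actual implementation.

    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
    import org.apache.flink.util.Collector;
    
    // Sketch only: apply the most recently broadcast drools rules to every log record.
    public class LogProcessFunction extends BroadcastProcessFunction<LogEntry, RuleBase, LogEntry> {
    
        private final MapStateDescriptor<String, RuleBase> ruleStateDescriptor;
        private final RuleBase defaultRuleBase; // fallback until the first broadcast arrives
    
        public LogProcessFunction(MapStateDescriptor<String, RuleBase> ruleStateDescriptor, RuleBase defaultRuleBase) {
            this.ruleStateDescriptor = ruleStateDescriptor;
            this.defaultRuleBase = defaultRuleBase;
        }
    
        @Override
        public void processElement(LogEntry log, ReadOnlyContext ctx, Collector<LogEntry> out) throws Exception {
            RuleBase rules = ctx.getBroadcastState(ruleStateDescriptor).get("rules"); // "rules" key is assumed
            if (rules == null) {
                rules = defaultRuleBase;
            }
            out.collect(applyRules(rules, log));
        }
    
        @Override
        public void processBroadcastElement(RuleBase newRules, Context ctx, Collector<LogEntry> out) throws Exception {
            // every parallel subtask receives the updated rules and stores them in broadcast state
            ctx.getBroadcastState(ruleStateDescriptor).put("rules", newRules);
        }
    
        private LogEntry applyRules(RuleBase rules, LogEntry log) {
            // placeholder: in the real service this is where a drools session fires the parsing rules
            return log;
        }
    }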

    4. Summary

    This system provides a reference implementation of real-time data processing on Flink, integrated with kafka, redis and elasticsearch; with the configurable Drools rule engine, the data-processing logic becomes configurable and dynamic.

    The processed data can also be fed to other sinks, providing parsing, cleansing and standardization services to other business platforms.

     

    Project repository:

    https://github.com/luxiaoxun/eagle

     

    Site notice: this content is sourced from Cnblogs (博客园). If there is any infringement, please contact us and we will handle it promptly.


  • [Original] Binary deployment of k8s 1.18.3


    A quick aside: there is also a one-click ansible deployment at https://github.com/liyongjian5179/k8s-ansible

    1. Preliminary information

    1.1 Versions

    kube_version: v1.18.3

    etcd_version: v3.4.9

    flannel: v0.12.0

    coredns: v1.6.7

    cni-plugins: v0.8.6

    pod CIDR: 10.244.0.0/16

    service CIDR: 10.96.0.0/12

    kubernetes internal (service) address: 10.96.0.1

    coredns address: 10.96.0.10

    apiserver domain name: lb.5179.top

    1.2 Host layout

    Hostname      IP            Role and components          k8s components
    centos7-nginx 10.10.10.127 nginx 四層代理 nginx
    centos7-a 10.10.10.128 master,node,etcd,flannel kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
    centos7-b 10.10.10.129 master,node,etcd,flannel kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
    centos7-c 10.10.10.130 master,node,etcd,flannel kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
    centos7-d 10.10.10.131 node,flannel kubelet kube-proxy
    centos7-e 10.10.10.132 node,flannel kubelet kube-proxy

    2. Pre-deployment environment preparation

    centos7-nginx acts as the control host and needs passwordless SSH to the other machines (a sketch of the key distribution follows).
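
    The original post does not show the key distribution itself; a minimal sketch, assuming root password login is still possible on every host:

    # Run once on centos7-nginx: generate a key pair and push the public key to every k8s host.
    ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa
    for ip in 10.10.10.{128..132}; do
      ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
    done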

    2.1 Install ansible for batch operations

    The installation steps are omitted here.

    [root@centos7-nginx ~]# cat /etc/ansible/hosts
    [masters]
    10.10.10.128
    10.10.10.129
    10.10.10.130
    
    [nodes]
    10.10.10.131
    10.10.10.132
    
    [k8s]
    10.10.10.[128:132]
    

    Push the /etc/hosts file from the control host to all machines

    cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.10.10.127 centos7-nginx lb.5179.top
    10.10.10.128 centos7-a
    10.10.10.129 centos7-b
    10.10.10.130 centos7-c
    10.10.10.131 centos7-d
    10.10.10.132 centos7-e
    
    ansible k8s -m shell -a "mv /etc/hosts /etc/hosts.bak"
    ansible k8s -m copy -a "src=/etc/hosts dest=/etc/hosts"
    

    2.2 Disable the firewall and SELinux

    # disable the firewall
    ansible k8s -m shell -a "systemctl stop firewalld &&  systemctl disable firewalld"
    # disable selinux
    ansible k8s -m shell -a "setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"
    

    2.3 Disable the swap partition

    ansible k8s -m shell -a "swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab"
    

    2.4 Install docker and a registry mirror

    vim ./install_docker.sh
    #!/bin/bash
    #
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum -y install docker-ce-19.03.11
    systemctl enable docker
    systemctl start docker
    docker version
    
    # configure a registry mirror
    tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://ajpb7tdn.mirror.aliyuncs.com"],
      "log-opts": {"max-size":"100m", "max-file":"5"}
    }
    EOF
    systemctl daemon-reload
    systemctl restart docker
    

    Then run it on all machines with ansible

    ansible k8s -m script -a "./install_docker.sh"
    

    2.5 Tune kernel parameters

    vim 99-k8s.conf
    #sysctls for k8s node config
    net.ipv4.ip_forward=1
    net.ipv4.tcp_slow_start_after_idle=0
    net.core.rmem_max=16777216
    fs.inotify.max_user_watches=524288
    kernel.softlockup_all_cpu_backtrace=1
    kernel.softlockup_panic=1
    fs.file-max=2097152
    fs.inotify.max_user_instances=8192
    fs.inotify.max_queued_events=16384
    vm.max_map_count=262144
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.may_detach_mounts=1
    net.core.netdev_max_backlog=16384
    net.ipv4.tcp_wmem=4096 12582912 16777216
    net.core.wmem_max=16777216
    net.core.somaxconn=32768
    net.ipv4.ip_forward=1
    net.ipv4.tcp_max_syn_backlog=8096
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.tcp_rmem=4096 12582912 16777216
    

    Copy it to the remote hosts

    ansible k8s -m copy -a "src=./99-k8s.conf dest=/etc/sysctl.d/"
    ansible k8s -m shell -a "cd /etc/sysctl.d/ && sysctl --system"
    

    2.6 Create the required directories

    For the masters

    vim mkdir_k8s_master.sh
    #!/bin/bash
    mkdir /opt/etcd/{bin,data,cfg,ssl} -p
    mkdir /opt/kubernetes/{bin,cfg,ssl,logs}  -p
    mkdir /opt/kubernetes/logs/{kubelet,kube-proxy,kube-scheduler,kube-apiserver,kube-controller-manager} -p
    
    echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
    echo 'export PATH=$PATH:/opt/etcd/bin' >> /etc/profile
    source /etc/profile
    

    For the nodes

    vim mkdir_k8s_node.sh
    #!/bin/bash
    mkdir /opt/kubernetes/{bin,cfg,ssl,logs}  -p
    mkdir /opt/kubernetes/logs/{kubelet,kube-proxy} -p
    
    echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
    source /etc/profile
    

    Run them with ansible

    ansible masters -m script -a "./mkdir_k8s_master.sh"
    ansible nodes -m script -a "./mkdir_k8s_node.sh"
    

    2.7 Prepare the load balancer

    To give the three masters high availability you can use a cloud provider's SLB, or two nginx instances plus keepalived.

    Here, since this is a lab environment, a single nginx is used as a layer-4 proxy.

    # install nginx
    [root@centos7-nginx ~]# yum install -y nginx
    # create a sub configuration file
    [root@centos7-nginx ~]# cd /etc/nginx/conf.d/
    [root@centos7-nginx conf.d]# vim lb.tcp
    stream {
        upstream master {
            hash $remote_addr consistent;
            server 10.10.10.128:6443 max_fails=3 fail_timeout=30;
            server 10.10.10.129:6443 max_fails=3 fail_timeout=30;
            server 10.10.10.130:6443 max_fails=3 fail_timeout=30;
        }
    
        server {
            listen 6443;
            proxy_pass master;
        }
    }
    # include this file from the main configuration file
    [root@centos7-nginx ~]# cd /etc/nginx/
    [root@centos7-nginx nginx]# vim nginx.conf
    ...
    include /etc/nginx/conf.d/*.tcp;
    ...
    # enable nginx at boot and start it
    [root@centos7-nginx nginx]# systemctl enable nginx
    Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
    [root@centos7-nginx nginx]# systemctl start nginx
    

    3. Deployment

    3.1 Generate certificates

    Run the script

    [root@centos7-nginx ~]# mkdir ssl && cd ssl
    [root@centos7-nginx ssl]# vim ./k8s-certificate.sh
    [root@centos7-nginx ssl]# ./k8s-certificate.sh 10.10.10.127,10.10.10.128,10.10.10.129,10.10.10.130,lb.5179.top,10.96.0.1
    

    IP explanation:

    • 10.10.10.127|lb.5179.top: nginx

    • 10.10.10.128|129|130: masters

    • 10.96.0.1: kubernetes (the first IP of the service CIDR)

    The script is as follows

    #!/bin/bash
    # binary deployment: generate the k8s certificate files
    
    if [ $# -ne 1 ];then
        echo "please user in: `basename $0` MASTERS[10.10.10.127,10.10.10.128,10.10.10.129,10.10.10.130,lb.5179.top,10.96.0.1]"
        exit 1
    fi
    MASTERS=$1
    
    KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
    
    
    
    for i in `echo $MASTERS | tr ',' ' '`;do
       if [ -z $IPS ];then
            IPS=\"$i\",
       else
            IPS=$IPS\"$i\",
       fi
    done
    
    
    command_exists() {
        command -v "$@" > /dev/null 2>&1
    }
    
    if command_exists cfssl; then
        echo "命令已存在"
    else
        # download the cfssl binaries
        wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
        wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
        wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    
        # make them executable
        chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
    
        # move them into /usr/local/bin
        mv cfssl_linux-amd64 /usr/local/bin/cfssl
        mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
        mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
    fi
    
    
    # certificates are signed for 10 years by default
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat > ca-csr.json <<EOF
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    
    #-----------------------
    
    cat > server-csr.json <<EOF
    {
        "CN": "kubernetes",
        "hosts": [
          ${IPS}
          "127.0.0.1",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    
    # or alternatively
    
    #cat > server-csr.json <<EOF
    #{
    #    "CN": "kubernetes",
    #    "key": {
    #        "algo": "rsa",
    #        "size": 2048
    #    },
    #    "names": [
    #        {
    #            "C": "CN",
    #            "L": "BeiJing",
    #            "ST": "BeiJing",
    #            "O": "k8s",
    #            "OU": "System"
    #        }
    #    ]
    #}
    #EOF
    #
    #cfssl gencert \
    #  -ca=ca.pem \
    #  -ca-key=ca-key.pem \
    #  -config=ca-config.json \
    #  -hostname=${MASTERS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
    #  -profile=kubernetes \
    #  server-csr.json | cfssljson -bare server
    
    
    
    
    #-----------------------
    
    cat > admin-csr.json <<EOF
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF
    
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      admin-csr.json | cfssljson -bare admin
    
    #-----------------------
    
    cat > kube-proxy-csr.json <<EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      kube-proxy-csr.json | cfssljson -bare kube-proxy
    
    
    # Note: the "CN" must be exactly "system:metrics-server", because this name is used later for authorization; otherwise requests will be rejected as anonymous
    cat > metrics-server-csr.json <<EOF
    {
      "CN": "system:metrics-server",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "system"
        }
      ]
    }
    EOF
    
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      metrics-server-csr.json | cfssljson -bare metrics-server
    
    
    for item in $(ls *.pem |grep -v key) ;do echo ======================$item===================;openssl x509 -in $item -text -noout| grep Not;done
    
    #[root@aliyun k8s]# for item in $(ls *.pem |grep -v key) ;do echo ======================$item===================;openssl x509 -in $item -text -noout| grep Not;done
    #======================admin.pem====================
    #            Not Before: Jun 18 14:32:00 2020 GMT
    #            Not After : Jun 16 14:32:00 2030 GMT
    #======================ca.pem=======================
    #            Not Before: Jun 18 14:32:00 2020 GMT
    #            Not After : Jun 17 14:32:00 2025 GMT
    #======================kube-proxy.pem===============
    #            Not Before: Jun 18 14:32:00 2020 GMT
    #            Not After : Jun 16 14:32:00 2030 GMT
    #======================metrics-server.pem===========
    #            Not Before: Jun 18 14:32:00 2020 GMT
    #            Not After : Jun 16 14:32:00 2030 GMT
    #======================server.pem===================
    #            Not Before: Jun 18 14:32:00 2020 GMT
    #            Not After : Jun 16 14:32:00 2030 GMT
    

    Note: the CA certificate generated by cfssl has a fixed default validity of 5 years

    https://github.com/cloudflare/cfssl/blob/793fa93522ffd9a66d743ce4fa0958b6662ac619/initca/initca.go#L224

    // CAPolicy contains the CA issuing policy as default policy.
    var CAPolicy = func() *config.Signing {
    	return &config.Signing{
    		Default: &config.SigningProfile{
    			Usage:        []string{"cert sign", "crl sign"},
    			ExpiryString: "43800h",
    			Expiry:       5 * helpers.OneYear,
    			CAConstraint: config.CAConstraint{IsCA: true},
    		},
    	}
    }
    

    You can change the CA expiry by modifying the source and recompiling, or by adding the following to ca-csr.json (a full example follows the snippet)

    "ca": {
          "expiry": "438000h"   #---> 50年
        }
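
    Putting the two together, a modified ca-csr.json would look roughly like this (the same fields the script above generates, plus the extra "ca" block; 438000h, roughly 50 years, is just the value from the comment):

    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ],
        "ca": {
            "expiry": "438000h"
        }
    }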
    

    3.2 Distribute the certificates

    3.2.1 Copy the certificates used by the etcd cluster

    [root@centos7-nginx ~]# cd ssl
    [root@centos7-nginx ssl]#
    [root@centos7-nginx ssl]# ansible masters -m copy -a "src=./ca.pem dest=/opt/etcd/ssl"
    [root@centos7-nginx ssl]# ansible masters -m copy -a "src=./server.pem dest=/opt/etcd/ssl"
    [root@centos7-nginx ssl]# ansible masters -m copy -a "src=./server-key.pem dest=/opt/etcd/ssl"
    

    3.2.2 Copy the certificates used by the k8s cluster

    [root@centos7-nginx ~]# cd ssl
    [root@centos7-nginx ssl]#
    [root@centos7-nginx ssl]# scp *.pem  root@10.10.10.128:/opt/kubernetes/ssl/
    [root@centos7-nginx ssl]# scp *.pem  root@10.10.10.129:/opt/kubernetes/ssl/
    [root@centos7-nginx ssl]# scp *.pem  root@10.10.10.130:/opt/kubernetes/ssl/
    [root@centos7-nginx ssl]# scp *.pem  root@10.10.10.131:/opt/kubernetes/ssl/
    [root@centos7-nginx ssl]# scp *.pem  root@10.10.10.132:/opt/kubernetes/ssl/
    

    3.3 Install the etcd cluster

    Download the etcd binary package and push the executables to /opt/etcd/bin/ on each master node

    [root@centos7-nginx ~]# mkdir ./etcd && cd ./etcd
    [root@centos7-nginx etcd]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
    [root@centos7-nginx etcd]# tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
    [root@centos7-nginx etcd]# cd etcd-v3.4.9-linux-amd64
    [root@centos7-nginx etcd-v3.4.9-linux-amd64]# ll
    total 40540
    drwxr-xr-x. 14 630384594 600260513     4096 May 22 03:54 Documentation
    -rwxr-xr-x.  1 630384594 600260513 23827424 May 22 03:54 etcd
    -rwxr-xr-x.  1 630384594 600260513 17612384 May 22 03:54 etcdctl
    -rw-r--r--.  1 630384594 600260513    43094 May 22 03:54 README-etcdctl.md
    -rw-r--r--.  1 630384594 600260513     8431 May 22 03:54 README.md
    -rw-r--r--.  1 630384594 600260513     7855 May 22 03:54 READMEv2-etcdctl.md
    
    [root@centos7-nginx etcd-v3.4.9-linux-amd64]# ansible masters -m copy -a "src=./etcd dest=/opt/etcd/bin mode=755"
    [root@centos7-nginx etcd-v3.4.9-linux-amd64]# ansible masters -m copy -a "src=./etcdctl dest=/opt/etcd/bin mode=755"
    

    Write the etcd configuration script

    #!/bin/bash
    # usage:
    #./etcd.sh etcd01 10.10.10.128 etcd01=https://10.10.10.128:2380,etcd02=https://10.10.10.129:2380,etcd03=https://10.10.10.130:2380
    #./etcd.sh etcd02 10.10.10.129 etcd01=https://10.10.10.128:2380,etcd02=https://10.10.10.129:2380,etcd03=https://10.10.10.130:2380
    #./etcd.sh etcd03 10.10.10.130 etcd01=https://10.10.10.128:2380,etcd02=https://10.10.10.129:2380,etcd03=https://10.10.10.130:2380
    
    ETCD_NAME=${1:-"etcd01"}
    ETCD_IP=${2:-"127.0.0.1"}
    ETCD_CLUSTER=${3:-"etcd01=https://127.0.0.1:2379"}
    
    # ETCD version selection [3.3, 3.4]
    # use version 3.3.14 or later: https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/#%E5%B7%B2%E7%9F%A5%E9%97%AE%E9%A2%98-%E5%85%B7%E6%9C%89%E5%AE%89%E5%85%A8%E7%AB%AF%E7%82%B9%E7%9A%84-etcd-%E5%AE%A2%E6%88%B7%E7%AB%AF%E5%9D%87%E8%A1%A1%E5%99%A8
    
    ETCD_VERSION=3.4.9
    
    if [ ${ETCD_VERSION%.*} == "3.4" ] ;then
    
    cat <<EOF >/opt/etcd/cfg/etcd.yml
    #etcd ${ETCD_VERSION}
    name: ${ETCD_NAME}
    data-dir: /opt/etcd/data
    listen-peer-urls: https://${ETCD_IP}:2380
    listen-client-urls: https://${ETCD_IP}:2379,https://127.0.0.1:2379
    
    advertise-client-urls: https://${ETCD_IP}:2379
    initial-advertise-peer-urls: https://${ETCD_IP}:2380
    initial-cluster: ${ETCD_CLUSTER}
    initial-cluster-token: etcd-cluster
    initial-cluster-state: new
    enable-v2: true
    
    client-transport-security:
      cert-file: /opt/etcd/ssl/server.pem
      key-file: /opt/etcd/ssl/server-key.pem
      client-cert-auth: false
      trusted-ca-file: /opt/etcd/ssl/ca.pem
      auto-tls: false
    
    peer-transport-security:
      cert-file: /opt/etcd/ssl/server.pem
      key-file: /opt/etcd/ssl/server-key.pem
      client-cert-auth: false
      trusted-ca-file: /opt/etcd/ssl/ca.pem
      auto-tls: false
    
    debug: false
    logger: zap
    log-outputs: [stderr]
    EOF
    
    else
    cat <<EOF >/opt/etcd/cfg/etcd.yml
    #etcd ${ETCD_VERSION}
    name: ${ETCD_NAME}
    data-dir: /opt/etcd/data
    listen-peer-urls: https://${ETCD_IP}:2380
    listen-client-urls: https://${ETCD_IP}:2379,https://127.0.0.1:2379
    
    advertise-client-urls: https://${ETCD_IP}:2379
    initial-advertise-peer-urls: https://${ETCD_IP}:2380
    initial-cluster: ${ETCD_CLUSTER}
    initial-cluster-token: etcd-cluster
    initial-cluster-state: new
    
    client-transport-security:
      cert-file: /opt/etcd/ssl/server.pem
      key-file: /opt/etcd/ssl/server-key.pem
      client-cert-auth: false
      trusted-ca-file: /opt/etcd/ssl/ca.pem
      auto-tls: false
    
    peer-transport-security:
      cert-file: /opt/etcd/ssl/server.pem
      key-file: /opt/etcd/ssl/server-key.pem
      peer-client-cert-auth: false
      trusted-ca-file: /opt/etcd/ssl/ca.pem
      auto-tls: false
    
    debug: false
    log-package-levels: etcdmain=CRITICAL,etcdserver=DEBUG
    log-outputs: default
    EOF
    
    fi
    
    cat <<EOF >/usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    Documentation=https://github.com/etcd-io/etcd
    Conflicts=etcd.service
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    LimitNOFILE=65536
    Restart=on-failure
    RestartSec=5s
    TimeoutStartSec=0
    ExecStart=/opt/etcd/bin/etcd --config-file=/opt/etcd/cfg/etcd.yml
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable etcd
    systemctl restart etcd
    

    Push it to the master machines

    ansible masters -m copy -a "src=./etcd.sh dest=/opt/etcd/bin mode=755"
    

    Log in to each of the three machines and run the script

    [root@centos7-a bin]# ./etcd.sh etcd01 10.10.10.128 etcd01=https://10.10.10.128:2380,etcd02=https://10.10.10.129:2380,etcd03=https://10.10.10.130:2380
    [root@centos7-b bin]# ./etcd.sh etcd02 10.10.10.129 etcd01=https://10.10.10.128:2380,etcd02=https://10.10.10.129:2380,etcd03=https://10.10.10.130:2380
    [root@centos7-c bin]# ./etcd.sh etcd03 10.10.10.130 etcd01=https://10.10.10.128:2380,etcd02=https://10.10.10.129:2380,etcd03=https://10.10.10.130:2380
    

    Verify that the cluster is healthy

    ### 3.4.9
    [root@centos7-a ~]# ETCDCTL_API=3 etcdctl --write-out="table" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints=https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379 endpoint health
    +---------------------------+--------+-------------+-------+
    |         ENDPOINT          | HEALTH |    TOOK     | ERROR |
    +---------------------------+--------+-------------+-------+
    | https://10.10.10.128:2379 |   true | 31.126223ms |       |
    | https://10.10.10.129:2379 |   true | 28.698669ms |       |
    | https://10.10.10.130:2379 |   true | 32.508681ms |       |
    +---------------------------+--------+-------------+-------+
    

    List the cluster members

    [root@centos7-a ~]# ETCDCTL_API=3 etcdctl --write-out="table" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints=https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379 member list
    +------------------+---------+--------+---------------------------+---------------------------+------------+
    |        ID        | STATUS  |  NAME  |        PEER ADDRS         |       CLIENT ADDRS        | IS LEARNER |
    +------------------+---------+--------+---------------------------+---------------------------+------------+
    | 2cec243d35ad0881 | started | etcd02 | https://10.10.10.129:2380 | https://10.10.10.129:2379 |      false |
    | c6e694d272df93e8 | started | etcd03 | https://10.10.10.130:2380 | https://10.10.10.130:2379 |      false |
    | e9b57a5a8276394a | started | etcd01 | https://10.10.10.128:2380 | https://10.10.10.128:2379 |      false |
    +------------------+---------+--------+---------------------------+---------------------------+------------+
    

    Create etcdctl aliases; run this on each of the three machines

    vim .bashrc
    alias etcdctl2="ETCDCTL_API=2 etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints=https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379"
    alias etcdctl3="ETCDCTL_API=3 etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints=https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379"
    
    source .bashrc
    

    3.3 Install the k8s components

    3.3.1 Download the binary packages

    [root@centos7-nginx ~]# mkdir k8s-1.18.3 && cd k8s-1.18.3/
    [root@centos7-nginx k8s-1.18.3]# wget https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz
    [root@centos7-nginx k8s-1.18.3]# tar xf kubernetes-server-linux-amd64.tar.gz
    [root@centos7-nginx k8s-1.18.3]# cd kubernetes
    [root@centos7-nginx kubernetes]# ll
    total 33092
    drwxr-xr-x. 2 root root        6 May 20 21:32 addons
    -rw-r--r--. 1 root root 32587733 May 20 21:32 kubernetes-src.tar.gz
    -rw-r--r--. 1 root root  1297746 May 20 21:32 LICENSES
    drwxr-xr-x. 3 root root       17 May 20 21:27 server
    [root@centos7-nginx kubernetes]# cd server/bin/
    [root@centos7-nginx bin]# ll
    total 1087376
    -rwxr-xr-x. 1 root root  48128000 May 20 21:32 apiextensions-apiserver
    -rwxr-xr-x. 1 root root  39813120 May 20 21:32 kubeadm
    -rwxr-xr-x. 1 root root 120668160 May 20 21:32 kube-apiserver
    -rw-r--r--. 1 root root         8 May 20 21:27 kube-apiserver.docker_tag
    -rw-------. 1 root root 174558720 May 20 21:27 kube-apiserver.tar
    -rwxr-xr-x. 1 root root 110059520 May 20 21:32 kube-controller-manager
    -rw-r--r--. 1 root root         8 May 20 21:27 kube-controller-manager.docker_tag
    -rw-------. 1 root root 163950080 May 20 21:27 kube-controller-manager.tar
    -rwxr-xr-x. 1 root root  44032000 May 20 21:32 kubectl
    -rwxr-xr-x. 1 root root 113283800 May 20 21:32 kubelet
    -rwxr-xr-x. 1 root root  38379520 May 20 21:32 kube-proxy
    -rw-r--r--. 1 root root         8 May 20 21:28 kube-proxy.docker_tag
    -rw-------. 1 root root 119099392 May 20 21:28 kube-proxy.tar
    -rwxr-xr-x. 1 root root  42950656 May 20 21:32 kube-scheduler
    -rw-r--r--. 1 root root         8 May 20 21:27 kube-scheduler.docker_tag
    -rw-------. 1 root root  96841216 May 20 21:27 kube-scheduler.tar
    -rwxr-xr-x. 1 root root   1687552 May 20 21:32 mounter
    

    Copy the binaries to the target machines

    # masters
    [root@centos7-nginx bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@10.10.10.128:/opt/kubernetes/bin/
    [root@centos7-nginx bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@10.10.10.129:/opt/kubernetes/bin/
    [root@centos7-nginx bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@10.10.10.130:/opt/kubernetes/bin/
    
    # nodes
    [root@centos7-nginx bin]# scp kubelet kube-proxy root@10.10.10.131:/opt/kubernetes/bin/
    [root@centos7-nginx bin]# scp kubelet kube-proxy root@10.10.10.132:/opt/kubernetes/bin/
    
    # local machine
    [root@centos7-nginx bin]# cp kubectl /usr/local/bin/
    

    3.3.2 Create the node kubeconfig files

    • Create the TLS Bootstrapping Token
    • Create the kubelet kubeconfig
    • Create the kube-proxy kubeconfig
    • Create the admin kubeconfig
    [root@centos7-nginx ~]# cd ~/ssl/
    [root@centos7-nginx ssl]# vim kubeconfig.sh # edit the KUBE_APISERVER address on line 10
    [root@centos7-nginx ssl]# bash ./kubeconfig.sh
    Cluster "kubernetes" set.
    User "kubelet-bootstrap" set.
    Context "default" created.
    Switched to context "default".
    Cluster "kubernetes" set.
    User "kube-proxy" set.
    Context "default" created.
    Switched to context "default".
    Cluster "kubernetes" set.
    User "admin" set.
    Context "default" created.
    Switched to context "default".
    

    The script is as follows:

    # create the TLS Bootstrapping Token
    export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    cat > token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
    
    #----------------------
    
    # create the kubelet bootstrapping kubeconfig
    export KUBE_APISERVER="https://lb.5179.top:6443"
    
    # set the cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=./ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    
    # set the client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    
    # set the context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    
    # set the default context
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
    #----------------------
    
    # create the kube-proxy kubeconfig file
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=./ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-credentials kube-proxy \
      --client-certificate=./kube-proxy.pem \
      --client-key=./kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    
    #----------------------
    
    # create the admin kubeconfig file
    
      kubectl config set-cluster kubernetes \
        --certificate-authority=./ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=admin.kubeconfig
    
      kubectl config set-credentials admin \
        --client-certificate=./admin.pem \
        --client-key=./admin-key.pem \
        --embed-certs=true \
        --kubeconfig=admin.kubeconfig
    
      kubectl config set-context default \
        --cluster=kubernetes \
        --user=admin \
        --kubeconfig=admin.kubeconfig
    
      kubectl config use-context default --kubeconfig=admin.kubeconfig
    
    

    Copy the files into place

    ansible k8s -m copy -a "src=./bootstrap.kubeconfig dest=/opt/kubernetes/cfg"
    ansible k8s -m copy -a "src=./kube-proxy.kubeconfig dest=/opt/kubernetes/cfg"
    ansible k8s -m copy -a "src=./token.csv dest=/opt/kubernetes/cfg"
    

    3.4 Install kube-apiserver

    Install on the master nodes

    Here you can use tmux to open three terminal windows and type into them in parallel, or run the commands separately on each of the three machines.

    [root@centos7-a ~]# mkdir k8s-scripts
    [root@centos7-a k8s-scripts]# vim install-apiserver.sh
    [root@centos7-a k8s-scripts]# IP=`ip addr | grep ens33 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'|head -1` && echo $IP
    10.10.10.128
    [root@centos7-a k8s-scripts]# bash install-apiserver.sh $IP https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379
    

    The script is as follows:

    #!/bin/bash
    # MASTER_ADDRESS is this machine's own IP
    MASTER_ADDRESS=${1:-"10.10.10.128"}
    ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
    KUBE_APISERVER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs/kube-apiserver \\
    --etcd-servers=${ETCD_SERVERS} \\
    --bind-address=0.0.0.0 \\
    --secure-port=6443 \\
    --advertise-address=${MASTER_ADDRESS} \\
    --allow-privileged=true \\
    --service-cluster-ip-range=10.96.0.0/12 \\
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
    --authorization-mode=RBAC,Node \\
    --kubelet-https=true \\
    --enable-bootstrap-token-auth=true \\
    --token-auth-file=/opt/kubernetes/cfg/token.csv \\
    --service-node-port-range=30000-50000 \\
    --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
    --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
    --etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
    --etcd-certfile=/opt/kubernetes/ssl/server.pem \\
    --etcd-keyfile=/opt/kubernetes/ssl/server-key.pem \\
    --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
    --requestheader-extra-headers-prefix=X-Remote-Extra- \\
    --requestheader-group-headers=X-Remote-Group \\
    --requestheader-username-headers=X-Remote-User \\
    --proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \\
    --proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \\
    --runtime-config=api/all=true \\
    --audit-log-maxage=30 \\
    --audit-log-maxbackup=3 \\
    --audit-log-maxsize=100 \\
    --audit-log-truncate-enabled=true \\
    --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
    ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl restart kube-apiserver
    

    3.5 Install kube-scheduler

    Install on the master nodes

    Here you can use tmux to open three terminal windows and type into them in parallel, or run it separately on each of the three machines

    [root@centos7-a ~]# cd k8s-scripts
    [root@centos7-a k8s-scripts]# vim install-scheduler.sh
    [root@centos7-a k8s-scripts]# bash install-scheduler.sh 127.0.0.1
    

    The script is as follows

    #!/bin/bash
    
    MASTER_ADDRESS=${1:-"127.0.0.1"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
    KUBE_SCHEDULER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs/kube-scheduler \\
    --master=${MASTER_ADDRESS}:8080 \\
    --address=0.0.0.0 \\
    --leader-elect"
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
    ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl restart kube-scheduler
    

    3.6 Install kube-controller-manager

    Install on the master nodes

    Here you can use tmux to open three terminal windows and type into them in parallel, or run it separately on each of the three machines

    [root@centos7-a ~]# cd k8s-scripts
    [root@centos7-a k8s-scripts]# vim install-controller-manager.sh
    [root@centos7-a k8s-scripts]# bash install-controller-manager.sh 127.0.0.1
    

    The script is as follows

    #!/bin/bash
    
    MASTER_ADDRESS=${1:-"127.0.0.1"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs/kube-controller-manager \\
    --master=${MASTER_ADDRESS}:8080 \\
    --leader-elect=true \\
    --bind-address=0.0.0.0 \\
    --service-cluster-ip-range=10.96.0.0/12 \\
    --cluster-name=kubernetes \\
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
    --experimental-cluster-signing-duration=87600h0m0s \\
    --feature-gates=RotateKubeletServerCertificate=true \\
    --feature-gates=RotateKubeletClientCertificate=true \\
    --allocate-node-cidrs=true \\
    --cluster-cidr=10.244.0.0/16 \\
    --root-ca-file=/opt/kubernetes/ssl/ca.pem"
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
    ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl restart kube-controller-manager
    

    3.7 Check component status

    Run kubectl get cs on any one of the three machines

    [root@centos7-a k8s-scripts]# kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    etcd-1               Healthy   {"health":"true"}
    etcd-2               Healthy   {"health":"true"}
    etcd-0               Healthy   {"health":"true"}
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    

    3.8 Configure automatic kubelet CSR request, approval and renewal

    3.8.1 Nodes create CSR requests automatically

    The node kubelet creates a CSR request automatically when it starts. Bind the kubelet-bootstrap user to the system cluster role; this grants the permission used to issue certificates.

    # Bind kubelet-bootstrap user to system cluster roles.
    kubectl create clusterrolebinding kubelet-bootstrap \
      --clusterrole=system:node-bootstrapper \
      --user=kubelet-bootstrap
    
    

    3.8.2 Certificate approval and automatic renewal

    1) Manual approval script (run after the node kubelets have been started)

    vim k8s-csr-approve.sh
    #!/bin/bash
    CSRS=$(kubectl get csr | awk '{if(NR>1) print $1}')
    for csr in $CSRS; do
    	kubectl certificate approve $csr;
    done
    
    2) Automatic approval and renewal

    Create a ClusterRole that allows the relevant CSR requests to be approved automatically

    [root@centos7-a ~]# mkdir yaml
    [root@centos7-a ~]# cd yaml/
    [root@centos7-a yaml]# vim tls-instructs-csr.yaml
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
    rules:
    - apiGroups: ["certificates.k8s.io"]
      resources: ["certificatesigningrequests/selfnodeserver"]
      verbs: ["create"]
    
    [root@centos7-a yaml]# kubectl apply -f tls-instructs-csr.yaml
    

    Automatically approve the CSR created when the kubelet-bootstrap user first requests a certificate during TLS bootstrapping

    kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap
    

    Automatically approve CSRs from the system:nodes group renewing the certificate the kubelet uses to talk to the apiserver

    kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
    

    Automatically approve CSRs from the system:nodes group renewing the serving certificate for the kubelet's port-10250 API

    kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
    

    After automatic signing, the status looks like this:

    [root@centos7-a kubelet]# kubectl get csr
    NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
    csr-44lt8   4m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
    csr-45njg   0s      kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
    csr-nsbc9   4m9s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
    csr-vk64f   4m9s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
    csr-wftvq   59s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
    

    3.9 Install kube-proxy

    Copy the binary to all nodes

    [root@centos7-nginx ~]# cd k8s-1.18.3/kubernetes/server/bin/
    [root@centos7-nginx bin]# ansible k8s -m copy -a "src=./kube-proxy dest=/opt/kubernetes/bin mode=755"
    

    Here you can use tmux to open five terminal windows and type into them in parallel, or run it separately on each of the five machines

    [root@centos7-a ~]# cd k8s-scripts
    [root@centos7-a k8s-scripts]# vim install-proxy.sh
    [root@centos7-a k8s-scripts]# bash install-proxy.sh ${HOSTNAME}
    

    The script is as follows

    #!/bin/bash
    
    HOSTNAME=${1:-"`hostname`"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-proxy.conf
    KUBE_PROXY_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs/kube-proxy \\
    --config=/opt/kubernetes/cfg/kube-proxy-config.yml"
    EOF
    
    cat <<EOF >/opt/kubernetes/cfg/kube-proxy-config.yml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    address: 0.0.0.0 # listen address
    metricsBindAddress: 0.0.0.0:10249 # metrics address; monitoring systems scrape their data from here
    clientConnection:
      kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig # kubeconfig to read
    hostnameOverride: ${HOSTNAME} # node name registered with k8s; must be unique
    clusterCIDR: 10.244.0.0/16
    mode: iptables # use iptables mode
    
    # to use ipvs mode instead
    #mode: ipvs # ipvs mode
    #ipvs:
    #  scheduler: "rr"
    #iptables:
    #  masqueradeAll: true
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
    ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy
    

    3.10 Install kubelet

    Copy the binary to all nodes

    [root@centos7-nginx ~]# cd k8s-1.18.3/kubernetes/server/bin/
    [root@centos7-nginx bin]# ansible k8s -m copy -a "src=./kubelet dest=/opt/kubernetes/bin mode=755"
    

    Here you can use tmux to open five terminal windows and type into them in parallel, or run it separately on each of the five machines

    [root@centos7-a ~]# cd k8s-scripts
    [root@centos7-a k8s-scripts]# vim install-kubelet.sh
    [root@centos7-a k8s-scripts]# bash install-kubelet.sh 10.96.0.10 ${HOSTNAME} cluster.local
    

    The script is as follows

    #!/bin/bash
    
    DNS_SERVER_IP=${1:-"10.96.0.10"}
    HOSTNAME=${2:-"`hostname`"}
    CLUETERDOMAIN=${3:-"cluster.local"}
    
    cat <<EOF >/opt/kubernetes/cfg/kubelet.conf
    KUBELET_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs/kubelet \\
    --hostname-override=${HOSTNAME} \\
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
    --config=/opt/kubernetes/cfg/kubelet-config.yml \\
    --cert-dir=/opt/kubernetes/ssl \\
    --network-plugin=cni \\
    --cni-conf-dir=/etc/cni/net.d \\
    --cni-bin-dir=/opt/cni/bin \\
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 \\
    --system-reserved=memory=300Mi \\
    --kube-reserved=memory=400Mi"
    EOF
    
    cat <<EOF >/opt/kubernetes/cfg/kubelet-config.yml
    kind: KubeletConfiguration # configuration kind
    apiVersion: kubelet.config.k8s.io/v1beta1 # API version
    address: 0.0.0.0 # listen address
    port: 10250 # kubelet port
    readOnlyPort: 10255 # read-only port exposed by the kubelet
    cgroupDriver: cgroupfs # cgroup driver; must match the driver shown by docker info
    clusterDNS:
      - ${DNS_SERVER_IP}
    clusterDomain: ${CLUETERDOMAIN}  # cluster domain
    failSwapOn: false # do not fail to start when swap is enabled
    
    # authentication
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /opt/kubernetes/ssl/ca.pem
    
    # authorization
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    
    # node resource reservation (eviction thresholds)
    evictionHard:
      imagefs.available: 15%
      memory.available: 300Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s
    
    # image garbage-collection policy
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    
    # certificate rotation
    rotateCertificates: true # rotate the kubelet client certificate
    featureGates:
      RotateKubeletServerCertificate: true
      RotateKubeletClientCertificate: true
    
    maxOpenFiles: 1000000
    maxPods: 110
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kubelet.conf
    ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
    

    3.11 Check the nodes

    After waiting a while, the nodes appear:

    [root@centos7-a ~]# kubectl get nodes
    NAME        STATUS   ROLES    AGE     VERSION
    centos7-a   NotReady    <none>   7m   v1.18.3
    centos7-b   NotReady    <none>   6m   v1.18.3
    centos7-c   NotReady    <none>   6m   v1.18.3
    centos7-d   NotReady    <none>   6m   v1.18.3
    centos7-e   NotReady    <none>   5m   v1.18.3
    

    3.12 Install the network plugin

    3.12.1 Install flannel

    [root@centos7-nginx ~]# mkdir flannel
    [root@centos7-nginx flannel]# wget https://github.com/coreos/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz
    [root@centos7-nginx flannel]# tar xf flannel-v0.12.0-linux-amd64.tar.gz
    [root@centos7-nginx flannel]# ll
    total 43792
    -rwxr-xr-x. 1 lyj  lyj  35253112 Mar 13 08:01 flanneld
    -rw-r--r--. 1 root root  9565406 Jun 16 19:41 flannel-v0.12.0-linux-amd64.tar.gz
    -rwxr-xr-x. 1 lyj  lyj      2139 May 29 2019 mk-docker-opts.sh
    -rw-r--r--. 1 lyj  lyj      4300 May 29 2019 README.md
    [root@centos7-nginx flannel]# vim remove-docker0.sh
    #!/bin/bash
    
    # Copyright 2014 The Kubernetes Authors All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    # Delete default docker bridge, so that docker can start with flannel network.
    
    # exit on any error
    set -e
    
    rc=0
    ip link show docker0 >/dev/null 2>&1 || rc="$?"
    if [[ "$rc" -eq "0" ]]; then
      ip link set dev docker0 down
      ip link delete docker0
    fi
    

    Copy the files to the corresponding location on all hosts

    [root@centos7-nginx flannel]# ansible k8s -m copy -a "src=./flanneld dest=/opt/kubernetes/bin mode=755"
    [root@centos7-nginx flannel]# ansible k8s -m copy -a "src=./mk-docker-opts.sh dest=/opt/kubernetes/bin mode=755"
    [root@centos7-nginx flannel]# ansible k8s -m copy -a "src=./remove-docker0.sh dest=/opt/kubernetes/bin mode=755"
    

    Prepare the startup script

    [root@centos7-nginx scripts]# vim install-flannel.sh
    [root@centos7-nginx scripts]# bash install-flannel.sh 
    [root@centos7-nginx scripts]# ansible k8s -m script -a "./install-flannel.sh https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379"
    

    The script is as follows:

    #!/bin/bash
    ETCD_ENDPOINTS=${1:-'https://127.0.0.1:2379'}
    
    cat >/opt/kubernetes/cfg/flanneld <<EOF
    FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
    -etcd-cafile=/opt/kubernetes/ssl/ca.pem \
    -etcd-certfile=/opt/kubernetes/ssl/server.pem \
    -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
    EOF
    
    cat >/usr/lib/systemd/system/flanneld.service <<EOF
    [Unit]
    Description=Flanneld Overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/kubernetes/cfg/flanneld
    #ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
    ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
    #ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable flanneld
    systemctl restart flanneld
    

    Write the pod network configuration into etcd

    Log in to any master node

    [root@centos7-a ~]# cd k8s-scripts/
    [root@centos7-a k8s-scripts]# vim install-flannel-to-etcd.sh
    [root@centos7-a k8s-scripts]# bash install-flannel-to-etcd.sh https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379 10.244.0.0/16 vxlan
    

    The script is as follows

    #!/bin/bash
    # bash install-flannel-to-etcd.sh https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379 10.244.0.0/16 vxlan
    
    ETCD_ENDPOINTS=${1:-'https://127.0.0.1:2379'}
    NETWORK=${2:-'10.244.0.0/16'}
    NETWORK_MODE=${3:-'vxlan'}
    
    ETCDCTL_API=2 etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints=${ETCD_ENDPOINTS} set /coreos.com/network/config   '{"Network": '\"$NETWORK\"', "Backend": {"Type": '\"${NETWORK_MODE}\"'}}'
    
    #ETCDCTL_API=3 etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints=${ETCD_ENDPOINTS} put /coreos.com/network/config -- '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
    

    Because flannel uses etcd's v2 API, etcdctl is used with the v2 API here (a quick read-back check is shown below).
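
    As a quick sanity check (not part of the original post), the key can be read back with the etcdctl2 alias defined in the etcd section:

    # read the flannel network config back from etcd using the v2 API
    etcdctl2 get /coreos.com/network/config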

    3.12.2 Install cni-plugins

    Download the CNI plugins

    [root@centos7-nginx ~]# mkdir cni
    [root@centos7-nginx ~]# cd cni
    [root@centos7-nginx cni]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
    [root@centos7-nginx cni]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz
    [root@centos7-nginx cni]# ll
    total 106512
    -rwxr-xr-x. 1 root root  4159518 May 14 03:50 bandwidth
    -rwxr-xr-x. 1 root root  4671647 May 14 03:50 bridge
    -rw-r--r--. 1 root root 36878412 Jun 17 20:07 cni-plugins-linux-amd64-v0.8.6.tgz
    -rwxr-xr-x. 1 root root 12124326 May 14 03:50 dhcp
    -rwxr-xr-x. 1 root root  5945760 May 14 03:50 firewall
    -rwxr-xr-x. 1 root root  3069556 May 14 03:50 flannel
    -rwxr-xr-x. 1 root root  4174394 May 14 03:50 host-device
    -rwxr-xr-x. 1 root root  3614480 May 14 03:50 host-local
    -rwxr-xr-x. 1 root root  4314598 May 14 03:50 ipvlan
    -rwxr-xr-x. 1 root root  3209463 May 14 03:50 loopback
    -rwxr-xr-x. 1 root root  4389622 May 14 03:50 macvlan
    -rwxr-xr-x. 1 root root  3939867 May 14 03:50 portmap
    -rwxr-xr-x. 1 root root  4590277 May 14 03:50 ptp
    -rwxr-xr-x. 1 root root  3392826 May 14 03:50 sbr
    -rwxr-xr-x. 1 root root  2885430 May 14 03:50 static
    -rwxr-xr-x. 1 root root  3356587 May 14 03:50 tuning
    -rwxr-xr-x. 1 root root  4314446 May 14 03:50 vlan
    [root@centos7-nginx cni]# cd ..
    [root@centos7-nginx ~]# ansible k8s -m copy -a "src=./cni/ dest=/opt/cni/bin mode=755"
    

    Create the CNI configuration file

    [root@centos7-nginx scripts]# vim install-cni.sh
    [root@centos7-nginx scripts]# ansible k8s -m script -a "./install-cni.sh"
    

    The script is as follows:

    #!/bin/bash
    mkdir /etc/cni/net.d/ -pv
    cat <<EOF > /etc/cni/net.d/10-flannel.conflist
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
    EOF
    

    3.13 Check node status

    [root@centos7-c cfg]# kubectl get nodes
    NAME           STATUS   ROLES    AGE   VERSION
    10.10.10.128   Ready    <none>   1h   v1.18.3
    10.10.10.129   Ready    <none>   1h   v1.18.3
    10.10.10.130   Ready    <none>   1h   v1.18.3
    10.10.10.131   Ready    <none>   1h   v1.18.3
    10.10.10.132   Ready    <none>   1h   v1.18.3
    

    3.14 Install coredns

    Note the version correspondence between k8s and coredns:

    https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md

    Install the DNS add-on

    kubectl apply -f coredns.yaml
    

    The file contents are as follows

    cat coredns.yaml # remember to adjust the clusterIP and the image version (1.6.7)

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health {
              lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/name: "CoreDNS"
    spec:
      # replicas: not specified here:
      # 1. Default is 1.
      # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
      replicas: 2
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          priorityClassName: system-cluster-critical
          serviceAccountName: coredns
          tolerations:
            - key: "CriticalAddonsOnly"
              operator: "Exists"
          nodeSelector:
            kubernetes.io/os: linux
          affinity:
             podAntiAffinity:
               preferredDuringSchedulingIgnoredDuringExecution:
               - weight: 100
                 podAffinityTerm:
                   labelSelector:
                     matchExpressions:
                       - key: k8s-app
                         operator: In
                         values: ["kube-dns"]
                   topologyKey: kubernetes.io/hostname
          containers:
          - name: coredns
            image: coredns/coredns:1.6.7
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /ready
                port: 8181
                scheme: HTTP
          dnsPolicy: Default
          volumes:
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: 10.96.0.10
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP
    

    驗證是否可以正常運行

    # 先創建一個 busybox 容器作為客戶端
    [root@centos7-nginx ~]# kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
    # 解析 kubernetes
    [root@centos7-nginx ~]# kubectl exec -it busybox -- nslookup kubernetes
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes
    Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
    [root@centos7-nginx ~]#
    

    3.15 安裝 metrics-server

    項目地址:https://github.com/kubernetes-sigs/metrics-server

    按照說明執行如下命令即可,需要根據自身集群狀態進行修改,比如,鏡像地址、資源限制…

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
    

    將文件下載到本地

    [root@centos7-nginx scripts]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
    

    修改內容:修改鏡像地址,添加資源限制和相關命令

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: metrics-server
    spec:
      template:
        spec:
          containers:
          - name: metrics-server
            image: registry.cn-beijing.aliyuncs.com/liyongjian5179/metrics-server-amd64:v0.3.6
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                cpu: 400m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 50Mi
            command:
            - /metrics-server
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP
    

    根據您的群集設置,您可能還需要更改傳遞給Metrics Server容器的標誌。最有用的標誌:

    • --kubelet-preferred-address-types -確定連接到特定節點的地址時使用的節點地址類型的優先級(default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
    • --kubelet-insecure-tls - 不要驗證Kubelets提供的服務證書的CA。僅用於測試目的。
    • --requestheader-client-ca-file -指定根證書捆綁包,以驗證傳入請求上的客戶端證書。

    執行該文件

    [root@centos7-nginx scripts]# kubectl apply -f components.yaml
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
    serviceaccount/metrics-server created
    deployment.apps/metrics-server created
    service/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    

    等待一段時間即可查看效果

    [root@centos7-nginx scripts]# kubectl top nodes
    NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    centos7-a   159m         15%    1069Mi          62%
    centos7-b   158m         15%    1101Mi          64%
    centos7-c   168m         16%    1153Mi          67%
    centos7-d   48m          4%     657Mi           38%
    centos7-e   45m          4%     440Mi           50%
    [root@centos7-nginx scripts]# kubectl top pods -A
    NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)
    kube-system   coredns-6fdfb45d56-79jhl          5m           12Mi
    kube-system   coredns-6fdfb45d56-pvnzt          3m           13Mi
    kube-system   metrics-server-5f8fdf59b9-8chz8   1m           11Mi
    kube-system   tiller-deploy-6b75d7dccd-r6sz2    2m           6Mi
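
    除了 kubectl top,也可以直接請求 Metrics API 查看聚合層返回的原始數據(輸出為 JSON),用來確認 metrics-server 工作正常,以下命令僅作示意:

    # 節點維度的原始指標
    kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
    # 某個命名空間下 Pod 的原始指標
    kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"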
    

    完整文件內容如下

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:aggregated-metrics-reader
      labels:
        rbac.authorization.k8s.io/aggregate-to-view: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rules:
    - apiGroups: ["metrics.k8s.io"]
      resources: ["pods", "nodes"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: apiregistration.k8s.io/v1beta1
    kind: APIService
    metadata:
      name: v1beta1.metrics.k8s.io
    spec:
      service:
        name: metrics-server
        namespace: kube-system
      group: metrics.k8s.io
      version: v1beta1
      insecureSkipTLSVerify: true
      groupPriorityMinimum: 100
      versionPriority: 100
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: metrics-server
      namespace: kube-system
      labels:
        k8s-app: metrics-server
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      template:
        metadata:
          name: metrics-server
          labels:
            k8s-app: metrics-server
        spec:
          serviceAccountName: metrics-server
          volumes:
          # mount in tmp so we can safely use from-scratch images and/or read-only containers
          - name: tmp-dir
            emptyDir: {}
          containers:
          - name: metrics-server
            image: registry.cn-beijing.aliyuncs.com/liyongjian5179/metrics-server-amd64:v0.3.6
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                cpu: 400m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 50Mi
            command:
            - /metrics-server
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP
            args:
              - --cert-dir=/tmp
              - --secure-port=4443
            ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
            securityContext:
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
          nodeSelector:
            kubernetes.io/os: linux
            kubernetes.io/arch: "amd64"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: metrics-server
      namespace: kube-system
      labels:
        kubernetes.io/name: "Metrics-server"
        kubernetes.io/cluster-service: "true"
    spec:
      selector:
        k8s-app: metrics-server
      ports:
      - port: 443
        protocol: TCP
        targetPort: main-port
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:metrics-server
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    

    3.16 安裝 ingress

    3.16.1 LB 方案

    採用裸金屬服務器的方案:https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal

    可選NodePort或者LoadBalancer,默認是 NodePort 的方案

    在雲上的環境可以使用現成的 LB的方案:

    比如阿里雲Internal load balancer示例,可以通過註解的方式

    [...]
    metadata:
      annotations:  
        service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
    [...]
    

    裸金屬服務器上可選方案:

    1)純軟件解決方案:MetalLB(https://metallb.universe.tf/)

    該項目發佈於 2017 年底,當前處於 Beta 階段。

    MetalLB支持兩種聲明模式:

    • Layer 2模式:ARP/NDP
    • BGP模式

    Layer 2 模式

    Layer 2模式下,每個service會有集群中的一個node來負責。當服務客戶端發起ARP解析的時候,對應的node會響應該ARP請求,之後,該service的流量都會指向該node(看上去該node上有多個地址)。

    Layer 2模式並不是真正的負載均衡,因為流量都會先經過1個node后,再通過kube-proxy轉給多個endpoints。如果該node故障,MetalLB會遷移 IP到另一個node,並重新發送免費 ARP(gratuitous ARP)告知客戶端遷移。現代操作系統基本都能正確處理免費 ARP,因此failover不會產生太大問題。

    Layer 2模式更為通用,不需要用戶有額外的設備;但由於Layer 2模式使用ARP/ND,地址池分配需要跟客戶端在同一子網,地址分配略為繁瑣。
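
    以 Layer 2 模式為例,假設已按 MetalLB 官方文檔把組件安裝到 metallb-system 命名空間,則只需再補一份地址池配置即可(當時版本的 MetalLB 使用 ConfigMap 方式配置;下面的地址段 10.10.10.200-10.10.10.210 為假設值,僅作示意):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 10.10.10.200-10.10.10.210
    EOF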

    BGP模式

    BGP模式下,集群中所有node都會跟上聯路由器建立BGP連接,並且會告知路由器應該如何轉發service的流量。

    BGP模式是真正的LoadBalancer。

    2)通過NodePort

    使用`NodePort`有一些局限性
    
    • Source IP address

    默認情況下,NodePort 類型的服務會做源地址轉換。這意味着從 NGINX 的角度看,HTTP 請求的源 IP 始終是接收到該請求的那個 Kubernetes 節點的 IP 地址。

    建議在 NodePort 部署方式下保留源 IP 的做法,是把 ingress-nginx 這個 Service 的 spec 中 externalTrafficPolicy 字段的值設置為 Local,如下面的例子:

    kind: Service
    apiVersion: v1
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        # by default the type is elb (classic load balancer).
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
    spec:
      # this setting is to make sure the source IP address is preserved.
      externalTrafficPolicy: Local
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
      - name: http
        port: 80
        targetPort: http
      - name: https
        port: 443
        targetPort: https
    

    注意:此設置有效地丟棄了發送到未運行NGINX Ingress控制器任何實例的Kubernetes節點的數據包。考慮將NGINX Pod分配給特定節點,以控制應調度或不調度NGINX Ingress控制器的節點,可以通過nodeSelector實現。如果有三台機器,但是只有兩個 nginx 的 replica,分別部署在 node-2和 node-3,那麼當請求到 node-1 時,會因為在這台機器上沒有運行 nginx 的 replica 而被丟棄。

    給對應節點打標籤

    [root@centos7-nginx ~]# kubectl label nodes centos7-d lb-type=nginx
    node/centos7-d labeled
    [root@centos7-nginx ~]# kubectl label nodes centos7-e lb-type=nginx
    node/centos7-e labeled
    

    3.16.2 安裝

    本次實驗採用默認的方式:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
    

    如果需要進行修改,先下載到本地

    [root@centos7-nginx yaml]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
    [root@centos7-nginx yaml]# vim deploy.yaml
    [root@centos7-nginx yaml]# kubectl apply -f deploy.yaml
    namespace/ingress-nginx created
    serviceaccount/ingress-nginx created
    configmap/ingress-nginx-controller created
    clusterrole.rbac.authorization.k8s.io/ingress-nginx created
    clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
    role.rbac.authorization.k8s.io/ingress-nginx created
    rolebinding.rbac.authorization.k8s.io/ingress-nginx created
    service/ingress-nginx-controller-admission created
    service/ingress-nginx-controller created
    deployment.apps/ingress-nginx-controller created
    validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
    clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
    clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
    job.batch/ingress-nginx-admission-create created
    job.batch/ingress-nginx-admission-patch created
    role.rbac.authorization.k8s.io/ingress-nginx-admission created
    rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
    serviceaccount/ingress-nginx-admission created
    

    也可以先跑起來,再修改

    [root@centos7-nginx ~]# kubectl edit deploy ingress-nginx-controller -n ingress-nginx
    ...
    spec:
      progressDeadlineSeconds: 600
      replicas: 2  #----> 修改為 2 實現高可用
    ...
      template:
    ...
        spec:
          nodeSelector:  #----> 增加節點選擇器
            lb-type: nginx #----> 匹配標籤
    

    或者使用

    [root@centos7-nginx yaml]# kubectl -n ingress-nginx patch deployment ingress-nginx-controller -p '{"spec": {"template": {"spec": {"nodeSelector": {"lb-type": "nginx"}}}}}'
    deployment.apps/ingress-nginx-controller patched
    
    [root@centos7-nginx yaml]# kubectl -n ingress-nginx scale --replicas=2 deployment/ingress-nginx-controller
    deployment.apps/ingress-nginx-controller scaled
    

    查看 svc 狀態可以看到端口已經分配

    [root@centos7-nginx ~]# kubectl get svc -n ingress-nginx
    NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller             NodePort    10.101.121.120   <none>        80:36459/TCP,443:33171/TCP   43m
    ingress-nginx-controller-admission   ClusterIP   10.111.108.89    <none>        443/TCP                      43m
    

    所有機器上的 NodePort 也都已經開啟。為了防止請求被丟棄,前端 LB 代理的後端節點 IP 建議固定為已經打了 lb-type=nginx 標籤的那幾個節點。

    [root@centos7-a ~]# netstat -ntpl |grep proxy
    tcp        0      0 0.0.0.0:36459           0.0.0.0:*               LISTEN      69169/kube-proxy
    tcp        0      0 0.0.0.0:33171           0.0.0.0:*               LISTEN      69169/kube-proxy
    ...
    [root@centos7-d ~]# netstat -ntpl |grep proxy
    tcp        0      0 0.0.0.0:36459           0.0.0.0:*               LISTEN      84181/kube-proxy
    tcp        0      0 0.0.0.0:33171           0.0.0.0:*               LISTEN      84181/kube-proxy
    [root@centos7-e ~]# netstat -ntpl |grep proxy
    tcp        0      0 0.0.0.0:36459           0.0.0.0:*               LISTEN      74881/kube-proxy
    tcp        0      0 0.0.0.0:33171           0.0.0.0:*               LISTEN      74881/kube-proxy
    

    3.16.3 驗證

    # 創建一個應用
    [root@centos7-nginx ~]# kubectl create deployment nginx-dns --image=nginx
    deployment.apps/nginx-dns created
    # 創建 svc
    [root@centos7-nginx ~]# kubectl expose deployment nginx-dns --port=80
    service/nginx-dns exposed
    [root@centos7-nginx ~]# kubectl get pods
    NAME                         READY   STATUS    RESTARTS   AGE
    busybox                      1/1     Running   29         29h
    nginx-dns-5c6b6b99df-qvnjh   1/1     Running   0          13s
    [root@centos7-nginx ~]# kubectl get svc
    NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   3d5h
    nginx-dns    ClusterIP   10.108.88.75   <none>        80/TCP    10s
    # 創建 ingress 文件並執行
    [root@centos7-nginx yaml]# vim ingress.yaml
    [root@centos7-nginx yaml]# cat ingress.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-nginx-dns
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: ng.5179.top
        http:
          paths:
          - path: /
            backend:
              serviceName: nginx-dns
              servicePort: 80
    [root@centos7-nginx yaml]# kubectl apply -f ingress.yaml
    ingress.extensions/ingress-nginx-dns created
    [root@centos7-nginx yaml]# kubectl get ingress
    NAME                CLASS    HOSTS         ADDRESS   PORTS   AGE
    ingress-nginx-dns   <none>   ng.5179.top             80      9s
    

    先將日誌刷起來

    [root@centos7-nginx yaml]# kubectl get pods
    NAME                         READY   STATUS    RESTARTS   AGE
    busybox                      1/1     Running   30         30h
    nginx-dns-5c6b6b99df-qvnjh   1/1     Running   0          28m
    [root@centos7-nginx yaml]# kubectl logs -f nginx-dns-5c6b6b99df-qvnjh
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
    10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
    /docker-entrypoint.sh: Configuration complete; ready for start up
    10.244.3.123 - - [20/Jun/2020:12:58:20 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "10.244.4.0"
    

    後端 Pod 中 nginx 的日誌格式為

        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    

    另起一個終端進行訪問

    [root@centos7-a ~]# curl -H 'Host:ng.5179.top' http://10.10.10.132:36459 -I
    HTTP/1.1 200 OK
    Server: nginx/1.19.0
    Date: Sat, 20 Jun 2020 12:58:27 GMT
    Content-Type: text/html
    Content-Length: 612
    Connection: keep-alive
    Vary: Accept-Encoding
    Last-Modified: Tue, 26 May 2020 15:00:20 GMT
    ETag: "5ecd2f04-264"
    Accept-Ranges: bytes
    

    可以看到日誌10.244.3.123 - - [20/Jun/2020:12:58:20 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "10.244.4.0"

    然後我們可以配置前端的 LB

    [root@centos7-nginx conf.d]# vim ng.conf
    [root@centos7-nginx conf.d]# cat ng.conf
    upstream nginx-dns{
            ip_hash;
            server 10.10.10.131:36459 ;
            server 10.10.10.132:36459;
       }
    
    server {
        listen       80;
        server_name  ng.5179.top;
    
        #access_log  logs/host.access.log  main;
    
        location / {
            root   html;
            proxy_pass http://nginx-dns;
    	    proxy_set_header Host $host;
    	    proxy_set_header X-Forwarded-For $remote_addr;
            index  index.html index.htm;
        }
    }
    # 添加內部解析
    [root@centos7-nginx conf.d]# vim /etc/hosts
    [root@centos7-nginx conf.d]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.10.10.127 centos7-nginx lb.5179.top ng.5179.top
    10.10.10.128 centos7-a
    10.10.10.129 centos7-b
    10.10.10.130 centos7-c
    10.10.10.131 centos7-d
    10.10.10.132 centos7-e
    # 重啟 nginx
    [root@centos7-nginx conf.d]# nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    [root@centos7-nginx conf.d]# nginx -s reload
    

    訪問該域名

    [root@centos7-nginx conf.d]# curl http://ng.5179.top -I
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Sat, 20 Jun 2020 13:07:38 GMT
    Content-Type: text/html
    Content-Length: 612
    Connection: keep-alive
    Vary: Accept-Encoding
    Last-Modified: Tue, 26 May 2020 15:00:20 GMT
    ETag: "5ecd2f04-264"
    Accept-Ranges: bytes
    

    後端也能正常收到日誌

    10.244.4.17 - - [20/Jun/2020:13:22:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "10.244.4.1"

    $remote_addr —> 10.244.4.17:為某一台 ingress-nginx 的 nginx_IP

    $http_x_forwarded_for —> 10.244.4.1:為節點上的 cni0 網卡 IP
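
    如果想進一步確認 10.244.4.1 確實是節點上 cni0 網橋的地址,可以登錄承載 ingress-nginx-controller 的節點(見下方 kubectl get pods -o wide 的結果)查看,示例如下:

    # 預期能看到 inet 10.244.4.1/24,即 flannel 在該節點上分配的 cni0 網段
    [root@centos7-e ~]# ip addr show cni0 | grep "inet "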

    [root@centos7-nginx conf.d]# kubectl get pods -n ingress-nginx -o wide
    NAME                                        READY   STATUS      RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
    ingress-nginx-admission-create-tqp5w        0/1     Completed   0          112m   10.244.3.119   centos7-d   <none>           <none>
    ingress-nginx-admission-patch-78jmf         0/1     Completed   0          112m   10.244.3.120   centos7-d   <none>           <none>
    ingress-nginx-controller-5946fd499c-6cx4x   1/1     Running     0          11m    10.244.3.125   centos7-d   <none>           <none>
    ingress-nginx-controller-5946fd499c-khjdn   1/1     Running     0          11m    10.244.4.17    centos7-e   <none>           <none>
    

    修改 ingress-nginx-controller 的 svc

    [root@centos7-nginx conf.d]# kubectl get svc -n ingress-nginx
    NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller             NodePort    10.101.121.120   <none>        80:36459/TCP,443:33171/TCP   97m
    ingress-nginx-controller-admission   ClusterIP   10.111.108.89    <none>        443/TCP                      97m
    [root@centos7-nginx conf.d]# kubectl edit svc ingress-nginx-controller -n ingress-nginx
    ...
    spec:
      clusterIP: 10.101.121.120
      externalTrafficPolicy: Cluster  #---> 修改為 Local
    ...  
      service/ingress-nginx-controller edited
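
    如果不想進入交互式編輯,也可以直接用 kubectl patch 完成同樣的修改,效果與上面的 edit 相同:

    kubectl -n ingress-nginx patch svc ingress-nginx-controller \
      -p '{"spec":{"externalTrafficPolicy":"Local"}}'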
    
    
    

    再次訪問

    [root@centos7-nginx conf.d]# curl http://ng.5179.top -I
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Sat, 20 Jun 2020 13:28:05 GMT
    Content-Type: text/html
    Content-Length: 612
    Connection: keep-alive
    Vary: Accept-Encoding
    Last-Modified: Tue, 26 May 2020 15:00:20 GMT
    ETag: "5ecd2f04-264"
    Accept-Ranges: bytes
    # 查看本機網卡 IP
    [root@centos7-nginx conf.d]# ip addr show ens33
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:38:d4:e3 brd ff:ff:ff:ff:ff:ff
        inet 10.10.10.127/24 brd 10.10.10.255 scope global ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe38:d4e3/64 scope link
           valid_lft forever preferred_lft forever
    

    nginx的日誌($http_x_forwarded_for)已經記錄了客戶端的真實IP

    10.244.4.17 - - [20/Jun/2020:13:28:05 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "10.10.10.127"
    

    3.16.4 運行多個 ingress

    注意:如果要運行多個 ingress controller,例如一個服務於公共流量,一個服務於「內部」流量,就必須把 --ingress-class 選項改為該控制器在集群內唯一的值,如下所示:

    spec:
      template:
         spec:
           containers:
             - name: nginx-ingress-internal-controller
               args:
                 - /nginx-ingress-controller
                 - '--election-id=ingress-controller-leader-internal'
                 - '--ingress-class=nginx-internal'
                 - '--configmap=ingress/nginx-ingress-internal-controller'
    

    需要單獨創建 ConfigMap、Service、Deployment 這幾個對象的文件,其他資源與默認安裝的 ingress 共用即可

    ---
    # Source: ingress-nginx/templates/controller-configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-2.4.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.33.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-internal-controller   # 修改名字
      namespace: ingress-nginx
    data:
    ---
    # Source: ingress-nginx/templates/controller-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-2.4.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.33.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-internal-controller   # 修改名字
      namespace: ingress-nginx
    spec:
      type: NodePort
      ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: http
        - name: https
          port: 443
          protocol: TCP
          targetPort: https
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    ---
    # Source: ingress-nginx/templates/controller-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-2.4.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.33.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-internal-controller      # 修改名字
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/instance: ingress-nginx
          app.kubernetes.io/component: controller
      revisionHistoryLimit: 10
      minReadySeconds: 0
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/component: controller
        spec:
          dnsPolicy: ClusterFirst
          containers:
            - name: controller
              #image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0
              image: registry.cn-beijing.aliyuncs.com/liyongjian5179/nginx-ingress-controller:0.33.0
              imagePullPolicy: IfNotPresent
              lifecycle:
                preStop:
                  exec:
                    command:
                      - /wait-shutdown
              args:
                - /nginx-ingress-controller
                - --election-id=ingress-controller-leader-internal    
                - --ingress-class=nginx-internal
                - --configmap=ingress-nginx/ingress-nginx-internal-controller
                - --validating-webhook=:8443
                - --validating-webhook-certificate=/usr/local/certificates/cert
                - --validating-webhook-key=/usr/local/certificates/key
              securityContext:
                capabilities:
                  drop:
                    - ALL
                  add:
                    - NET_BIND_SERVICE
                runAsUser: 101
                allowPrivilegeEscalation: true
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                timeoutSeconds: 1
                successThreshold: 1
                failureThreshold: 3
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                timeoutSeconds: 1
                successThreshold: 1
                failureThreshold: 3
              ports:
                - name: http
                  containerPort: 80
                  protocol: TCP
                - name: https
                  containerPort: 443
                  protocol: TCP
                - name: webhook
                  containerPort: 8443
                  protocol: TCP
              volumeMounts:
                - name: webhook-cert
                  mountPath: /usr/local/certificates/
                  readOnly: true
              resources:
                requests:
                  cpu: 100m
                  memory: 90Mi
          serviceAccountName: ingress-nginx
          terminationGracePeriodSeconds: 300
          volumes:
            - name: webhook-cert
              secret:
                secretName: ingress-nginx-admission
    

    執行上面的文件即可。此外,還需要在原配置文件的 Role 中添加一行規則:

    # Source: ingress-nginx/templates/controller-role.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
    ...
      name: ingress-nginx
      namespace: ingress-nginx
    rules:
    ...
      - apiGroups:
          - ''
        resources:
          - configmaps
        resourceNames:
          # Defaults to "<election-id>-<ingress-class>"
          # Here: "<ingress-controller-leader>-<nginx>"
          # This has to be adapted if you change either parameter
          # when launching the nginx-ingress-controller.
          - ingress-controller-leader-nginx
          - ingress-controller-leader-internal-nginx-internal #此處要增加一行,如果不加,會出現下面的報錯
        verbs:
          - get
          - update
    

    如上所述,如果不添加這一行,ingress-nginx-controller 會出現如下報錯信息:

    E0621 08:25:07.531202       6 leaderelection.go:356] Failed to update lock: configmaps "ingress-controller-leader-internal-nginx-internal" is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx"
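
    把上面那行 resourceNames 補進 Role 並重新 apply 之後,可以用 kubectl auth can-i 模擬 ingress-nginx 這個 ServiceAccount 來確認權限是否已經生效(返回 yes 即正常),以下命令僅作示意:

    kubectl auth can-i update configmaps/ingress-controller-leader-internal-nginx-internal \
      -n ingress-nginx \
      --as=system:serviceaccount:ingress-nginx:ingress-nginx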
    

    然後修改 ingress 文件

    
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx
      annotations:
        # 注意這裏要設置為您前面配置的 INGRESS_CLASS,比如:nginx-internal
        kubernetes.io/ingress.class: "<YOUR_INGRESS_CLASS>"
    

    示例:

    [root@centos7-nginx yaml]# kubectl apply -f ingress-internal.yaml
    ingress.extensions/ingress-nginx-dns-internal created
    [root@centos7-nginx yaml]# cat ingress-internal.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-nginx-dns-internal
    #  namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx-internal"
    spec:
      rules:
      - host: ng-inter.5179.top
        http:
          paths:
          - path: /
            backend:
              serviceName: nginx-dns
              servicePort: 80
    [root@centos7-nginx yaml]# kubectl get ingress
    NAME                         CLASS    HOSTS               ADDRESS                     PORTS   AGE
    ingress-nginx-dns            <none>   ng.5179.top         10.10.10.131                80      47m
    ingress-nginx-dns-internal   <none>   ng-inter.5179.top   10.10.10.131,10.10.10.132   80      32s
    

    在 nginx 的配置文件中增加

    [root@centos7-nginx yaml]# cat /etc/nginx/conf.d/ng.conf
    upstream nginx-dns{
            ip_hash;
            server 10.10.10.131:31511;
            server 10.10.10.132:31511;
       }
    upstream nginx-dns-inter{
            ip_hash;
            server 10.10.10.131:40377;
            server 10.10.10.132:40377;
       }
    
    server {
        listen       80;
        server_name  ng.5179.top;
    
        #access_log  logs/host.access.log  main;
    
        location / {
            root   html;
            proxy_pass http://nginx-dns;
    	proxy_set_header Host $host;
    	proxy_set_header X-Forwarded-For $remote_addr;
            index  index.html index.htm;
        }
    }
    server {
        listen       80;
        server_name  ng-inter.5179.top;
    
        #access_log  logs/host.access.log  main;
    
        location / {
            root   html;
            proxy_pass http://nginx-dns-inter/;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            index  index.html index.htm;
        }
    }
    

    重啟后添加本地解析,然後訪問即可
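
    這裏的「重啟、添加本地解析」對應的命令大致如下(ng-inter.5179.top 同樣指向本機 LB 的 10.10.10.127,僅作示意):

    # 檢查並重載前端 Nginx 配置
    nginx -t && nginx -s reload
    # 為新域名補一條本地解析(也可以直接編輯 /etc/hosts)
    echo "10.10.10.127 ng-inter.5179.top" >> /etc/hosts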

    [root@centos7-nginx yaml]# curl http://ng-inter.5179.top -I
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Sun, 21 Jun 2020 09:07:12 GMT
    Content-Type: text/html
    Content-Length: 612
    Connection: keep-alive
    Vary: Accept-Encoding
    Last-Modified: Tue, 26 May 2020 15:00:20 GMT
    ETag: "5ecd2f04-264"
    Accept-Ranges: bytes
    
    [root@centos7-nginx yaml]# curl http://ng.5179.top -I
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Sun, 21 Jun 2020 09:07:17 GMT
    Content-Type: text/html
    Content-Length: 612
    Connection: keep-alive
    Vary: Accept-Encoding
    Last-Modified: Tue, 26 May 2020 15:00:20 GMT
    ETag: "5ecd2f04-264"
    Accept-Ranges: bytes
    

    3.17 安裝 prometheus-operator

    3.17.1 下載安裝

    使用 prometheus-operator 進行安裝.

    地址如下https://github.com/coreos/kube-prometheus

    根據 Readme.md 進行版本的選擇,本次 k8s 安裝的是 1.18 ,所以 prometheus 選的分支為 release-0.5

    git clone https://github.com/coreos/kube-prometheus.git -b release-0.5
    cd kube-prometheus
    # Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
    kubectl create -f manifests/setup
    until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
    kubectl create -f manifests/
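
    apply 完成後,可以先確認 monitoring 命名空間下各組件是否都已跑起來(以下命令僅作確認用):

    kubectl get pods -n monitoring
    kubectl get servicemonitors -n monitoring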
    

    為了遠程訪問方便,創建了 ingress

    [root@centos7-a ingress]# cat ingress-grafana.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-grafana
      namespace: monitoring
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: grafana.5179.top
        http:
          paths:
          - path: /
            backend:
              serviceName: grafana
              servicePort: 3000
              
    [root@centos7-a ingress]# cat ingress-prometheus.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-prometheus
      namespace: monitoring
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: prometheus.5179.top
        http:
          paths:
          - path: /
            backend:
              serviceName: prometheus-k8s
              servicePort: 9090
    
    [root@centos7-a ingress]# cat ingress-alertmanager.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-alertmanager
      namespace: monitoring
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: alertmanager.5179.top
        http:
          paths:
          - path: /
            backend:
              serviceName: alertmanager-main
              servicePort: 9093
    

    查看 ingress

    [root@centos7-a ingress]# kubectl get ingress -A
    NAMESPACE    NAME                   CLASS    HOSTS                   ADDRESS         PORTS   AGE
    monitoring   ingress-alertmanager   <none>   alertmanager.5179.top   10.10.10.129   80      3m6s
    monitoring   ingress-grafana        <none>   grafana.5179.top        10.10.10.129   80      3m6s
    monitoring   ingress-prometheus     <none>   prometheus.5179.top     10.10.10.129   80      3m6s
    

    3.17.2 遇到的坑

    1) kube-scheduler 和 kube-controller-manager 的 target 為 0/0

    二進制部署的 k8s 管理組件(以及新版本 kubeadm 部署的集群)都會遇到這個問題:在 Prometheus 的 Status → Targets 頁面上,kube-controller-manager 和 kube-scheduler 的 target 顯示為 0/0。原因是 ServiceMonitor 是根據 label 去選取 Service 的,查看對應的 ServiceMonitor 可以看到它選取的命名空間範圍是 kube-system,而該命名空間下並沒有帶相應 label 的 Service 和 Endpoints。
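
    排查時可以先把對應 ServiceMonitor 的 selector 與命名空間範圍打印出來對照(kube-prometheus 默認把它們創建在 monitoring 命名空間,具體名稱以實際安裝為準):

    kubectl -n monitoring get servicemonitors
    kubectl -n monitoring get servicemonitor kube-scheduler -o yaml | grep -A 10 "selector:"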

    解決辦法:

    查看 Endpoints,可以發現兩者的 ENDPOINTS 均為 <none>:

    [root@centos7-a kube-prometheus]# kubectl get endpoints -n kube-system
    NAME                      ENDPOINTS                                                               AGE
    kube-controller-manager   <none>                                                                  7m35s
    kube-dns                  10.244.43.2:53,10.244.62.2:53,10.244.43.2:9153 + 3 more...              4m10s
    kube-scheduler            <none>                                                                  7m31s
    kubelet                   10.10.10.129:4194,10.10.10.132:4194,10.10.10.128:4194 + 12 more...   22s
    

    查看兩者的端口

    [root@centos7-a kube-prometheus]# ss -tnlp| grep scheduler
    LISTEN     0      32768       :::10251                   :::*                   users:(("kube-scheduler",pid=60128,fd=5))
    LISTEN     0      32768       :::10259                   :::*                   users:(("kube-scheduler",pid=60128,fd=7))
    [root@centos7-a kube-prometheus]# ss -tnlp| grep contro
    LISTEN     0      32768       :::10252                   :::*                   users:(("kube-controller",pid=59695,fd=6))
    LISTEN     0      32768       :::10257                   :::*                   users:(("kube-controller",pid=59695,fd=7))
    

    創建文件並執行

    [root@centos7-a yaml]# cat schedulerandcontroller-ep-svc.yaml
    # cat kube-scheduer-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: kube-scheduler
      name: kube-scheduler
      namespace: kube-system
    spec:
      clusterIP: None
      ports:
      - name: https-metrics
        port: 10259
        protocol: TCP
        targetPort: 10259
      - name: http-metrics
        port: 10251
        protocol: TCP
        targetPort: 10251
      type: ClusterIP
    
    ---
    # cat kube-controller-manager-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: kube-controller-manager
      name: kube-controller-manager
      namespace: kube-system
    spec:
      clusterIP: None
      ports:
      - name: https-metrics
        port: 10257
        protocol: TCP
        targetPort: 10257
      - name: http-metrics
        port: 10252
        protocol: TCP
        targetPort: 10252
      type: ClusterIP
    
    ---
    # cat ep-controller-manager.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      labels:
        k8s-app: kube-controller-manager
      name: kube-controller-manager
      namespace: kube-system
      annotations:
        prometheus.io/scrape: 'true'
    subsets:
    - addresses:
      - ip: 10.10.10.128
        targetRef:
          kind: Node
          name: 10.10.10.128
      - ip: 10.10.10.129
        targetRef:
          kind: Node
          name: 10.10.10.129
      - ip: 10.10.10.130
        targetRef:
          kind: Node
          name: 10.10.10.130
      ports:
      - name: http-metrics
        port: 10252
        protocol: TCP
      - name: https-metrics
        port: 10257
        protocol: TCP
    ---
    # cat ep-scheduler.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      labels:
        k8s-app: kube-scheduler
      name: kube-scheduler
      namespace: kube-system
      annotations:
        prometheus.io/scrape: 'true'
    subsets:
    - addresses:
      - ip: 10.10.10.128
        targetRef:
          kind: Node
          name: 10.10.10.128
      - ip: 10.10.10.129
        targetRef:
          kind: Node
          name: 10.10.10.129
      - ip: 10.10.10.130
        targetRef:
          kind: Node
          name: 10.10.10.130
      ports:
      - name: http-metrics
        port: 10251
        protocol: TCP
      - name: https-metrics
        port: 10259
        protocol: TCP
    
    2) node-exporter 的 target 顯示 (3/5)

    有兩個 Node 的 target 有問題;同時執行 kubectl top node 也能發現異常,這兩個節點的數據拿不到:

    [root@centos7--a kube-prometheus]# kubectl top nodes
    NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    10.10.10.129   110m         5%     1360Mi          36%
    10.10.10.128   114m         5%     1569Mi          42%
    10.10.10.130   101m         5%     1342Mi          36%
    10.10.10.132   <unknown>                           <unknown>               <unknown>               <unknown>
    10.10.10.131    <unknown>                           <unknown>               <unknown>               <unknown>
    

    解決辦法:

    查看問題節點所對應的 Pod

    [root@centos7--a kube-prometheus]# kubectl get pods  -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'  -n monitoring |grep node
    node-exporter-2fqt5                    10.10.10.130
    node-exporter-fxqxb                    10.10.10.129
    node-exporter-pbq28                    10.10.10.132
    node-exporter-tvw5j                    10.10.10.128
    node-exporter-znp6k                    10.10.10.131
    

    查看日誌

    [root@centos7--a kube-prometheus]# kubectl logs -f node-exporter-znp6k -n monitoring -c kube-rbac-proxy
    I0627 02:58:01.947861   53400 main.go:213] Generating self signed cert as no cert is provided
    I0627 02:58:44.246733   53400 main.go:243] Starting TCP socket on [10.10.10.131]:9100
    I0627 02:58:44.346251   53400 main.go:250] Listening securely on [10.10.10.131]:9100
    E0627 02:59:27.246742   53400 webhook.go:106] Failed to make webhook authenticator request: Post https://10.96.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.96.0.1:443: i/o timeout
    E0627 02:59:27.247585   53400 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.96.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.96.0.1:443: i/o timeout
    E0627 02:59:42.160199   53400 webhook.go:106] Failed to make webhook authenticator request: Post https://10.96.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.96.0.1:443: i/o timeout
    

    一直在報連接 10.96.0.1:443(kubernetes 這個 Service 的 ClusterIP,即 apiserver)超時,像是回包的時候出了問題,導致連接建立不起來。

    兩種解決辦法:

    1. 在問題節點加入一條防火牆命令(不推薦):

       iptables -t nat -I POSTROUTING -s 10.96.0.0/12 -j MASQUERADE

    2. 修改 kube-proxy 配置文件,把 clusterCIDR 改成正確的 pod 網段(推薦)。

    再次查看已經正常了

    [root@centos7--a kube-prometheus]# kubectl top nodes
    NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    10.10.10.129   109m         5%     1362Mi          36%
    10.10.10.132   65m          3%     1118Mi          30%
    10.10.10.128   175m         8%     1581Mi          42%
    10.10.10.130   118m         5%     1344Mi          36%
    10.10.10.131    60m          3%     829Mi           22%
    

    實際排查后發現,是 kube-proxy 的 clusterCIDR寫成了 service 的網段。

    clusterCIDR:kube-proxy 根據 --cluster-cidr 判斷集群內部和外部流量,指定 --cluster-cidr 或 --masquerade-all 選項後,kube-proxy 才會對訪問 Service IP 的請求做 SNAT;
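
    對應到本環境,就是把 kube-proxy 配置裏的 clusterCIDR 從 service 網段(10.96.0.0/12)改回 pod 網段(10.244.0.0/16)後重啟 kube-proxy。下面的配置文件路徑為假設值,請以實際部署路徑為準:

    # 查看當前配置(路徑僅為示例)
    grep -i clusterCIDR /opt/kubernetes/cfg/kube-proxy-config.yml
    # 錯誤寫法:clusterCIDR: 10.96.0.0/12   (service 網段)
    # 正確寫法:clusterCIDR: 10.244.0.0/16  (pod 網段)
    sed -i 's#clusterCIDR: 10.96.0.0/12#clusterCIDR: 10.244.0.0/16#' /opt/kubernetes/cfg/kube-proxy-config.yml
    systemctl restart kube-proxy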

    3.17.3 監控 etcd

    除了 Kubernetes 集群中的一些資源對象、節點以及組件需要監控,有的時候我們可能還需要根據實際的業務需求去添加自定義的監控項,添加一個自定義監控的步驟如下:

    • 第一步建立一個 ServiceMonitor 對象,用於 Prometheus 添加監控項
    • 第二步為 ServiceMonitor 對象關聯 metrics 數據接口的一個 Service 對象
    • 第三步確保 Service 對象可以正確獲取到 metrics 數據

    對於 etcd 集群一般情況下,為了安全都會開啟 https 證書認證的方式,所以要想讓 Prometheus 訪問到 etcd 集群的監控數據,就需要提供相應的證書校驗。

    首先我們將需要使用到的證書通過 secret 對象保存到集群中去:(在 etcd 運行的節點)

    [root@centos7--a ssl]# pwd
    /opt/etcd/ssl
    [root@centos7--a ssl]# kubectl -n monitoring create secret generic etcd-certs --from-file=/opt/kubernetes/ssl/server.pem --from-file=/opt/kubernetes/ssl/server-key.pem --from-file=/opt/kubernetes/ssl/ca.pem
    secret/etcd-certs created
    

    Prometheus配置文件,將上面創建的 etcd-certs 對象配置到 prometheus 資源對象中

    [root@centos7--a manifests]# pwd
    /root/kube-prometheus/manifests
    [root@centos7--a manifests]# vim prometheus-prometheus.yaml
      replicas: 2
      secrets:
        - etcd-certs
    [root@centos7--a manifests]# kubectl apply -f prometheus-prometheus.yaml
    prometheus.monitoring.coreos.com/k8s configured
    

    進入 pod 內查看證書是否存在

    #等到pod重啟后,進入pod查看是否可以看到證書
    [root@centos7--a kube-prometheus]# kubectl exec -it prometheus-k8s-0  -n monitoring  -- sh
    Defaulting container name to prometheus.
    Use 'kubectl describe pod/prometheus-k8s-0 -n monitoring' to see all of the containers in this pod.
    /prometheus $ ls /etc/prometheus/secrets/
    etcd-certs
    /prometheus $ ls /etc/prometheus/secrets/ -l
    total 0
    drwxrwsrwt    3 root     2000           140 Jun 27 04:59 etcd-certs
    /prometheus $ ls /etc/prometheus/secrets/etcd-certs/ -l
    total 0
    lrwxrwxrwx    1 root     root            13 Jun 27 04:59 ca.pem -> ..data/ca.pem
    lrwxrwxrwx    1 root     root            21 Jun 27 04:59 server-key.pem -> ..data/server-key.pem
    lrwxrwxrwx    1 root     root            17 Jun 27 04:59 server.pem -> ..data/server.pem
    

    創建 ServiceMonitor

    現在 Prometheus 訪問 etcd 集群的證書已經準備好了,接下來創建 ServiceMonitor 對象即可(prometheus-serviceMonitorEtcd.yaml)

    $ vim prometheus-serviceMonitorEtcd.yaml
    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: etcd-k8s
      namespace: monitoring
      labels:
        k8s-app: etcd-k8s
    spec:
      jobLabel: k8s-app
      endpoints:
      - port: port
        interval: 30s
        scheme: https
        tlsConfig:
          caFile: /etc/prometheus/secrets/etcd-certs/ca.pem
          certFile: /etc/prometheus/secrets/etcd-certs/server.pem
          keyFile: /etc/prometheus/secrets/etcd-certs/server-key.pem
          insecureSkipVerify: true
      selector:
        matchLabels:
          k8s-app: etcd
      namespaceSelector:
        matchNames:
        - kube-system
    
    $ kubectl apply -f prometheus-serviceMonitorEtcd.yaml    
    
    

    上面我們在 monitoring 命名空間下面創建了名為 etcd-k8s 的 ServiceMonitor 對象,匹配 kube-system 這個命名空間下面具有 k8s-app=etcd 這個 label 標籤的 Service;jobLabel 表示用於檢索 job 任務名稱的標籤。和前面不太一樣的地方是 endpoints 屬性的寫法:這裏配置了訪問 etcd 的相關證書,endpoints 屬性下面還可以配置很多抓取參數,比如 relabel、proxyUrl;tlsConfig 表示用於配置抓取監控數據端點的 tls 認證,由於證書的 serverName 和 etcd 中簽發的可能不匹配,所以加上了 insecureSkipVerify=true

    創建 Service

    ServiceMonitor 創建完成了,但是現在還沒有關聯的對應的 Service 對象,所以需要我們去手動創建一個 Service 對象(prometheus-etcdService.yaml):

    $ vim prometheus-etcdService.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: etcd-k8s
      namespace: kube-system
      labels:
        k8s-app: etcd
    spec:
      type: ClusterIP
      clusterIP: None
      ports:
      - name: port
        port: 2379
        protocol: TCP
    
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: etcd-k8s
      namespace: kube-system
      labels:
        k8s-app: etcd
    subsets:
    - addresses:
      - ip: 10.10.10.128
      - ip: 10.10.10.129
      - ip: 10.10.10.130    
      ports:
      - name: port
        port: 2379
        protocol: TCP
    
    $ kubectl apply -f prometheus-etcdService.yaml
    
    

    等待一會兒就可以看到 target 已經包含了
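
    如果 etcd 的 target 長時間不是 UP,可以先在任一 etcd 節點上帶證書直接請求 /metrics,確認指標端點本身可用(證書路徑與節點 IP 以實際環境為準,僅作示意):

    curl -s --cacert /opt/etcd/ssl/ca.pem \
         --cert /opt/etcd/ssl/server.pem \
         --key /opt/etcd/ssl/server-key.pem \
         https://10.10.10.128:2379/metrics | head -n 5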

    數據採集到后,可以在 grafana 中導入編號為3070的 dashboard,獲取到 etcd 的監控圖表。

    3.18 為遠端 kubectl 準備管理員證書

    [root@centos7-nginx scripts]# cd ssl/
    [root@centos7-nginx ssl]# cat admin.kubeconfig > ~/.kube/config
    [root@centos7-nginx ssl]# vim ~/.kube/config
    [root@centos7-nginx ssl]# kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-2               Healthy   {"health":"true"}
    etcd-0               Healthy   {"health":"true"}
    etcd-1               Healthy   {"health":"true"}
    

    3.19 給節點打上角色標籤

    默認裝完在角色這列顯示 <none>

    [root@centos7-nginx ~]# kubectl get nodes
    NAME        STATUS   ROLES    AGE   VERSION
    centos7-a   Ready    <none>   32h   v1.18.3
    centos7-b   Ready    <none>   32h   v1.18.3
    centos7-c   Ready    <none>   32h   v1.18.3
    centos7-d   Ready    <none>   21m   v1.18.3
    centos7-e   Ready    <none>   20m   v1.18.3
    

    執行如下命令即可:

    [root@centos7-nginx ~]# kubectl label nodes centos7-a node-role.kubernetes.io/master=
    node/centos7-a labeled
    [root@centos7-nginx ~]# kubectl label nodes centos7-b node-role.kubernetes.io/master=
    node/centos7-b labeled
    [root@centos7-nginx ~]# kubectl label nodes centos7-c node-role.kubernetes.io/master=
    node/centos7-c labeled
    [root@centos7-nginx ~]# kubectl label nodes centos7-d node-role.kubernetes.io/node=
    node/centos7-d labeled
    [root@centos7-nginx ~]# kubectl label nodes centos7-e node-role.kubernetes.io/node=
    node/centos7-e labeled
    

    再次查看

    [root@centos7-nginx ~]# kubectl get nodes
    NAME        STATUS   ROLES    AGE   VERSION
    centos7-a   Ready    master   32h   v1.18.3
    centos7-b   Ready    master   32h   v1.18.3
    centos7-c   Ready    master   32h   v1.18.3
    centos7-d   Ready    node     23m   v1.18.3
    centos7-e   Ready    node     22m   v1.18.3
    

    3.20 測試在節點上執行維護工作

    驅逐並使節點不可調度

    [root@centos7-nginx scripts]# kubectl drain centos7-d --ignore-daemonsets=true --delete-local-data=true --force=true
    node/centos7-d cordoned
    evicting pod kube-system/coredns-6fdfb45d56-pvnzt
    pod/coredns-6fdfb45d56-pvnzt evicted
    node/centos7-d evicted
    [root@centos7-nginx scripts]# kubectl get nodes
    NAME        STATUS                     ROLES    AGE   VERSION
    centos7-a   Ready                      master   47h   v1.18.3
    centos7-b   Ready                      master   47h   v1.18.3
    centos7-c   Ready                      master   47h   v1.18.3
    centos7-d   Ready,SchedulingDisabled   node     15h   v1.18.3
    centos7-e   Ready                      node     15h   v1.18.3
    

    重新使節點可調度:

    [root@centos7-nginx scripts]# kubectl uncordon centos7-d
    node/centos7-d uncordoned
    [root@centos7-nginx scripts]# kubectl get nodes
    NAME        STATUS   ROLES    AGE   VERSION
    centos7-a   Ready    master   47h   v1.18.3
    centos7-b   Ready    master   47h   v1.18.3
    centos7-c   Ready    master   47h   v1.18.3
    centos7-d   Ready    node     15h   v1.18.3
    centos7-e   Ready    node     15h   v1.18.3
    

    3.21 使 master 節點不運行pod

    master 節點最好不要當作 node 使用,也不建議在上面調度業務 Pod。

    在該集群中,需要給 master 節點打上 node-role.kubernetes.io/master=:NoSchedule 這個污點(taint)才能實現:

    [root@centos7-nginx scripts]# kubectl taint nodes centos7-a  node-role.kubernetes.io/master=:NoSchedule
    node/centos7-a tainted
    [root@centos7-nginx scripts]# kubectl taint nodes centos7-b  node-role.kubernetes.io/master=:NoSchedule
    node/centos7-b tainted
    [root@centos7-nginx scripts]# kubectl taint nodes centos7-c  node-role.kubernetes.io/master=:NoSchedule
    node/centos7-c tainted
    

    部署一個 nginx 的 deploy 進行驗證

    # 創建一個 deployment
    [root@centos7-nginx scripts]# kubectl create deployment nginx-dns --image=nginx
    deployment.apps/nginx-dns created
    # 修改副本數為 3
    [root@centos7-nginx scripts]# kubectl patch deployment nginx-dns -p '{"spec":{"replicas":3}}'
    deployment.apps/nginx-dns patched
    # 查看位置分佈
    [root@centos7-nginx scripts]# kubectl get pods -o wide
    NAME                         READY   STATUS              RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
    busybox                      1/1     Running             0          14m    10.244.3.113   centos7-d   <none>           <none>
    nginx-dns-5c6b6b99df-6k4qv   1/1     Running             0          2m8s   10.244.3.116   centos7-d   <none>           <none>
    nginx-dns-5c6b6b99df-88lcr   0/1     ContainerCreating   0          6s     <none>         centos7-d   <none>           <none>
    nginx-dns-5c6b6b99df-c2nnc   0/1     ContainerCreating   0          6s     <none>         centos7-e   <none>           <none>
    

    如果需要把master當node:

    kubectl taint nodes centos7-a node-role.kubernetes.io/master-
    

    4 FAQ

    4.1 解決無法查詢pods日誌問題

    [root@centos7-b cfg]# kubectl exec -it nginx -- bash
    error: unable to upgrade connection: Forbidden (user=kubernetes, verb=create, resource=nodes, subresource=proxy)
    [root@centos7-b cfg]# kubectl logs -f nginx
    Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx)
    
    $ vim ~/yaml/apiserver-to-kubelet-rbac.yml
    
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: kubelet-api-admin
    subjects:
    - kind: User
      name: kubernetes
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: system:kubelet-api-admin
      apiGroup: rbac.authorization.k8s.io
    
    # 應用
    $ kubectl apply -f ~/yaml/apiserver-to-kubelet-rbac.yml
    
    [root@centos7-a ~]# kubectl logs -f nginx
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
    10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
    /docker-entrypoint.sh: Configuration complete; ready for start up
    10.244.2.1 - - [17/Jun/2020:02:45:59 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
    10.244.2.1 - - [17/Jun/2020:02:46:09 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
    10.244.2.1 - - [17/Jun/2020:02:46:12 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
    10.244.2.1 - - [17/Jun/2020:02:46:13 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
    

    5 參考

    Prometheus Operator 監控 etcd 集群: https://www.qikqiak.com/post/prometheus-operator-monitor-etcd/

    Kubernetes v1.18.2 二進制高可用部署: https://www.yp14.cn/2020/05/19/Kubernetes-v1-18-2-二進制高可用部署/


  • Golang 網絡編程

    Golang 網絡編程

    目錄

    • TCP網絡編程
    • UDP網絡編程
    • Http網絡編程
    • 理解函數是一等公民
    • HttpServer源碼閱讀
      • 註冊路由
      • 啟動服務
      • 處理請求
    • HttpClient源碼閱讀
      • DemoCode
      • 整理思路
      • 重要的struct
      • 流程
      • transport.dialConn
      • 發送請求

    TCP網絡編程

    存在的問題:

    • 拆包:
      • 對發送端來說應用程序寫入的數據遠大於socket緩衝區大小,不能一次性將這些數據發送到server端就會出現拆包的情況。
      • 以太網環境下單個數據包(MTU)最大是 1500 字節,當 TCP 要發送的數據長度超過 MSS(最大報文段長度)時就會發生拆包;MSS = MTU(1500)- IP 頭(20)- TCP 頭(20)= 1460 字節,實際一般在 1460~1480 字節之間。
    • 粘包:
      • 對發送端來說:應用程序發送的數據很小,遠小於socket的緩衝區的大小,導致一個數據包裏面有很多不通請求的數據。
      • 對接收端來說:接收數據的方法不能及時的讀取socket緩衝區中的數據,導致緩衝區中積壓了不同請求的數據。

    解決方法:

    • 使用帶消息頭的協議,在消息頭中記錄數據的長度。
    • 使用定長的協議,每次讀取定長的內容,不夠的使用空格補齊。
    • 使用消息邊界,比如使用 \n 分隔 不同的消息。
    • 使用諸如 xml json protobuf這種複雜的協議。

    實驗:使用自定義協議

    整體的流程:

    客戶端:發送端連接服務器,將要發送的數據通過編碼器編碼,發送。

    服務端:啟動、監聽端口、接收連接、將連接放在協程中處理、通過解碼器解碼數據。

    	//###########################
    //######  Server端代碼  ###### 
    //###########################
    
    func main() {
    	// 1. 監聽端口 2.accept連接 3.開goroutine處理連接
    	listen, err := net.Listen("tcp", "0.0.0.0:9090")
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	for{
    		conn, err := listen.Accept()
    		if err != nil {
    			fmt.Printf("Fail listen.Accept : %v", err)
    			continue
    		}
    		go ProcessConn(conn)
    	}
    }
    
    // 處理網絡請求
    func ProcessConn(conn net.Conn) {
    	defer conn.Close()
    	for  {
    		bt,err:=coder.Decode(conn)
    		if err != nil {
    			fmt.Printf("Fail to decode error [%v]", err)
    			return
    		}
    		s := string(bt)
    		fmt.Printf("Read from conn:[%v]\n",s)
    	}
    }
    
    //###########################
    //######  Client端代碼  ###### 
    //###########################
    func main() {
    	conn, err := net.Dial("tcp", ":9090")
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	// 連接成功后再 defer 關閉,避免 Dial 失敗時 conn 為 nil 導致 panic
    	defer conn.Close()
    
    	// 將數據編碼併發送出去
    	coder.Encode(conn,"hi server i am here");
    }
    
    //###########################
    //######  編解碼器代碼  ###### 
    //###########################
    /**
     * 	解碼:
     */
    func Decode(reader io.Reader) (bytes []byte, err error) {
    	// 先把消息頭讀出來
    	headerBuf := make([]byte, len(msgHeader))
    	if _, err = io.ReadFull(reader, headerBuf); err != nil {
    		fmt.Printf("Fail to read header from conn error:[%v]", err)
    		return nil, err
    	}
    	// 檢驗消息頭
    	if string(headerBuf) != msgHeader {
    		err = errors.New("msgHeader error")
    		return nil, err
    	}
    	// 讀取實際內容的長度
    	lengthBuf := make([]byte, 4)
    	if _, err = io.ReadFull(reader, lengthBuf); err != nil {
    		return nil, err
    	}
    	contentLength := binary.BigEndian.Uint32(lengthBuf)
    	contentBuf := make([]byte, contentLength)
    	// 讀出消息體
    	if _, err := io.ReadFull(reader, contentBuf); err != nil {
    		return nil, err
    	}
    	return contentBuf, err
    }
    
    /**
     *  編碼
     *  定義消息的格式: msgHeader + contentLength + content
     *  conn 本身實現了 io.Writer 接口
     */
    func Encode(conn io.Writer, content string) (err error) {
    	// 寫入消息頭
    	if err = binary.Write(conn, binary.BigEndian, []byte(msgHeader)); err != nil {
    		fmt.Printf("Fail to write msgHeader to conn,err:[%v]", err)
    	}
    	// 寫入消息體長度
    	contentLength := int32(len([]byte(content)))
    	if err = binary.Write(conn, binary.BigEndian, contentLength); err != nil {
    		fmt.Printf("Fail to write contentLength to conn,err:[%v]", err)
    	}
    	// 寫入消息
    	if err = binary.Write(conn, binary.BigEndian, []byte(content)); err != nil {
    		fmt.Printf("Fail to write content to conn,err:[%v]", err)
    	}
    	return err
    }
    

    客戶端的conn一直不被Close 有什麼表現?

    四次揮手各個狀態如下:

    主動關閉方						被動關閉方
    ESTABLISHED					ESTABLISHED
    FIN-WAIT-1
    							CLOSE-WAIT
    FIN-WAIT-2
    TIME-WAIT						LAST-ACK
    CLOSED							CLOSED
    

    如果客戶端的連接一直不被手動關閉,它和服務端的狀態就會一直保持在 ESTABLISHED(連接已建立)狀態。

    MacBook-Pro% netstat -aln | grep 9090
    tcp4       0      0  127.0.0.1.9090         127.0.0.1.62348        ESTABLISHED
    tcp4       0      0  127.0.0.1.62348        127.0.0.1.9090         ESTABLISHED
    tcp46      0      0  *.9090                 *.*                    LISTEN
    

    服務端的conn一直不被關閉 有什麼表現?

    客戶端的進程結束后,會發送fin數據包給服務端,向服務端請求斷開連接。

    服務端的conn不關閉的話,服務端就會停留在四次揮手的close_wait階段(我們不手動Close,服務端就認為還有數據/任務沒處理完,因此它不關閉)。

    客戶端停留在 fin_wait2的階段(在這個階段等着服務端告訴自己可以真正斷開連接的消息)。

    MacBook-Pro% netstat -aln | grep 9090
    tcp4       0      0  127.0.0.1.9090         127.0.0.1.62888        CLOSE_WAIT
    tcp4       0      0  127.0.0.1.62888        127.0.0.1.9090         FIN_WAIT_2
    tcp46      0      0  *.9090                 *.*                    LISTEN
    

    什麼是binary.BigEndian?什麼是binary.LittleEndian?

    對計算機來說一切都是二進制的數據,BigEndian和LittleEndian描述的就是二進制數據的字節順序。小端序被廣泛應用於現代 CPU 內部的數據存儲;大端序常用於網絡傳輸和文件存儲。

    比如:

    一個數的十六進制表示為 0x12345678
    BigEndian   表示為: 0x12 0x34 0x56 0x78 
    LittleEndian表示為: 0x78 0x56 0x34 0x12
    
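
    下面用encoding/binary寫一個可運行的小示例來驗證上面兩種字節序(補充的示例代碼):

    package main
    
    import (
    	"encoding/binary"
    	"fmt"
    )
    
    func main() {
    	buf := make([]byte, 4)
    
    	// 大端序:高位字節在前,常用於網絡傳輸和文件存儲
    	binary.BigEndian.PutUint32(buf, 0x12345678)
    	fmt.Printf("BigEndian   : % x\n", buf) // 12 34 56 78
    
    	// 小端序:低位字節在前,常用於 CPU 內部存儲
    	binary.LittleEndian.PutUint32(buf, 0x12345678)
    	fmt.Printf("LittleEndian: % x\n", buf) // 78 56 34 12
    }
    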

    UDP網絡編程

    思路:

    UDP服務器:1、監聽 2、循環讀取消息 3、回複數據。

    UDP客戶端:1、連接服務器 2、發送消息 3、接收消息。

    // ################################
    // ######## UDPServer #########
    // ################################
    func main() {
    	// 1. 監聽UDP端口 2.循環讀取消息 3.回復數據
    	listen, err := net.ListenUDP("udp", &net.UDPAddr{
    		IP:   net.IPv4(0, 0, 0, 0),
    		Port: 9091,
    	})
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	defer listen.Close()
    
    	buf := make([]byte, 1024)
    	for {
    		// 讀取客戶端發來的數據,同時拿到客戶端地址
    		num, addr, err := listen.ReadFromUDP(buf)
    		if err != nil {
    			fmt.Printf("Fail to read from udp error: [%v]", err)
    			continue
    		}
    		fmt.Printf("Read from udp addr:[%v] content:[%v]\n", addr, string(buf[:num]))
    		// 回復數據給客戶端
    		if _, err := listen.WriteToUDP([]byte("i am udp server"), addr); err != nil {
    			fmt.Printf("Fail to write to udp error: [%v]", err)
    		}
    	}
    }
    
    // ################################
    // ######## UDPClient #########
    // ################################
    func main() {
    
    	udpConn, err := net.DialUDP("udp", nil, &net.UDPAddr{
    		IP:   net.IPv4(127, 0, 0, 1),
    		Port: 9091,
    	})
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    
    	_, err = udpConn.Write([]byte("i am udp client"))
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	bytes := make([]byte, 1024)
    	num, addr, err := udpConn.ReadFromUDP(bytes)
    	if err != nil {
    		fmt.Printf("Fail to read from udp error: [%v]", err)
    		return
    	}
    	fmt.Printf("Receive from udp address:[%v], bytes:[%v], content:[%v]", addr, num, string(bytes[:num]))
    }
    

    Http網絡編程

    思路整理:

    HttpServer:1、創建路由器。2、為路由器綁定路由規則。3、創建服務器、監聽端口。4、啟動服務。

    HttpClient: 1、創建連接池。2、創建客戶端,綁定連接池。3、發送請求。4、讀取響應。

    func main() {
    	mux := http.NewServeMux()
    	mux.HandleFunc("/login", doLogin)
    	server := &http.Server{
    		Addr:         ":8081",
    		WriteTimeout: time.Second * 2,
    		Handler:      mux,
    	}
    	log.Fatal(server.ListenAndServe())
    }
    
    func doLogin(writer http.ResponseWriter,req *http.Request){
    	_, err := writer.Write([]byte("do login"))
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    }
    

    HttpClient端

    func main() {
    	transport := &http.Transport{
        // 撥號的上下文
    		DialContext: (&net.Dialer{
    			Timeout:   30 * time.Second, // 撥號建立連接時的超時時間
    			KeepAlive: 30 * time.Second, // 長連接存活的時間
    		}).DialContext,
        // 最大空閑連接數
    		MaxIdleConns:          100,  
        // 超過最大的空閑連接數的連接,經過 IdleConnTimeout時間後會失效
    		IdleConnTimeout:       10 * time.Second, 
        // https使用了SSL安全證書,TLS是SSL的升級版
        // 當我們使用https時,這行配置生效
    		TLSHandshakeTimeout:   10 * time.Second, 
    		ExpectContinueTimeout: 1 * time.Second,  // 100-continue 狀態碼超時時間
    	}
    
    	// 創建客戶端
    	client := &http.Client{
    		Timeout:   time.Second * 10, //請求超時時間
    		Transport: transport,
    	}
    
    	// 請求數據
    	res, err := client.Get("http://localhost:8081/login")
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	defer res.Body.Close()
    
    	bytes, err := ioutil.ReadAll(res.Body)
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	fmt.Printf("Read from http server res:[%v]", string(bytes))
    }
    

    理解函數是一等公民

    點擊查看在github中函數相關的筆記

    在golang中函數是一等公民,我們可以把一個函數當作普通變量一樣使用。

    比如我們有個函數HelloHandle,我們可以直接使用它。

    func HelloHandle(name string, age int) {
    	fmt.Printf("name:[%v] age:[%v]", name, age)
    }
    
    func main() {
      HelloHandle("tom",12)
    }
    

    閉包

    如何理解閉包:閉包本質上是一個函數,而且這個函數會引用它外部的變量,如下例子中的f3中的匿名函數本身就是一個閉包。 通常我們使用閉包起到一個適配的作用。

    例1:

    // f2是一個普通函數,沒有入參
    func f2() {
    	fmt.Printf("f2222")
    }
    
    // f1函數的入參是一個f2類型的函數
    func f1(f2 func()) {
    	f2()
    }
    
    func main() {
      // 由於golang中函數是一等公民,所以我們可以把f2同普通變量一般傳遞給f1
    	f1(f2)
    }
    

    例2: 在上例中更進一步。f2有了自己的參數, 這時就不能直接把f2傳遞給f1了。

    總不能傻傻地這樣寫吧:f1(f2(1,2))???

    而閉包就能解決這個問題。

    // f2是一個普通函數,有兩個入參數
    func f2(x int, y int) {
    	fmt.Println("this is f2 start")
    	fmt.Printf("x: %d y: %d \n", x, y)
    	fmt.Println("this is f2 end")
    }
    
    // f1函數的入參是一個f2類型的函數
    func f1(f2 func()) {
    	fmt.Println("this is f1 will call f2")
    	f2()
    	fmt.Println("this is f1 finished call f2")
    }
    
    // 接受一個兩個參數的函數, 返回一個包裝函數
    func f3(f func(int,int) ,x,y int) func() {
    	fun := func() {
    		f(x,y)
    	}
    	return fun
    }
    
    func main() {
    	// 目標是實現如下的傳遞與調用
    	f1(f3(f2,6,6))
    }
    

    實現方法的回調:

    下面的例子中實現這樣的功能:就好像是我設計了一個框架,定好了整個框架運轉的流程(或者說是提供了一個編程模版),框架具體做事的函數你根據自己的需求自己實現,我的框架只是負責幫你回調你具體的方法。

    // 自定義類型,handler本質上是一個函數
    type HandlerFunc func(string, int)
    
    // 閉包
    func (f HandlerFunc) Serve(name string, age int) {
    	f(name, age)
    }
    
    // 具體的處理函數
    func HelloHandle(name string, age int) {
    	fmt.Printf("name:[%v] age:[%v]", name, age)
    }
    
    func main() {
      // 把HelloHandle轉換進自定義的func中
    	handlerFunc := HandlerFunc(HelloHandle)
      // 本質上會去回調HelloHandle方法
    	handlerFunc.Serve("tom", 12)
      
      // 上面兩行效果 == 下面這行
      // 只不過上面的代碼是我在幫你回調,下面的是你自己主動調用
      HelloHandle("tom",12)
    }
    

    HttpServer源碼閱讀

    註冊路由

    直觀上看註冊路由這一步,就是它要做的就是將在路由器url pattern和開發者提供的func關聯起來。 很容易想到,它裏面很可能是通過map實現的。

    
    func main() {
    	// 創建路由器
    	// 為路由器綁定路由規則
    	mux := http.NewServeMux()
    	mux.HandleFunc("/login", doLogin)
    	...
    }
    
    func doLogin(writer http.ResponseWriter,req *http.Request){
    	_, err := writer.Write([]byte("do login"))
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    }
    

    姑且將ServeMux當作是路由器。我們使用http包下的 NewServeMux 函數創建一個新的路由器對象,進而使用它的HandleFunc(pattern, func)函數完成路由的註冊。

    跟進NewServeMux函數,可以看到,它通過new函數返回給我們一個ServeMux結構體。

    func NewServeMux() *ServeMux {
      return new(ServeMux) 
    }
    

    這個ServeMux結構體長下面這樣:在這個ServeMux結構體中我們就看到了這個維護pattern和func的map

    type ServeMux struct {
    	mu    sync.RWMutex 
    	m     map[string]muxEntry
    	hosts bool // whether any patterns contain hostnames
    }
    

    這個muxEntry長下面這樣:

    type muxEntry struct {
    	h       Handler
    	pattern string
    }
    
    type Handler interface {
    	ServeHTTP(ResponseWriter, *Request)
    }
    

    看到這裏問題就來了:上面我們手動註冊進路由器中的僅僅是一個有規定參數的方法,到這裏怎麼成了一個Handler了?我們也沒有手動去實現Handler這個接口,也沒有重寫ServeHTTP函數啊,在golang中實現一個接口不是得像下面這樣搞嗎?

    type Handle interface {
    	Serve(string, int, string)
    }
    
    type HandleImpl struct {
    
    }
    
    func (h HandleImpl)Serve(string, int, string){
    
    }
    

    帶着這個疑問看下面的方法:

    	// 由於函數是一等公民,故我們將doLogin函數同普通變量一樣當做入參傳遞進去。
     	mux.HandleFunc("/login", doLogin)
    
      func doLogin(writer http.ResponseWriter,req *http.Request){
        ...
    	}
    

    跟進去看 HandleFunc 函數的實現:

    首先:HandleFunc函數的第二個參數是接收的函數的類型和doLogin函數的類型是一致的,所以doLogin能正常的傳遞進HandleFunc中。

    其次:我們的關注點應該是下面的HandlerFunc(handler)

    // HandleFunc registers the handler function for the given pattern.
    func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
    	if handler == nil {
    		panic("http: nil handler")
    	}
    	mux.Handle(pattern, HandlerFunc(handler))
    }
    

    跟進這個HandlerFunc(handler),看下面的定義,真相就大白於天下了。golang以一種優雅的方式悄無聲息地為我們完成了一次適配。這麼看來上面的HandlerFunc(handler)並不是函數調用,而是把doLogin轉換成自定義類型HandlerFunc。這個自定義類型實現了Handler接口(因為它定義了ServeHTTP方法),以類似閉包的形式完美地將我們的doLogin適配成了Handler類型。
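
    HandlerFunc在net/http包中的定義如下(節選自標準庫源碼):

    // HandlerFunc 是一個函數類型,其簽名與 doLogin 完全一致
    type HandlerFunc func(ResponseWriter, *Request)
    
    // ServeHTTP 內部直接回調 f 本身,因此 HandlerFunc 天然實現了 Handler 接口
    func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
    	f(w, r)
    }
    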

    在往下看Handle方法:

    第一:將pattern和handler註冊進map中

    第二:為了保證整個過程的併發安全,使用鎖保護整個過程。

    // Handle registers the handler for the given pattern.
    // If a handler already exists for pattern, Handle panics.
    func (mux *ServeMux) Handle(pattern string, handler Handler) {
    	mux.mu.Lock()
    	defer mux.mu.Unlock()
    
    	if pattern == "" {
    		panic("http: invalid pattern")
    	}
    	if handler == nil {
    		panic("http: nil handler")
    	}
    	if _, exist := mux.m[pattern]; exist {
    		panic("http: multiple registrations for " + pattern)
    	}
    
    	if mux.m == nil {
    		mux.m = make(map[string]muxEntry)
    	}
    	mux.m[pattern] = muxEntry{h: handler, pattern: pattern}
    
    	if pattern[0] != '/' {
    		mux.hosts = true
    	}
    }
    
    

    啟動服務

    概覽圖:

    和java對比着看,在java中一組複雜的邏輯會被封裝成一個class;在golang中對應的則是一組複雜的邏輯被封裝成一個結構體。

    對應HttpServer肯定也是這樣,http服務器在golang的實現中有自己的結構體。它就是http包下的Server。

    它有一系列描述性屬性。如監聽的地址、寫超時時間、路由器。
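
    http.Server結構體的部分字段如下(節選,其餘字段略):

    type Server struct {
    	Addr         string        // 監聽地址,如 ":8081"
    	Handler      Handler       // 路由器,為 nil 時使用 DefaultServeMux
    	ReadTimeout  time.Duration // 讀超時時間
    	WriteTimeout time.Duration // 寫超時時間
    	// ... 其餘字段省略
    }
    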

    	server := &http.Server{
    		Addr:         ":8081",
    		WriteTimeout: time.Second * 2,
    		Handler:      mux,
    	}
    	log.Fatal(server.ListenAndServe())
    

    我們看它啟動服務的函數:server.ListenAndServe()

    實現的邏輯是使用net包下的Listen函數,獲取給定地址上的tcp連接。

    再將這個tcp連接封裝進 tcpKeepAliveListener 結構體中。

    再將這個tcpKeepAliveListener丟進Server的Serve函數中處理。

    // ListenAndServe 會監聽開發者給定網絡地址上的tcp連接,當有請求到來時,會調用Serve函數去處理這個連接。
    // 它接收到所有連接都使用 TCP keep-alives相關的配置
    // 
    // 如果構造Server時沒有指定Addr,他就會使用默認值: “:http”
    // 
    // 當Server ShutDown或者是Close,ListenAndServe總是會返回一個非nil的error。
    // 返回的這個Error是 ErrServerClosed
    func (srv *Server) ListenAndServe() error {
    	if srv.shuttingDown() {
    		return ErrServerClosed
    	}
    	addr := srv.Addr
    	if addr == "" {
    		addr = ":http"
    	}
      // 底層藉助於tcp實現
    	ln, err := net.Listen("tcp", addr)
    	if err != nil {
    		return err
    	}
    	return srv.Serve(tcpKeepAliveListener{ln.(*net.TCPListener)})
    }
    
    // tcpKeepAliveListener會為TCP設置一個keep-alive 超時時長。
    // 它通常被 ListenAndServe 和 ListenAndServeTLS使用。
    // 它保證了已經dead的TCP最終都會消失。
    type tcpKeepAliveListener struct {
    	*net.TCPListener
    }
    

    接着去看看Serve方法。上一個函數中獲取到了一個基於tcp的Listener,從這個Listener中可以不斷地獲取出新的連接,下面的方法中使用無限for循環完成這件事。conn獲取到后將連接封裝進httpConn,為了保證不阻塞下一個連接的到來,開啟新的goroutine處理這個http連接。

    func (srv *Server) Serve(l net.Listener) error {
      // 如果有一個包裹了 srv 和 listener 的鈎子函數,就執行它
    	if fn := testHookServerServe; fn != nil {
    		fn(srv, l) // call hook with unwrapped listener
    	}
    	
      // 將tcp的Listener封裝進onceCloseListener,保證連接不會被關閉多次。
    	l = &onceCloseListener{Listener: l}
    	defer l.Close()
     
      // http2相關的配置
    	if err := srv.setupHTTP2_Serve(); err != nil {
    		return err
    	}
    
    	if !srv.trackListener(&l, true) {
    		return ErrServerClosed
    	}
    	defer srv.trackListener(&l, false)
    	
      // 如果沒有接收到請求睡眠多久
    	var tempDelay time.Duration     // how long to sleep on accept failure
    	baseCtx := context.Background() // base is always background, per Issue 16220
    	ctx := context.WithValue(baseCtx, ServerContextKey, srv)
      // 開啟無限循環,嘗試從Listenner中獲取連接。
    	for {
    		rw, e := l.Accept()
        // accept過程中發生錯誤
    		if e != nil {
    			select {
            // 如果從server的doneChan中可以獲取內容,返回Server關閉了
    			case <-srv.getDoneChan():
    				return ErrServerClosed
    			default:
    			}
          // 如果發生了 net.Error 並且是臨時的錯誤就睡5毫秒,再發生錯誤睡眠的時間*2,上限是1s
    			if ne, ok := e.(net.Error); ok && ne.Temporary() {
    				if tempDelay == 0 {
    					tempDelay = 5 * time.Millisecond
    				} else {
    					tempDelay *= 2
    				}
    				if max := 1 * time.Second; tempDelay > max {
    					tempDelay = max
    				}
    				srv.logf("http: Accept error: %v; retrying in %v", e, tempDelay)
    				time.Sleep(tempDelay)
    				continue
    			}
    			return e
    		}
        // 如果沒有發生錯誤,清空睡眠的時間
    		tempDelay = 0
        // 將接收到連接封裝進httpConn
    		c := srv.newConn(rw)
    		c.setState(c.rwc, StateNew) // before Serve can return
        // 開啟一條新的協程處理這個連接
    		go c.serve(ctx)
    	}
    }
    

    處理請求

    c.serve(ctx)中就會去解析http相關的報文信息,將http報文解析進Request結構體中。

    部分代碼如下:

    		// 將 server 包裹為 serverHandler 的實例,執行它的 ServeHTTP 方法,處理請求,返迴響應。
    		// serverHandler 委託給 server 的 Handler 或者 DefaultServeMux(默認路由器)
    		// 來處理 "OPTIONS *" 請求。
    		serverHandler{c.server}.ServeHTTP(w, w.req)
    
    // serverHandler delegates to either the server's Handler or
    // DefaultServeMux and also handles "OPTIONS *" requests.
    type serverHandler struct {
    	srv *Server
    }
    
    func (sh serverHandler) ServeHTTP(rw ResponseWriter, req *Request) {
      // 如果沒有定義Handler就使用默認的
    	handler := sh.srv.Handler
    	if handler == nil {
    		handler = DefaultServeMux
    	}
    	if req.RequestURI == "*" && req.Method == "OPTIONS" {
    		handler = globalOptionsHandler{}
    	}
      // 處理請求,返迴響應。
    	handler.ServeHTTP(rw, req)
    }
    

    可以看到,req中包含了我們前面說的pattern,叫做RequestURI,有了它下一步就知道該回調ServeMux中的哪一個函數。
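
    ServeMux的ServeHTTP方法可以簡化成下面的示意代碼(省略了"OPTIONS *"等分支),它根據請求路徑在map中找到註冊的Handler並回調:

    func (mux *ServeMux) ServeHTTP(w ResponseWriter, r *Request) {
    	// 根據 r.Host 和 r.URL.Path 在 mux.m 中做匹配,找到註冊時的 Handler
    	h, _ := mux.Handler(r)
    	// 回調開發者註冊的處理函數(例如上文的 doLogin)
    	h.ServeHTTP(w, r)
    }
    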

    HttpClient源碼閱讀

    DemoCode

    func main() {
    	// 創建連接池
    	// 創建客戶端,綁定連接池
    	// 發送請求
    	// 讀取響應
    	transport := &http.Transport{
    		DialContext: (&net.Dialer{
    			Timeout:   30 * time.Second, // 連接超時
    			KeepAlive: 30 * time.Second, // 長連接存活的時間
    		}).DialContext,
        // 最大空閑連接數
    		MaxIdleConns:          100,             
        // 超過最大空閑連接數的連接會在IdleConnTimeout后被銷毀
    		IdleConnTimeout:       10 * time.Second, 
    		TLSHandshakeTimeout:   10 * time.Second, // tls握手超時時間
    		ExpectContinueTimeout: 1 * time.Second,  // 100-continue 狀態碼超時時間
    	}
    
    	// 創建客戶端
    	client := &http.Client{
    		Timeout:   time.Second * 10, //請求超時時間
    		Transport: transport,
    	}
    
    	// 請求數據,獲得響應
    	res, err := client.Get("http://localhost:8081/login")
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	defer res.Body.Close()
      // 處理數據
    	bytes, err := ioutil.ReadAll(res.Body)
    	if err != nil {
    		fmt.Printf("error : %v", err)
    		return
    	}
    	fmt.Printf("Read from http server res:[%v]", string(bytes))
    }
    

    整理思路

    http.Client的代碼其實是很多的,全部很細地過一遍難度不小,下面也只能提及其中的一部分。

    首先明白一件事,我們編寫的HttpClient是在干什麼?(雖然這個問題很傻,但是總得問一下)是在發送Http請求。

    一般我們在開發的時候,更多編寫的是HttpServer的代碼,是在處理Http請求,而不是去發送Http請求,Http請求都是前端通過ajax經由瀏覽器發送到後端的。

    其次,Http請求實際上是建立在tcp連接之上的,所以如果我們去看http.Client,肯定能找到net.Dial("tcp", addr)相關的代碼。

    那也就是說,我們要看看,http.Client是如何在和服務端建立連接、發送數據、接收數據的。

    重要的struct

    http.Client中有幾個比較重要的struct,如下

    http.Client結構體中封裝了和http請求相關的屬性,諸如 cookie,timeout,redirect以及Transport。

    type Client struct {
    	Transport RoundTripper
    	CheckRedirect func(req *Request, via []*Request) error
    	Jar CookieJar
    	Timeout time.Duration
    }
    

    Transport實現了RoundTripper接口:

     type RoundTripper interface {   
      // 1、RoundTrip會去執行一次 HTTP Transaction,並為request返回一個響應
      // 2、RoundTrip不會嘗試去解析response
      // 3、注意:只要返回了Reponse,無論response的狀態碼是多少,RoundTrip返回的結果:err == nil 
      // 4、RoundTrip將請求發送出去后,如果他沒有獲取到response,他會返回一個非空的err。
      // 5、同樣,RoundTrip不會嘗試去解析諸如重定向、認證、cookie這種更高級的協議。 
      // 6、除了消費和關閉請求體之外,RoundTrip不會修改request的其他字段
      // 7、RoundTrip可以在一個單獨的gorountine中讀取request的部分字段。一直到ResponseBody關閉之前,調用者都不能取消,或者重用這個request
      // 8、RoundTrip始終會保證關閉Body(包含在發生err時)。根據實現的不同,在RoundTrip關閉前,關閉Body這件事可能會在一個單獨的goroutine中去做。這就意味着,如果調用者想將請求體用於後續的請求,必須等待知道發生Close
      // 9、請求的URL和Header字段必須是被初始化的。 
    	RoundTrip(*Request) (*Response, error)
    }
    

    看上面RoundTripper接口,它裏面只有一個方法RoundTrip,方法的作用就是執行一次Http請求,發送Request然後獲取Response。

    RoundTripper被設計為可以被多個goroutine併發安全地使用。

    Transport結構體如下:

    type Transport struct {
    	idleMu     sync.Mutex
       // user has requested to close all idle conns
    	wantIdle   bool
      // Transport的作用就是用來建立一個連接,這個idleConn就是Transport維護的空閑連接池。
    	idleConn   map[connectMethodKey][]*persistConn // most recently used at end
    	idleConnCh map[connectMethodKey]chan *persistConn
    }
    

    其中的connectMethodKey也是結構體:

    type connectMethodKey struct {
      // proxy 代理的URL,當他不為空時,就會一直使用這個key 
      // scheme 協議的類型, http https
      // addr 目標服務的地址,也就是下游的地址
    	proxy, scheme, addr string
    }
    

    persistConn是一個具體的連接實例,包含連接的上下文。

    type persistConn struct {
      // alt可選地指定TLS NextProto RoundTripper。 
      // 這用於HTTP/2以及以後的新協議。如果alt非nil,則其餘字段不再使用。
    	alt RoundTripper
    	t         *Transport
    	cacheKey  connectMethodKey
    	conn      net.Conn
    	tlsState  *tls.ConnectionState
      // 用於從conn中讀取內容
    	br        *bufio.Reader       // from conn
      // 用於往conn中寫內容
    	bw        *bufio.Writer       // to conn
    	nwrite    int64               // bytes written
      // 它是個chan,roundTrip會把請求寫入reqch,由readLoop消費
    	reqch     chan requestAndChan 
      // 它是個chan,roundTrip會把請求寫入writech,由writeLoop消費
    	writech   chan writeRequest  
    	closech   chan struct{}       // closed when conn closed
    }

    另外補充一個結構體:Request,它用來描述一次http請求,定義在http包的request.go中,裏面封裝了和Http請求相關的屬性

    type Request struct {
       Method string
       URL *url.URL
       Proto      string // "HTTP/1.0"
       ProtoMajor int    // 1
       ProtoMinor int    // 0
       Header Header
       Body io.ReadCloser
       GetBody func() (io.ReadCloser, error)
       ContentLength int64
       TransferEncoding []string
       Close bool
       Host string
       Form url.Values
       PostForm url.Values
       MultipartForm *multipart.Form
       Trailer Header
       RemoteAddr string
       RequestURI string
       TLS *tls.ConnectionState
       Cancel <-chan struct{}
       Response *Response
       ctx context.Context
    }
    

    這幾個結構體共同完成http.Client的工作流程,下面按流程逐步來看。

    流程

    我們想發送一次Http請求。首先我們需要構造一個Request,Request本質上是對Http協議的描述(因為大家使用的都是Http協議,所以將這個Request發送到HttpServer后,HttpServer能識別並解析它)。

    // 從這行代碼開始往下看
    	res, err := client.Get("http://localhost:8081/login")
    
    // 跟進Get
    	req, err := NewRequest("GET", url, nil)
    	if err != nil {
    		return nil, err
    	}
    	return c.Do(req)
    
    // 跟進Do
    	func (c *Client) Do(req *Request) (*Response, error) {
    	return c.do(req)
     } 
    
    // 跟進do,do函數中有下面的邏輯,可以看到執行完send后已經拿到返回值了。所以我們得繼續跟進send方法
      if resp, didTimeout, err = c.send(req, deadline); err != nil 
    
    // 跟進send方法,可以看到send中還有一send方法,入參分別是:request,tranpost,deadline
    // 到現在為止,我們沒有看到有任何和服務端建立連接的動作發生,但是構造的req和擁有連接池的transport已經見面了
    	resp, didTimeout, err = send(req, c.transport(), deadline)
    
    // 繼續跟進這個send方法,看到了調用了rt的RoundTrip方法。
    // 這個rt就是我們編寫HttpClient代碼時創建的,綁定在http.Client上的tranport實例。
    // 這個RoundTrip方法的作用我們在上面已經說過了,最直接的作用就是:發送request 並獲取response。
    	resp, err = rt.RoundTrip(req)
    
    

    但是RoundTrip是定義在RoundTripper接口中的抽象方法,我們看代碼肯定是要去看具體的實現。這裏可以使用斷點調試法:在上面最後一行打上斷點,就會進入到它的具體實現中,也就是Transport的roundTrip方法。

    RoundTrip中調用的函數是我們自定義的transport的roundTrip函數, 跟進去如下:

    緊接着我們需要一個conn,這個conn我們通過Transport可以獲取到。conn的類型為persistConn。

    // roundTrip函數中又一個無限for循環
    for {
        // 檢查請求的上下文是否關閉了
    		select {
    		case <-ctx.Done():
    			req.closeBody()
    			return nil, ctx.Err()
    		default:
    		}
    
        // 對傳遞進來的req進行了有一層的封裝,封裝后的這個treq可以被roundTrip修改,所以每次重試都會新建
    		treq := &transportRequest{Request: req, trace: trace}
    		cm, err := t.connectMethodForRequest(treq)
    		if err != nil {
    			req.closeBody()
    			return nil, err
    		}
    
        // 到這裏真的執行從tranport中獲取和對應主機的連接,這個連接可能是http、https、http代理、http代理的高速緩存, 但是無論如何我們都已經準備好了向這個連接發送treq
        // 這裏獲取出來的連接就是我們在上文中提及的persistConn
    		pconn, err := t.getConn(treq, cm)
    		if err != nil {
    			t.setReqCanceler(req, nil)
    			req.closeBody()
    			return nil, err
    		}
    
    		var resp *Response
    		if pconn.alt != nil {
    			// HTTP/2 path.
    			t.decHostConnCount(cm.key()) // don't count cached http2 conns toward conns per host
    			t.setReqCanceler(req, nil)   // not cancelable with CancelRequest
    			resp, err = pconn.alt.RoundTrip(req)
    		} else {
          
          // 調用persistConn的roundTrip方法,發送treq並獲取響應。
    			resp, err = pconn.roundTrip(treq)
    		}
    		if err == nil {
    			return resp, nil
    		}
    		if !pconn.shouldRetryRequest(req, err) {
    			// Issue 16465: return underlying net.Conn.Read error from peek,
    			// as we've historically done.
    			if e, ok := err.(transportReadFromServerError); ok {
    				err = e.err
    			}
    			return nil, err
    		}
    		testHookRoundTripRetried()
    
    		// Rewind the body if we're able to.  (HTTP/2 does this itself so we only
    		// need to do it for HTTP/1.1 connections.)
    		if req.GetBody != nil && pconn.alt == nil {
    			newReq := *req
    			var err error
    			newReq.Body, err = req.GetBody()
    			if err != nil {
    				return nil, err
    			}
    			req = &newReq
    		}
    	}
    

    整理思路:然後看上面代碼中獲取conn和roundTrip的實現細節。

    我們需要一個conn,這個conn可以通過Transport獲取到。conn的類型為persistConn。但是不管怎麼樣,都得先獲取出 persistConn,才能進一步完成發送請求再得到服務端到響應。

    然後關於這個persistConn結構體其實上面已經提及過了。重新貼在下面

    type persistConn struct {
      // alt可選地指定TLS NextProto RoundTripper。 
      // 這用於HTTP/2以及以後的新協議。如果alt非nil,則其餘字段不再使用。
    	alt RoundTripper
      
      conn      net.Conn
    	t         *Transport
    	br        *bufio.Reader  // 用於從conn中讀取內容
    	bw        *bufio.Writer  // 用於往conn中寫內容
      // 它是個chan,roundTrip會把請求寫入reqch,由readLoop消費
    	reqch     chan requestAndChan 
      // 它是個chan,roundTrip會把請求寫入writech,由writeLoop消費
    	writech   chan writeRequest  
    
    	nwrite    int64               // bytes written
    	cacheKey  connectMethodKey
    	tlsState  *tls.ConnectionState
    	closech   chan struct{}       // closed when conn closed
    }

    跟進 t.getConn(treq, cm)代碼如下:

    	// 先嘗試從空閑緩衝池中取得連接
      // 所謂的空閑緩衝池就是Tranport結構體中的: idleConn map[connectMethodKey][]*persistConn 
      // 入參位置的cm如下:
      /* type connectMethod struct {
          // 代理的url,如果沒有代理的話,這個值為nil
    			proxyURL     *url.URL 
    			
    			// 連接所使用的協議 http、https
    			targetScheme string
          
    	    // 如果proxyURL指定了http代理或者是https代理,並且使用的協議是http而不是https。
    	    // 那麼下面的targetAddr就會不包含在connect method key中。
    	    // 因為socket可以復用不同的targetAddr值
    			targetAddr string
    	}*/
    	t.getIdleConn(cm);
    
    	// 空閑緩衝池有的空閑連接的話返回conn,否則進行如下的select
    	select {
        // todo 這裏我還不確定是在干什麼,目前猜測是這樣的:每個服務器能打開的socket句柄是有限的
        // 每次來獲取鏈接的時候,我們就計數+1。當整體的句柄在Host允許範圍內時我們不做任何干涉~
    		case <-t.incHostConnCount(cmKey):
    			// count below conn per host limit; proceed
        
        // 重新嘗試從空閑連接池中獲取連接,因為可能有的連接使用完后被放回連接池了
    		case pc := <-t.getIdleConnCh(cm):
    			if trace != nil && trace.GotConn != nil {
    				trace.GotConn(httptrace.GotConnInfo{Conn: pc.conn, Reused: pc.isReused()})
    			}
    			return pc, nil
        // 請求是否被取消了
    		case <-req.Cancel:
    			return nil, errRequestCanceledConn
        // 請求的上下文是否Done掉了
    		case <-req.Context().Done():
    			return nil, req.Context().Err()
    		case err := <-cancelc:
    			if err == errRequestCanceled {
    				err = errRequestCanceledConn
    			}
    			return nil, err
    		}
    
    	// 開啟新的goroutine新建一個連接
    	go func() {
        /**
        *	新建連接,方法底層封裝了tcp client dial相關的邏輯
        *	conn, err := t.dial(ctx, "tcp", cm.addr())
        *	以及根據不同的targetScheme構建不同的request的邏輯。
        */
        // 獲取到persistConn
    		pc, err := t.dialConn(ctx, cm)
        // 將persistConn寫到chan中
    		dialc <- dialRes{pc, err}
    	}()
    
    	// 再嘗試從空閑連接池中獲取
      idleConnCh := t.getIdleConnCh(cm)
    	select {
      // 如果上面的go協程撥號成功了,這裏就能取出值來
    	case v := <-dialc:
    		// Our dial finished.
    		if v.pc != nil {
    			if trace != nil && trace.GotConn != nil && v.pc.alt == nil {
    				trace.GotConn(httptrace.GotConnInfo{Conn: v.pc.conn})
    			}
    			return v.pc, nil
    		}
    		// Our dial failed. See why to return a nicer error
    		// value.
        // 將Host的連接-1
    		t.decHostConnCount(cmKey)
    		select {
        ...
    
    

    transport.dialConn

    下面代碼中的cm就是前面提到的connectMethod結構體。

    // dialConn是Transport的方法
    // 入參:context上下文, connectMethod
    // 出參:persistConn
    func (t *Transport) dialConn(ctx context.Context, cm connectMethod) (*persistConn, error) {
    	// 構建將要返回的 persistConn
      pconn := &persistConn{
    		t:             t,
    		cacheKey:      cm.key(),
    		reqch:         make(chan requestAndChan, 1),
    		writech:       make(chan writeRequest, 1),
    		closech:       make(chan struct{}),
    		writeErrCh:    make(chan error, 1),
    		writeLoopDone: make(chan struct{}),
    	}
    	trace := httptrace.ContextClientTrace(ctx)
    	wrapErr := func(err error) error {
    		if cm.proxyURL != nil {
    			// Return a typed error, per Issue 16997
    			return &net.OpError{Op: "proxyconnect", Net: "tcp", Err: err}
    		}
    		return err
    	}
      
      // 判斷cm中使用的協議是否是https
    	if cm.scheme() == "https" && t.DialTLS != nil {
    		var err error
    		pconn.conn, err = t.DialTLS("tcp", cm.addr())
    		if err != nil {
    			return nil, wrapErr(err)
    		}
    		if pconn.conn == nil {
    			return nil, wrapErr(errors.New("net/http: Transport.DialTLS returned (nil, nil)"))
    		}
    		if tc, ok := pconn.conn.(*tls.Conn); ok {
    			// Handshake here, in case DialTLS didn't. TLSNextProto below
    			// depends on it for knowing the connection state.
    			if trace != nil && trace.TLSHandshakeStart != nil {
    				trace.TLSHandshakeStart()
    			}
    			if err := tc.Handshake(); err != nil {
    				go pconn.conn.Close()
    				if trace != nil && trace.TLSHandshakeDone != nil {
    					trace.TLSHandshakeDone(tls.ConnectionState{}, err)
    				}
    				return nil, err
    			}
    			cs := tc.ConnectionState()
    			if trace != nil && trace.TLSHandshakeDone != nil {
    				trace.TLSHandshakeDone(cs, nil)
    			}
    			pconn.tlsState = &cs
    		}
    	} else {
        // 如果不是https協議就來到這裏,使用tcp向httpserver撥號,獲取一個tcp連接。
    		conn, err := t.dial(ctx, "tcp", cm.addr())
    		if err != nil {
    			return nil, wrapErr(err)
    		}
        // 將獲取到tcp連接交給我們的persistConn維護
    		pconn.conn = conn
        
        // 處理https相關邏輯
    		if cm.scheme() == "https" {
    			var firstTLSHost string
    			if firstTLSHost, _, err = net.SplitHostPort(cm.addr()); err != nil {
    				return nil, wrapErr(err)
    			}
    			if err = pconn.addTLS(firstTLSHost, trace); err != nil {
    				return nil, wrapErr(err)
    			}
    		}
    	}
    
    	// Proxy setup.
    	switch {
      // 如果代理URL為空,不做任何處理  
    	case cm.proxyURL == nil:
    		// Do nothing. Not using a proxy.
      //   
    	case cm.proxyURL.Scheme == "socks5":
    		conn := pconn.conn
    		d := socksNewDialer("tcp", conn.RemoteAddr().String())
    		if u := cm.proxyURL.User; u != nil {
    			auth := &socksUsernamePassword{
    				Username: u.Username(),
    			}
    			auth.Password, _ = u.Password()
    			d.AuthMethods = []socksAuthMethod{
    				socksAuthMethodNotRequired,
    				socksAuthMethodUsernamePassword,
    			}
    			d.Authenticate = auth.Authenticate
    		}
    		if _, err := d.DialWithConn(ctx, conn, "tcp", cm.targetAddr); err != nil {
    			conn.Close()
    			return nil, err
    		}
    	case cm.targetScheme == "http":
    		pconn.isProxy = true
    		if pa := cm.proxyAuth(); pa != "" {
    			pconn.mutateHeaderFunc = func(h Header) {
    				h.Set("Proxy-Authorization", pa)
    			}
    		}
    	case cm.targetScheme == "https":
    		conn := pconn.conn
    		hdr := t.ProxyConnectHeader
    		if hdr == nil {
    			hdr = make(Header)
    		}
    		connectReq := &Request{
    			Method: "CONNECT",
    			URL:    &url.URL{Opaque: cm.targetAddr},
    			Host:   cm.targetAddr,
    			Header: hdr,
    		}
    		if pa := cm.proxyAuth(); pa != "" {
    			connectReq.Header.Set("Proxy-Authorization", pa)
    		}
    		connectReq.Write(conn)
    
    		// Read response.
    		// Okay to use and discard buffered reader here, because
    		// TLS server will not speak until spoken to.
    		br := bufio.NewReader(conn)
    		resp, err := ReadResponse(br, connectReq)
    		if err != nil {
    			conn.Close()
    			return nil, err
    		}
    		if resp.StatusCode != 200 {
    			f := strings.SplitN(resp.Status, " ", 2)
    			conn.Close()
    			if len(f) < 2 {
    				return nil, errors.New("unknown status code")
    			}
    			return nil, errors.New(f[1])
    		}
    	}
    
    	if cm.proxyURL != nil && cm.targetScheme == "https" {
    		if err := pconn.addTLS(cm.tlsHost(), trace); err != nil {
    			return nil, err
    		}
    	}
    
    	if s := pconn.tlsState; s != nil && s.NegotiatedProtocolIsMutual && s.NegotiatedProtocol != "" {
    		if next, ok := t.TLSNextProto[s.NegotiatedProtocol]; ok {
    			return &persistConn{alt: next(cm.targetAddr, pconn.conn.(*tls.Conn))}, nil
    		}
    	}
    
    	if t.MaxConnsPerHost > 0 {
    		pconn.conn = &connCloseListener{Conn: pconn.conn, t: t, cmKey: pconn.cacheKey}
    	}
      
      // 初始化persistConn的bufferReader和bufferWriter
    	pconn.br = bufio.NewReader(pconn) // 可以從上面給pconn維護的tcpConn中讀數據
    	pconn.bw = bufio.NewWriter(persistConnWriter{pconn})// 可以往上面pconn維護的tcpConn中寫數據 
      
      // 新開啟兩條和persistConn相關的go協程。
    	go pconn.readLoop()
    	go pconn.writeLoop()
    	return pconn, nil
    }
    

    上面的兩條goroutine和br、bw共同完成請求的發送與響應的讀取流程。

    發送請求

    發送req的邏輯在http包下的transport.go中的func (t *Transport) roundTrip(req *Request) (*Response, error)函數中。

    如下:

    	// 發送treq
    	resp, err = pconn.roundTrip(treq)
    
    	// 跟進roundTrip
      // 可以看到他將一個writeRequest結構體類型的實例寫入了writech中
    	// 而這個writech會被上圖中的writeLoop消費,藉助bufferWriter寫入tcp連接中,完成往服務端數據的發送。
    	pc.writech <- writeRequest{req, writeErrCh, continueCh}
    
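
    writeLoop消費writech的過程可以簡化成下面的示意代碼(僅示意流程,字段名與細節以實際源碼為準):

    func (pc *persistConn) writeLoop() {
    	for {
    		select {
    		case wr := <-pc.writech:
    			// 把 Request 按 HTTP 報文格式寫入 pc.bw,再 Flush 到底層 tcp 連接
    			err := wr.req.Request.Write(pc.bw)
    			if err == nil {
    				err = pc.bw.Flush()
    			}
    			// 通過 writeRequest 中攜帶的 channel 把寫入結果回傳給 roundTrip
    			wr.ch <- err
    		case <-pc.closech:
    			return
    		}
    	}
    }
    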

    本站聲明:網站內容來源於博客園,如有侵權,請聯繫我們,我們將及時處理


  • STM32內存受限情況下攝像頭驅動方式與圖像裁剪的選擇

    STM32內存受限情況下攝像頭驅動方式與圖像裁剪的選擇

    1、STM32圖像接收接口

    使用stm32芯片(128kB RAM、512kB ROM),資源有限,外接攝像頭採集圖像,這種情況下,內存的利用制約着程序設計。

    STM32使用DCMI接口讀取攝像頭,協議如下。行同步信號指示了一行數據完成,場同步信號指示了一幀圖像傳輸完成。所以出現了兩種典型的數據接收方式,按照行信號一行一行處理,按照場信號一次接收一副圖像。

     

    2、按行讀取

    以網絡上流行的野火的demo為例,使用行中斷,用DMA來讀取一行數據。

    //記錄傳輸了多少行
    static uint16_t line_num =0;
    //DMA傳輸完成中斷服務函數
    void DMA2_Stream1_IRQHandler(void)
    {
      if ( DMA_GetITStatus(DMA2_Stream1,DMA_IT_TCIF1) == SET )
      {
       /*行計數*/
      line_num++;
      if (line_num==img_height)
      {
      /*傳輸完一幀,計數複位*/
      line_num=0;
      }
      /*DMA 一行一行傳輸*/
      OV2640_DMA_Config(FSMC_LCD_ADDRESS+(lcd_width*2*(lcd_height-line_num-1)),img_width*2/4);
      DMA_ClearITPendingBit(DMA2_Stream1,DMA_IT_TCIF1);
      }
    }
    
     //幀中斷服務函數,使用幀中斷重置line_num,可防止有時掉數據的時候DMA傳送行數出現偏移
    void DCMI_IRQHandler(void)
    {
      if ( DCMI_GetITStatus (DCMI_IT_FRAME) == SET )
      {
      /*傳輸完一幀,計數複位*/
      line_num=0;
      DCMI_ClearITPendingBit(DCMI_IT_FRAME);
      }
    }

    DMA中斷服務函數中主要是使用了一個靜態變量line_num來記錄已傳輸了多少行數據,每進一次DMA中斷時自加1。由於進入一次中斷就代表傳輸完一行數據,所以當line_num的值等於img_height(攝像頭輸出的圖像行數)時,表示傳輸完一幀圖像,line_num複位為0,開始另一幀數據的傳輸。line_num計數完畢后利用前面定義的OV2640_DMA_Config函數配置新的一行DMA數據傳輸,它利用line_num變量計算顯存地址的行偏移,控制DCMI數據被傳送到正確的位置,每次傳輸的都是一行像素的數據量。

    當DCMI接口檢測到攝像頭傳輸的幀同步信號時,會進入DCMI_IRQHandler中斷服務函數,在這個函數中不管line_num原來的值是什麼,它都把line_num直接複位為0,這樣下次再進入DMA中斷服務函數的時候,它會開始新一幀數據的傳輸。這樣可以利用DCMI的硬件同步信號,而不只是依靠DMA自己的傳輸計數,這樣可以避免有時STM32內部DMA傳輸受到阻塞而跟不上外部攝像頭信號導致的數據錯誤。

    圖像按幀讀取比按行讀取效率更高,那麼為什麼要按行讀取呢?上面的例子是把圖像送到LCD,如果是送到內存,按幀讀取就需要芯片有很大的內存空間。以752*480的分辨率為例,需要360kB的RAM空間,遠遠超出了芯片RAM的大小。部分應用不需要攝像頭全尺寸的圖像,只需要中心區域,比如為了避免畸變影響一般只用圖像中間的部分,那麼按行讀取就有一個好處,讀到一行后,可以把不需要的丟棄,只保留中間部分的圖像像素。
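
    按行裁剪的思路可以用下面的示意代碼表示(緩衝區、函數名均為假設,僅用來說明"丟棄行首行尾、只保留中間像素"的做法):

    #include <stdint.h>
    #include <string.h>
    
    #define CAM_WIDTH   752
    #define CAM_HEIGHT  480
    #define ROI_WIDTH   360
    #define ROI_HEIGHT  360
    #define COL_OFFSET  ((CAM_WIDTH - ROI_WIDTH) / 2)
    #define ROW_OFFSET  ((CAM_HEIGHT - ROI_HEIGHT) / 2)
    
    static uint8_t  line_buf[CAM_WIDTH];             /* DMA 搬運的一整行像素 */
    static uint8_t  img_buf[ROI_HEIGHT][ROI_WIDTH];  /* 只保存中心感興趣區域 */
    static uint16_t roi_line = 0;
    
    /* 假設在每行 DMA 傳輸完成中斷中調用,line_num 為當前行號 */
    void on_line_received(uint16_t line_num)
    {
      if (line_num >= ROW_OFFSET && line_num < ROW_OFFSET + ROI_HEIGHT)
      {
        /* 丟棄行首行尾,只拷貝中間 ROI_WIDTH 個像素 */
        memcpy(img_buf[roi_line++], &line_buf[COL_OFFSET], ROI_WIDTH);
      }
      if (roi_line >= ROI_HEIGHT)
      {
        roi_line = 0; /* 一幀 ROI 採集完成 */
      }
    }
    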

    那麼問題來了:為什麼不直接配置攝像頭的屬性,來實現只讀取圖像的中間部分呢?全部讀取出來然後在arm的內存中裁剪、丟棄不要的像素,第一浪費了讀取時間,第二浪費了讀取的空間。更優的做法是直接配置攝像頭sensor,使用sensor的裁剪功能只輸出需要的像素區域。

     

    3、圖像裁剪–使用STM32 crop功能裁剪

    STM32F4系列的DCMI接口支持裁剪功能,對攝像頭輸出的像素點進行截取,不需要的像素部分不被DCMI傳入內存,從硬件接口一側就丟棄了。

    HAL_DCMI_EnableCrop(&hdcmi);
    HAL_DCMI_ConfigCrop(&hdcmi, CAM_ROW_OFFSET, CAM_COL_OFFSET, IMG_ROW-1, IMG_COL-1);

    裁剪的本質如下所述,從接收到的數據里選擇需要的矩形區域。所以STM32 DCMI裁剪功能可以完成節約內存,只選取需要的圖像存入內存的作用。

    此方法相比於一次讀一行,然後丟棄首尾部分后把需要的區域圖像像素存入buffer后再讀下一行,避免了時序錯誤,代碼簡潔了,DCMI硬件計數丟掉不要的像素,也提高了程序可靠性、可讀性。

    成也蕭何敗也蕭何,如上面所述,STM32的crop完成了選取特定區域圖像的功能,那麼也要付出代價,它是從接收到的圖像數據里進行選擇的,這意味着那些不需要的數據依然會傳輸到MCU一側,只不過MCU的DCMI對數據進行計數是忽略了它而已,那麼問題就來了,哪些不需要的數據的傳輸會帶來什麼問題呢?

    有圖為證,下圖是使用了STM32 crop裁剪的時序圖,通道1啟動採集IO置高,frame中斷里拉低,由於使用dma傳輸,那麼被crop裁剪后dma計數的數據量變少,所以DCMI frame中斷能在行數據傳輸完成前到達,通道1高電平部分就代表一有效分辨率的幀的採集時間。通道2 曝光信號管腳,通道3是行掃描信號。其中通道1下降沿到通道3下降沿4.5ms。代表單片機已經收到crop指定尺寸的圖像,採集有效區域(crop區域)的圖像完成,但是line信號沒有結束還有很多行沒傳輸,即CMOS和DCMI接口要傳輸752*480圖像還沒完成。

     舉例說明,如果使用752*480分辨率採集圖像,你只取中間的360*360視野,有效分辨率是360*360,但是總線上的數據依然是752*480,所以幀率無法提高,多餘的數據按說就不應該傳輸出來,如何破解,問題追到這裏,STM32芯片已經無能為力了,接下來需要在CMOS一側發力了。

     

    4、圖像裁剪–配置CMOS寄存器裁剪

    下圖是MT9V034 攝像頭芯片的寄存器手冊,Reg1–4配置CMOS的行列起點和寬度高度。

    修改寄存器后,攝像頭CMOS就不再向外傳輸多餘的數據,被裁剪丟棄的數據也不會反映在接口上,所以STM32 DCMI接收到的數據都是需要保留的有效區數據,極大地減少了數據輸出,提高了傳輸效率。本人也在STM32F4芯片上,實現了220*220分辨率120幀的連續採集。
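
    配置窗口寄存器的過程可以用下面的示意代碼表示(寄存器地址請以MT9V034數據手冊為準,寫寄存器函數是假設的SCCB/I2C接口):

    #include <stdint.h>
    
    /* MT9V034 窗口寄存器(地址以數據手冊為準) */
    #define REG_COL_START      0x01  /* 列起點 */
    #define REG_ROW_START      0x02  /* 行起點 */
    #define REG_WINDOW_HEIGHT  0x03  /* 窗口高度 */
    #define REG_WINDOW_WIDTH   0x04  /* 窗口寬度 */
    
    /* 假設的寄存器寫函數,底層通過SCCB/I2C實現 */
    extern void mt9v034_write_reg(uint8_t reg, uint16_t value);
    
    /* 只輸出中心 width*height 的窗口,CMOS不再傳輸窗口外的數據 */
    void mt9v034_set_window(uint16_t width, uint16_t height)
    {
      mt9v034_write_reg(REG_COL_START, (752 - width) / 2);
      mt9v034_write_reg(REG_ROW_START, (480 - height) / 2);
      mt9v034_write_reg(REG_WINDOW_WIDTH, width);
      mt9v034_write_reg(REG_WINDOW_HEIGHT, height);
    }
    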

    下面是時序圖,通道1高電平代表開始採集和一幀結束,不同於使用STM32 的crop裁剪,使用CMOS寄存器裁剪有效窗口,使得幀結束時行信號也同時結束,後續沒有任何需要傳輸的行數據。

     

    5、一幀數據一次性傳輸

    一幀數據一次全部讀入到MCU的方式,其實是最簡單的驅動編寫方式,缺點就是太占內存,但是對於沒有壓縮功能的cmos芯片來說,一般都無力實現。對部分有jpg壓縮功能的cmos芯片而言,比如OV2640可以使用這種方式,一次性讀出一幀圖像。

    __align(4) u32 jpeg_buf[jpeg_buf_size];    //JPEG buffer
    //JPEG 格式
    const u16 jpeg_img_size_tbl[][2]=
    {
        176,144,    //QCIF
        160,120,    //QQVGA
        352,288,    //CIF
        320,240,    //QVGA
        640,480,    //VGA
        800,600,    //SVGA
        1024,768,    //XGA
        1280,1024,    //SXGA
        1600,1200,    //UXGA
    }; 

    //DCMI 接收數據
    void DCMI_IRQHandler(void)
    {
      if(DCMI_GetITStatus(DCMI_IT_FRAME)==SET)// 一幀數據
      {
        jpeg_data_process();  
        DCMI_ClearITPendingBit(DCMI_IT_FRAME); 
      }
    }

     

    本站聲明:網站內容來源於博客園,如有侵權,請聯繫我們,我們將及時處理


  • 這一次搞懂Spring代理創建及AOP鏈式調用過程

    這一次搞懂Spring代理創建及AOP鏈式調用過程


    目錄

    • 前言
    • 正文
      • 基本概念
      • 代理對象的創建
      • 小結
      • AOP鏈式調用
      • AOP擴展知識
        • 一、自定義全局攔截器Interceptor
        • 二、循環依賴三級緩存存在的必要性
        • 三、如何在Bean創建之前提前創建代理對象
    • 總結

    前言

    AOP,也就是面向切面編程,它可以將公共的代碼抽離出來,動態的織入到目標類、目標方法中,大大提高我們編程的效率,也使程序變得更加優雅。如事務、操作日誌等都可以使用AOP實現。這種織入可以是在運行期動態生成代理對象實現,也可以在編譯期類加載時期靜態織入到代碼中。而Spring正是通過第一種方法實現,且在代理類的生成上也有兩種方式:JDK Proxy和CGLIB,默認當類實現了接口時使用前者,否則使用後者;另外Spring AOP只能實現對方法的增強。

    正文

    基本概念

    AOP的術語很多,雖然不清楚術語我們也能很熟練地使用AOP,但是要理解分析源碼,術語就需要深刻體會其含義。

    • 增強(Advice):就是我們想要額外增加的功能
    • 目標對象(Target):就是我們想要增強的目標類,如果沒有AOP,我們需要在每個目標對象中實現日誌、事務管理等非業務邏輯
    • 連接點(JoinPoint):程序執行時的特定時機,如方法執行前、后以及拋出異常后等等。
    • 切點(Pointcut):連接點的導航,我們如何找到目標對象呢?切點的作用就在於此,在Spring中就是匹配表達式。
    • 引介(Introduction):引介是一種特殊的增強,它為類添加一些屬性和方法。這樣,即使一個業務類原本沒有實現某個接口,通過AOP的引介功能,我們可以動態地為該業務類添加接口的實現邏輯,讓業務類成為這個接口的實現類。
    • 織入(Weaving):即如何將增強添加到目標對象的連接點上,有動態(運行期生成代理)、靜態(編譯期、類加載時期)兩種方式。
    • 代理(Proxy):目標對象被織入增強后,就會產生一個代理對象,該對象可能是和原對象實現了同樣的一個接口(JDK),也可能是原對象的子類(CGLIB)。
    • 切面(Aspect、Advisor):切面由切點和增強組成,包含了這兩者的定義。
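
    結合上面的術語,下面給出一個典型的切面類示例(類名、包名與切點表達式均為示意):其中@Pointcut定義切點,@Before、@Around標註的方法就是增強,二者共同構成一個切面。

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.aspectj.lang.annotation.Pointcut;
    import org.springframework.stereotype.Component;
    
    @Aspect
    @Component
    public class LogAspect {
    
    	// 切點:匹配 service 包下所有方法(表達式僅作示意)
    	@Pointcut("execution(* com.example.service..*.*(..))")
    	public void servicePointcut() {}
    
    	// 前置增強:在目標方法執行前記錄日誌
    	@Before("servicePointcut()")
    	public void before(JoinPoint jp) {
    		System.out.println("before " + jp.getSignature());
    	}
    
    	// 環繞增強:可以完全控制目標方法的執行
    	@Around("servicePointcut()")
    	public Object around(ProceedingJoinPoint pjp) throws Throwable {
    		long start = System.currentTimeMillis();
    		try {
    			return pjp.proceed();
    		} finally {
    			System.out.println("cost " + (System.currentTimeMillis() - start) + "ms");
    		}
    	}
    }
    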

    代理對象的創建

    在熟悉了AOP術語后,下面就來看看Spring是如何創建代理對象的。是否還記得上一篇提到的AOP的入口呢?在AbstractAutowireCapableBeanFactory類的applyBeanPostProcessorsAfterInitialization方法中循環調用了BeanPostProcessor的postProcessAfterInitialization方法,其中一個就是我們創建代理對象的入口。這裏是Bean實例化完成后去創建代理對象,理所當然應該這樣;但實際上在Bean實例化之前還調用了一個resolveBeforeInstantiation方法,在那裏我們也是有機會提前創建代理對象的,這一點放到最後來分析。先來看主入口,進入到AbstractAutoProxyCreator類中:

    	public Object postProcessAfterInitialization(@Nullable Object bean, String beanName) {
    		if (bean != null) {
    			Object cacheKey = getCacheKey(bean.getClass(), beanName);
    			if (!this.earlyProxyReferences.contains(cacheKey)) {
    				return wrapIfNecessary(bean, beanName, cacheKey);
    			}
    		}
    		return bean;
    	}
    
    	protected Object wrapIfNecessary(Object bean, String beanName, Object cacheKey) {
    		//創建當前bean的代理,如果這個bean有advice的話,重點看
    		// Create proxy if we have advice.
    		Object[] specificInterceptors = getAdvicesAndAdvisorsForBean(bean.getClass(), beanName, null);
    		//如果有切面,則生成該bean的代理
    		if (specificInterceptors != DO_NOT_PROXY) {
    			this.advisedBeans.put(cacheKey, Boolean.TRUE);
    			//把被代理對象bean實例封裝到SingletonTargetSource對象中
    			Object proxy = createProxy(
    					bean.getClass(), beanName, specificInterceptors, new SingletonTargetSource(bean));
    			this.proxyTypes.put(cacheKey, proxy.getClass());
    			return proxy;
    		}
    
    		this.advisedBeans.put(cacheKey, Boolean.FALSE);
    		return bean;
    	}
    

    先從緩存中拿,沒有則調用wrapIfNecessary方法創建。在這個方法裏面主要看兩個地方:getAdvicesAndAdvisorsForBean和createProxy。簡單一句話概括就是先掃描后創建,問題是掃描什麼呢?你可以先結合上面的概念思考下,換你會怎麼做。進入到子類AbstractAdvisorAutoProxyCreator的getAdvicesAndAdvisorsForBean方法中:

    	protected Object[] getAdvicesAndAdvisorsForBean(
    			Class<?> beanClass, String beanName, @Nullable TargetSource targetSource) {
    
    		//找到合格的切面
    		List<Advisor> advisors = findEligibleAdvisors(beanClass, beanName);
    		if (advisors.isEmpty()) {
    			return DO_NOT_PROXY;
    		}
    		return advisors.toArray();
    	}
    
    	protected List<Advisor> findEligibleAdvisors(Class<?> beanClass, String beanName) {
    		//找到候選的切面,其實就是一個尋找有@Aspectj註解的過程,把工程中所有有這個註解的類封裝成Advisor返回
    		List<Advisor> candidateAdvisors = findCandidateAdvisors();
    
    		//判斷候選的切面是否作用在當前beanClass上面,就是一個匹配過程。現在就是一個匹配
    		List<Advisor> eligibleAdvisors = findAdvisorsThatCanApply(candidateAdvisors, beanClass, beanName);
    		extendAdvisors(eligibleAdvisors);
    		if (!eligibleAdvisors.isEmpty()) {
    			//對有@Order、@Priority註解的切面進行排序
    			eligibleAdvisors = sortAdvisors(eligibleAdvisors);
    		}
    		return eligibleAdvisors;
    	}
    

    findEligibleAdvisors方法中可以看到有兩個步驟,第一先找到所有的切面,即掃描所有帶有@Aspect註解的類,並將其中的切點(表達式)增強封裝為切面,掃描完成后,自然是要判斷哪些切面能夠連接到當前Bean實例上。下面一步步來分析,首先是掃描過程,進入到AnnotationAwareAspectJAutoProxyCreator類中:

    	protected List<Advisor> findCandidateAdvisors() {
    		// 先通過父類AbstractAdvisorAutoProxyCreator掃描,這裏不重要
    		List<Advisor> advisors = super.findCandidateAdvisors();
    		// 主要看這裏
    		if (this.aspectJAdvisorsBuilder != null) {
    			advisors.addAll(this.aspectJAdvisorsBuilder.buildAspectJAdvisors());
    		}
    		return advisors;
    	}
    

    這裏委託給了BeanFactoryAspectJAdvisorsBuilderAdapter類,並調用其父類的buildAspectJAdvisors方法創建切面對象:

    	public List<Advisor> buildAspectJAdvisors() {
    		List<String> aspectNames = this.aspectBeanNames;
    
    		if (aspectNames == null) {
    			synchronized (this) {
    				aspectNames = this.aspectBeanNames;
    				if (aspectNames == null) {
    					List<Advisor> advisors = new ArrayList<>();
    					aspectNames = new ArrayList<>();
    					//獲取spring容器中的所有bean的名稱BeanName
    					String[] beanNames = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(
    							this.beanFactory, Object.class, true, false);
    					for (String beanName : beanNames) {
    						if (!isEligibleBean(beanName)) {
    							continue;
    						}
    						Class<?> beanType = this.beanFactory.getType(beanName);
    						if (beanType == null) {
    							continue;
    						}
    						//判斷類上是否有@Aspect註解
    						if (this.advisorFactory.isAspect(beanType)) {
    							aspectNames.add(beanName);
    							AspectMetadata amd = new AspectMetadata(beanType, beanName);
    							if (amd.getAjType().getPerClause().getKind() == PerClauseKind.SINGLETON) {
    								// 當@Aspect的value屬性為""時才會進入到這裏
    								// 創建獲取有@Aspect註解類的實例工廠,負責獲取有@Aspect註解類的實例
    								MetadataAwareAspectInstanceFactory factory =
    										new BeanFactoryAspectInstanceFactory(this.beanFactory, beanName);
    
    								//創建切面advisor對象
    								List<Advisor> classAdvisors = this.advisorFactory.getAdvisors(factory);
    								if (this.beanFactory.isSingleton(beanName)) {
    									this.advisorsCache.put(beanName, classAdvisors);
    								}
    								else {
    									this.aspectFactoryCache.put(beanName, factory);
    								}
    								advisors.addAll(classAdvisors);
    							}
    							else {
    								MetadataAwareAspectInstanceFactory factory =
    										new PrototypeAspectInstanceFactory(this.beanFactory, beanName);
    								this.aspectFactoryCache.put(beanName, factory);
    								advisors.addAll(this.advisorFactory.getAdvisors(factory));
    							}
    						}
    					}
    					this.aspectBeanNames = aspectNames;
    					return advisors;
    				}
    			}
    		}
    		return advisors;
    	}
    

    這個方法裏面首先從IOC中拿到所有Bean的名稱,並循環判斷該類上是否帶有@Aspect註解,如果有則將BeanName和Bean的Class類型封裝到BeanFactoryAspectInstanceFactory中,並調用ReflectiveAspectJAdvisorFactory.getAdvisors創建切面對象:

    	public List<Advisor> getAdvisors(MetadataAwareAspectInstanceFactory aspectInstanceFactory) {
    		//從工廠中獲取有@Aspect註解的類Class
    		Class<?> aspectClass = aspectInstanceFactory.getAspectMetadata().getAspectClass();
    		//從工廠中獲取有@Aspect註解的類的名稱
    		String aspectName = aspectInstanceFactory.getAspectMetadata().getAspectName();
    		validate(aspectClass);
    
    		// 創建工廠的裝飾類,獲取實例只會獲取一次
    		MetadataAwareAspectInstanceFactory lazySingletonAspectInstanceFactory =
    				new LazySingletonAspectInstanceFactoryDecorator(aspectInstanceFactory);
    
    		List<Advisor> advisors = new ArrayList<>();
    
    		//這裏循環沒有@Pointcut註解的方法
    		for (Method method : getAdvisorMethods(aspectClass)) {
    
    			//非常重要重點看看
    			Advisor advisor = getAdvisor(method, lazySingletonAspectInstanceFactory, advisors.size(), aspectName);
    			if (advisor != null) {
    				advisors.add(advisor);
    			}
    		}
    
    		if (!advisors.isEmpty() && lazySingletonAspectInstanceFactory.getAspectMetadata().isLazilyInstantiated()) {
    			Advisor instantiationAdvisor = new SyntheticInstantiationAdvisor(lazySingletonAspectInstanceFactory);
    			advisors.add(0, instantiationAdvisor);
    		}
    
    		//判斷屬性上是否有引介註解,這裏可以不看
    		for (Field field : aspectClass.getDeclaredFields()) {
    			//判斷屬性上是否有DeclareParents註解,如果有返回切面
    			Advisor advisor = getDeclareParentsAdvisor(field);
    			if (advisor != null) {
    				advisors.add(advisor);
    			}
    		}
    
    		return advisors;
    	}
    
    	private List<Method> getAdvisorMethods(Class<?> aspectClass) {
    		final List<Method> methods = new ArrayList<>();
    		ReflectionUtils.doWithMethods(aspectClass, method -> {
    			// Exclude pointcuts
    			if (AnnotationUtils.getAnnotation(method, Pointcut.class) == null) {
    				methods.add(method);
    			}
    		});
    		methods.sort(METHOD_COMPARATOR);
    		return methods;
    	}
    

    根據Aspect的Class拿到所有不帶@Pointcut註解的方法對象(為什麼是不帶@Pointcut註解的方法?仔細想想不難理解),另外要注意這裏對method進行了排序,看看這個METHOD_COMPARATOR比較器:

    	private static final Comparator<Method> METHOD_COMPARATOR;
    
    	static {
    		Comparator<Method> adviceKindComparator = new ConvertingComparator<>(
    				new InstanceComparator<>(
    						Around.class, Before.class, After.class, AfterReturning.class, AfterThrowing.class),
    				(Converter<Method, Annotation>) method -> {
    					AspectJAnnotation<?> annotation =
    						AbstractAspectJAdvisorFactory.findAspectJAnnotationOnMethod(method);
    					return (annotation != null ? annotation.getAnnotation() : null);
    				});
    		Comparator<Method> methodNameComparator = new ConvertingComparator<>(Method::getName);
    		METHOD_COMPARATOR = adviceKindComparator.thenComparing(methodNameComparator);
    	}
    

    關注InstanceComparator構造函數參數,記住它們的順序,這就是AOP鏈式調用中同一個@Aspect類中Advice的執行順序。接着往下看,在getAdvisors方法中循環獲取到的methods,分別調用getAdvisor方法,也就是根據方法逐個去創建切面:

    	public Advisor getAdvisor(Method candidateAdviceMethod, MetadataAwareAspectInstanceFactory aspectInstanceFactory,
    			int declarationOrderInAspect, String aspectName) {
    
    		validate(aspectInstanceFactory.getAspectMetadata().getAspectClass());
    
    		//獲取pointCut對象,最重要的是從註解中獲取表達式
    		AspectJExpressionPointcut expressionPointcut = getPointcut(
    				candidateAdviceMethod, aspectInstanceFactory.getAspectMetadata().getAspectClass());
    		if (expressionPointcut == null) {
    			return null;
    		}
    
    		//創建Advisor切面類,這才是真正的切面類,一個切面類裏面肯定要有1、pointCut 2、advice
    		//這裏pointCut是expressionPointcut, advice 增強方法是 candidateAdviceMethod
    		return new InstantiationModelAwarePointcutAdvisorImpl(expressionPointcut, candidateAdviceMethod,
    				this, aspectInstanceFactory, declarationOrderInAspect, aspectName);
    	}
    
    	private static final Class<?>[] ASPECTJ_ANNOTATION_CLASSES = new Class<?>[] {
    			Pointcut.class, Around.class, Before.class, After.class, AfterReturning.class, AfterThrowing.class};
    			
    	private AspectJExpressionPointcut getPointcut(Method candidateAdviceMethod, Class<?> candidateAspectClass) {
    		//從候選的增強方法裏面 candidateAdviceMethod  找有有註解
    		//Pointcut.class, Around.class, Before.class, After.class, AfterReturning.class, AfterThrowing.class
    		//並把註解信息封裝成AspectJAnnotation對象
    		AspectJAnnotation<?> aspectJAnnotation =
    				AbstractAspectJAdvisorFactory.findAspectJAnnotationOnMethod(candidateAdviceMethod);
    		if (aspectJAnnotation == null) {
    			return null;
    		}
    
    		//創建一個PointCut類,並且把前面從註解裏面解析的表達式設置進去
    		AspectJExpressionPointcut ajexp =
    				new AspectJExpressionPointcut(candidateAspectClass, new String[0], new Class<?>[0]);
    		ajexp.setExpression(aspectJAnnotation.getPointcutExpression());
    		if (this.beanFactory != null) {
    			ajexp.setBeanFactory(this.beanFactory);
    		}
    		return ajexp;
    	}
    

    之前就說過切面的定義,是切點和增強的組合,所以這裏首先通過getPointcut獲取到註解對象,然後new了一個Pointcut對象,並將表達式設置進去。然後在getAdvisor方法中最後new了一個InstantiationModelAwarePointcutAdvisorImpl對象:

    	public InstantiationModelAwarePointcutAdvisorImpl(AspectJExpressionPointcut declaredPointcut,
    			Method aspectJAdviceMethod, AspectJAdvisorFactory aspectJAdvisorFactory,
    			MetadataAwareAspectInstanceFactory aspectInstanceFactory, int declarationOrder, String aspectName) {
    
    		this.declaredPointcut = declaredPointcut;
    		this.declaringClass = aspectJAdviceMethod.getDeclaringClass();
    		this.methodName = aspectJAdviceMethod.getName();
    		this.parameterTypes = aspectJAdviceMethod.getParameterTypes();
    		this.aspectJAdviceMethod = aspectJAdviceMethod;
    		this.aspectJAdvisorFactory = aspectJAdvisorFactory;
    		this.aspectInstanceFactory = aspectInstanceFactory;
    		this.declarationOrder = declarationOrder;
    		this.aspectName = aspectName;
    
    		if (aspectInstanceFactory.getAspectMetadata().isLazilyInstantiated()) {
    			// Static part of the pointcut is a lazy type.
    			Pointcut preInstantiationPointcut = Pointcuts.union(
    					aspectInstanceFactory.getAspectMetadata().getPerClausePointcut(), this.declaredPointcut);
    
    			// Make it dynamic: must mutate from pre-instantiation to post-instantiation state.
    			// If it's not a dynamic pointcut, it may be optimized out
    			// by the Spring AOP infrastructure after the first evaluation.
    			this.pointcut = new PerTargetInstantiationModelPointcut(
    					this.declaredPointcut, preInstantiationPointcut, aspectInstanceFactory);
    			this.lazy = true;
    		}
    		else {
    			// A singleton aspect.
    			this.pointcut = this.declaredPointcut;
    			this.lazy = false;
    			//這個方法重點看看,創建advice對象
    			this.instantiatedAdvice = instantiateAdvice(this.declaredPointcut);
    		}
    	}
    

    這個就是我們的切面類,在其構造方法的最後通過instantiateAdvice創建了Advice對象。注意這裏傳進來的declarationOrder參數,它就是循環method時的序號,其作用就是賦值給這裏的declarationOrder屬性以及Advice的declarationOrder屬性,在後面排序時就會通過這個序號來比較,因此Advice的執行順序是固定的,至於為什麼要固定,後面分析完AOP鏈式調用過程自然就明白了。

    	public Advice getAdvice(Method candidateAdviceMethod, AspectJExpressionPointcut expressionPointcut,
    			MetadataAwareAspectInstanceFactory aspectInstanceFactory, int declarationOrder, String aspectName) {
    
    		//獲取有@Aspect註解的類
    		Class<?> candidateAspectClass = aspectInstanceFactory.getAspectMetadata().getAspectClass();
    		validate(candidateAspectClass);
    
    		//找到candidateAdviceMethod方法上面的註解,並且包裝成AspectJAnnotation對象,這個對象中就有註解類型
    		AspectJAnnotation<?> aspectJAnnotation =
    				AbstractAspectJAdvisorFactory.findAspectJAnnotationOnMethod(candidateAdviceMethod);
    		if (aspectJAnnotation == null) {
    			return null;
    		}
    		
    		AbstractAspectJAdvice springAdvice;
    
    		//根據不同的註解類型創建不同的advice類實例
    		switch (aspectJAnnotation.getAnnotationType()) {
    			case AtPointcut:
    				if (logger.isDebugEnabled()) {
    					logger.debug("Processing pointcut '" + candidateAdviceMethod.getName() + "'");
    				}
    				return null;
    			case AtAround:
    				//實現了MethodInterceptor接口
    				springAdvice = new AspectJAroundAdvice(
    						candidateAdviceMethod, expressionPointcut, aspectInstanceFactory);
    				break;
    			case AtBefore:
    				//實現了MethodBeforeAdvice接口,沒有實現MethodInterceptor接口
    				springAdvice = new AspectJMethodBeforeAdvice(
    						candidateAdviceMethod, expressionPointcut, aspectInstanceFactory);
    				break;
    			case AtAfter:
    				//實現了MethodInterceptor接口
    				springAdvice = new AspectJAfterAdvice(
    						candidateAdviceMethod, expressionPointcut, aspectInstanceFactory);
    				break;
    			case AtAfterReturning:
    				//實現了AfterReturningAdvice接口,沒有實現MethodInterceptor接口
    				springAdvice = new AspectJAfterReturningAdvice(
    						candidateAdviceMethod, expressionPointcut, aspectInstanceFactory);
    				AfterReturning afterReturningAnnotation = (AfterReturning) aspectJAnnotation.getAnnotation();
    				if (StringUtils.hasText(afterReturningAnnotation.returning())) {
    					springAdvice.setReturningName(afterReturningAnnotation.returning());
    				}
    				break;
    			case AtAfterThrowing:
    				//實現了MethodInterceptor接口
    				springAdvice = new AspectJAfterThrowingAdvice(
    						candidateAdviceMethod, expressionPointcut, aspectInstanceFactory);
    				AfterThrowing afterThrowingAnnotation = (AfterThrowing) aspectJAnnotation.getAnnotation();
    				if (StringUtils.hasText(afterThrowingAnnotation.throwing())) {
    					springAdvice.setThrowingName(afterThrowingAnnotation.throwing());
    				}
    				break;
    			default:
    				throw new UnsupportedOperationException(
    						"Unsupported advice type on method: " + candidateAdviceMethod);
    		}
    
    		// Now to configure the advice...
    		springAdvice.setAspectName(aspectName);
    		springAdvice.setDeclarationOrder(declarationOrder);
    		String[] argNames = this.parameterNameDiscoverer.getParameterNames(candidateAdviceMethod);
    		if (argNames != null) {
    			springAdvice.setArgumentNamesFromStringArray(argNames);
    		}
    
    		//計算argNames和類型的對應關係
    		springAdvice.calculateArgumentBindings();
    
    		return springAdvice;
    	}
    

    這裏邏輯很清晰,就是拿到方法上的註解類型,根據類型創建不同的增強Advice對象:AspectJAroundAdvice、AspectJMethodBeforeAdvice、AspectJAfterAdvice、AspectJAfterReturningAdvice、AspectJAfterThrowingAdvice。完成之後通過calculateArgumentBindings方法進行參數綁定,感興趣的可自行研究。這裏主要看看幾個Advice的繼承體系:

    可以看到有兩個Advice是沒有實現MethodInterceptor接口的:AspectJMethodBeforeAdvice和AspectJAfterReturningAdvice。而MethodInterceptor有一個invoke方法,這個方法就是鏈式調用的核心方法,但那兩個沒有實現該方法的Advice怎麼處理呢?稍後會分析。
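
    MethodInterceptor接口定義在aopalliance包中,只有一個invoke方法(節選):

    // org.aopalliance.intercept.MethodInterceptor
    public interface MethodInterceptor extends Interceptor {
    
    	// invoke 是鏈式調用的核心:在這裏執行增強邏輯,
    	// 並通過 invocation.proceed() 把調用傳遞給鏈中的下一個攔截器
    	Object invoke(MethodInvocation invocation) throws Throwable;
    }
    
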
    到這裏切面對象就創建完成了,接下來就是判斷當前創建的Bean實例是否和這些切面匹配以及對切面排序。匹配過程比較複雜,對理解主流程也沒什麼幫助,所以這裏就不展開分析,感興趣的自行分析(AbstractAdvisorAutoProxyCreator.findAdvisorsThatCanApply())。下面看看排序的過程,回到AbstractAdvisorAutoProxyCreator.findEligibleAdvisors方法:

    	protected List<Advisor> findEligibleAdvisors(Class<?> beanClass, String beanName) {
    		//找到候選的切面,其實就是一個尋找有@Aspectj註解的過程,把工程中所有有這個註解的類封裝成Advisor返回
    		List<Advisor> candidateAdvisors = findCandidateAdvisors();
    
    		//判斷候選的切面是否作用在當前beanClass上面,就是一個匹配過程。。現在就是一個匹配
    		List<Advisor> eligibleAdvisors = findAdvisorsThatCanApply(candidateAdvisors, beanClass, beanName);
    		extendAdvisors(eligibleAdvisors);
    		if (!eligibleAdvisors.isEmpty()) {
    		//對有@Order、@Priority註解的切面進行排序
    			eligibleAdvisors = sortAdvisors(eligibleAdvisors);
    		}
    		return eligibleAdvisors;
    	}
    

    sortAdvisors方法就是排序,但這個方法有兩個實現:當前類AbstractAdvisorAutoProxyCreator和子類AspectJAwareAdvisorAutoProxyCreator,應該走哪個呢?

    從繼承關係上可以肯定進入的是AspectJAwareAdvisorAutoProxyCreator類,因為AnnotationAwareAspectJAutoProxyCreator的父類就是它。

    	protected List<Advisor> sortAdvisors(List<Advisor> advisors) {
    		List<PartiallyComparableAdvisorHolder> partiallyComparableAdvisors = new ArrayList<>(advisors.size());
    		for (Advisor element : advisors) {
    			partiallyComparableAdvisors.add(
    					new PartiallyComparableAdvisorHolder(element, DEFAULT_PRECEDENCE_COMPARATOR));
    		}
    		List<PartiallyComparableAdvisorHolder> sorted = PartialOrder.sort(partiallyComparableAdvisors);
    		if (sorted != null) {
    			List<Advisor> result = new ArrayList<>(advisors.size());
    			for (PartiallyComparableAdvisorHolder pcAdvisor : sorted) {
    				result.add(pcAdvisor.getAdvisor());
    			}
    			return result;
    		}
    		else {
    			return super.sortAdvisors(advisors);
    		}
    	}
    

    這裏排序主要是委託給PartialOrder進行的,而在此之前將所有的切面都封裝成了PartiallyComparableAdvisorHolder對象,注意傳入的DEFAULT_PRECEDENCE_COMPARATOR參數,這個就是比較器對象:

    	private static final Comparator<Advisor> DEFAULT_PRECEDENCE_COMPARATOR = new AspectJPrecedenceComparator();
    

    所以我們直接看這個比較器的compare方法:

    	public int compare(Advisor o1, Advisor o2) {
    		int advisorPrecedence = this.advisorComparator.compare(o1, o2);
    		if (advisorPrecedence == SAME_PRECEDENCE && declaredInSameAspect(o1, o2)) {
    			advisorPrecedence = comparePrecedenceWithinAspect(o1, o2);
    		}
    		return advisorPrecedence;
    	}
    
    	private final Comparator<? super Advisor> advisorComparator;
    	public AspectJPrecedenceComparator() {
    		this.advisorComparator = AnnotationAwareOrderComparator.INSTANCE;
    	}
    

    第一步先通過AnnotationAwareOrderComparator去比較,點進去看可以發現,它是對實現了PriorityOrdered、Ordered接口以及標註了@Priority、@Order註解的、不在同一個@Aspect類中的切面進行排序,這和之前分析BeanFactoryPostProcessor時是一樣的原理。而對同一個@Aspect類中的切面排序,主要看comparePrecedenceWithinAspect方法:

    	private int comparePrecedenceWithinAspect(Advisor advisor1, Advisor advisor2) {
    		boolean oneOrOtherIsAfterAdvice =
    				(AspectJAopUtils.isAfterAdvice(advisor1) || AspectJAopUtils.isAfterAdvice(advisor2));
    		int adviceDeclarationOrderDelta = getAspectDeclarationOrder(advisor1) - getAspectDeclarationOrder(advisor2);
    
    		if (oneOrOtherIsAfterAdvice) {
    			// the advice declared last has higher precedence
    			if (adviceDeclarationOrderDelta < 0) {
    				// advice1 was declared before advice2
    				// so advice1 has lower precedence
    				return LOWER_PRECEDENCE;
    			}
    			else if (adviceDeclarationOrderDelta == 0) {
    				return SAME_PRECEDENCE;
    			}
    			else {
    				return HIGHER_PRECEDENCE;
    			}
    		}
    		else {
    			// the advice declared first has higher precedence
    			if (adviceDeclarationOrderDelta < 0) {
    				// advice1 was declared before advice2
    				// so advice1 has higher precedence
    				return HIGHER_PRECEDENCE;
    			}
    			else if (adviceDeclarationOrderDelta == 0) {
    				return SAME_PRECEDENCE;
    			}
    			else {
    				return LOWER_PRECEDENCE;
    			}
    		}
    	}
    
    	private int getAspectDeclarationOrder(Advisor anAdvisor) {
    		AspectJPrecedenceInformation precedenceInfo =
    			AspectJAopUtils.getAspectJPrecedenceInformationFor(anAdvisor);
    		if (precedenceInfo != null) {
    			return precedenceInfo.getDeclarationOrder();
    		}
    		else {
    			return 0;
    		}
    	}
    

    這裏就是通過precedenceInfo.getDeclarationOrder拿到在創建InstantiationModelAwarePointcutAdvisorImpl對象時設置的declarationOrder屬性,這就驗證了之前的說法(實際上這裏排序過程非常複雜,不是簡單的按照這個屬性進行排序)。
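    而對於不同@Aspect類之間的排序,可以用一個假設的小例子來感受(類名僅為示意,兩個切面在實際工程中各自位於獨立的源文件):@Order的值越小優先級越高,對應的前置通知越先執行:

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.core.annotation.Order;
    import org.springframework.stereotype.Component;

    @Aspect
    @Order(1)   // 值越小優先級越高,它的@Before會先於SecondAspect執行
    @Component
    class FirstAspect {
        @Before("execution(* com.example.service..*(..))")
        public void before() { System.out.println("FirstAspect before"); }
    }

    @Aspect
    @Order(2)
    @Component
    class SecondAspect {
        @Before("execution(* com.example.service..*(..))")
        public void before() { System.out.println("SecondAspect before"); }
    }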
    當上面的一切都進行完成后,就該創建代理對象了,回到AbstractAutoProxyCreator.wrapIfNecessary,看關鍵部分代碼:

    	//如果有切面,則生成該bean的代理
    	if (specificInterceptors != DO_NOT_PROXY) {
    		this.advisedBeans.put(cacheKey, Boolean.TRUE);
    		//把被代理對象bean實例封裝到SingletonTargetSource對象中
    		Object proxy = createProxy(
    				bean.getClass(), beanName, specificInterceptors, new SingletonTargetSource(bean));
    		this.proxyTypes.put(cacheKey, proxy.getClass());
    		return proxy;
    	}
    

    注意這裏將被代理對象封裝成了一個SingletonTargetSource對象,它是TargetSource的實現類。

    	protected Object createProxy(Class<?> beanClass, @Nullable String beanName,
    			@Nullable Object[] specificInterceptors, TargetSource targetSource) {
    
    		if (this.beanFactory instanceof ConfigurableListableBeanFactory) {
    			AutoProxyUtils.exposeTargetClass((ConfigurableListableBeanFactory) this.beanFactory, beanName, beanClass);
    		}
    
    		//創建代理工廠
    		ProxyFactory proxyFactory = new ProxyFactory();
    		proxyFactory.copyFrom(this);
    
    		if (!proxyFactory.isProxyTargetClass()) {
    			if (shouldProxyTargetClass(beanClass, beanName)) {
    				//proxyTargetClass 是否對類進行代理,而不是對接口進行代理,設置為true時,使用CGLib代理。
    				proxyFactory.setProxyTargetClass(true);
    			}
    			else {
    				evaluateProxyInterfaces(beanClass, proxyFactory);
    			}
    		}
    
    		//把advice類型的增強包裝成advisor切面
    		Advisor[] advisors = buildAdvisors(beanName, specificInterceptors);
    		proxyFactory.addAdvisors(advisors);
    		proxyFactory.setTargetSource(targetSource);
    		customizeProxyFactory(proxyFactory);
    
    		//用來控制代理工廠被配置後,是否還允許修改代理的配置,默認為false
    		proxyFactory.setFrozen(this.freezeProxy);
    		if (advisorsPreFiltered()) {
    			proxyFactory.setPreFiltered(true);
    		}
    
    		//獲取代理實例
    		return proxyFactory.getProxy(getProxyClassLoader());
    	}
    

    這裏通過ProxyFactory對象去創建代理實例,這是工廠模式的體現,但在創建代理對象之前還有幾個準備動作:需要判斷是JDK代理還是CGLIB代理以及通過buildAdvisors方法將擴展的Advice封裝成Advisor切面。準備完成則通過getProxy創建代理對象:

    	public Object getProxy(@Nullable ClassLoader classLoader) {
    		//根據目標對象是否有接口來判斷採用什麼代理方式,cglib代理還是jdk動態代理
    		return createAopProxy().getProxy(classLoader);
    	}
    
    	protected final synchronized AopProxy createAopProxy() {
    		if (!this.active) {
    			activate();
    		}
    		return getAopProxyFactory().createAopProxy(this);
    	}
    
    	public AopProxy createAopProxy(AdvisedSupport config) throws AopConfigException {
    		if (config.isOptimize() || config.isProxyTargetClass() || hasNoUserSuppliedProxyInterfaces(config)) {
    			Class<?> targetClass = config.getTargetClass();
    			if (targetClass == null) {
    				throw new AopConfigException("TargetSource cannot determine target class: " +
    						"Either an interface or a target is required for proxy creation.");
    			}
    			if (targetClass.isInterface() || Proxy.isProxyClass(targetClass)) {
    				return new JdkDynamicAopProxy(config);
    			}
    			return new ObjenesisCglibAopProxy(config);
    		}
    		else {
    			return new JdkDynamicAopProxy(config);
    		}
    	}
    

    首先通過配置拿到對應的代理類:ObjenesisCglibAopProxy和JdkDynamicAopProxy,然後再通過getProxy創建Bean的代理,這裏以JdkDynamicAopProxy為例:

    	public Object getProxy(@Nullable ClassLoader classLoader) {
    		//advised是代理工廠對象
    		Class<?>[] proxiedInterfaces = AopProxyUtils.completeProxiedInterfaces(this.advised, true);
    		findDefinedEqualsAndHashCodeMethods(proxiedInterfaces);
    		return Proxy.newProxyInstance(classLoader, proxiedInterfaces, this);
    	}
    

    這裏的代碼你應該不陌生了,就是JDK的原生API。newProxyInstance方法傳入的InvocationHandler對象是this,因此最終AOP代理的調用就是從該類的invoke方法開始。至此,代理對象的創建就完成了。
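    順帶一提,ProxyFactory也可以脫離自動代理機制手動使用,下面是一個簡單的示意(UserService、UserServiceImpl為假設的業務接口與實現):

    import org.aopalliance.intercept.MethodInterceptor;
    import org.springframework.aop.framework.ProxyFactory;

    interface UserService {
        String queryUser(String name);
    }

    class UserServiceImpl implements UserService {
        public String queryUser(String name) { return "user:" + name; }
    }

    public class ProxyFactoryDemo {
        public static void main(String[] args) {
            ProxyFactory proxyFactory = new ProxyFactory();
            proxyFactory.setTarget(new UserServiceImpl());
            // MethodInterceptor本身就是一種Advice,這裏手動加入攔截器鏈
            proxyFactory.addAdvice((MethodInterceptor) invocation -> {
                System.out.println("before " + invocation.getMethod().getName());
                Object ret = invocation.proceed();
                System.out.println("after " + invocation.getMethod().getName());
                return ret;
            });
            // 目標類實現了接口且未強制proxyTargetClass,默認走JDK動態代理
            UserService proxy = (UserService) proxyFactory.getProxy();
            System.out.println(proxy.queryUser("dark"));
        }
    }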

    小結

    代理對象的創建過程整體來說並不複雜:首先找到所有帶有@Aspect註解的類,並獲取其中沒有@Pointcut註解的方法,循環創建切面;而創建切面需要切點和增強兩個元素,其中切點可簡單理解為我們寫的表達式,增強則是根據@Before、@Around、@After等註解創建的對應Advice類。切面創建好後,則需要循環判斷哪些切面能對當前Bean實例的方法進行增強,並對它們排序,最後通過ProxyFactory創建代理對象。

    AOP鏈式調用

    熟悉JDK動態代理的都知道通過代理對象調用方法時,會進入到InvocationHandler對象的invoke方法,所以我們直接從JdkDynamicAopProxy的這個方法開始:

    	public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    		MethodInvocation invocation;
    		Object oldProxy = null;
    		boolean setProxyContext = false;
    
    		//從代理工廠中拿到TargetSource對象,該對象包裝了被代理實例bean
    		TargetSource targetSource = this.advised.targetSource;
    		Object target = null;
    
    		try {
    			//被代理對象的equals方法和hashCode方法是不能被代理的,不會走切面
    			.......
    			
    			Object retVal;
    
    			// 可以從當前線程中拿到代理對象
    			if (this.advised.exposeProxy) {
    				// Make invocation available if necessary.
    				oldProxy = AopContext.setCurrentProxy(proxy);
    				setProxyContext = true;
    			}
    
    			//這個target就是被代理實例
    			target = targetSource.getTarget();
    			Class<?> targetClass = (target != null ? target.getClass() : null);
    			
    			//從代理工廠中拿攔截器鏈,鏈中的Object是MethodInterceptor類型的對象,其實就是一個advice對象
    			List<Object> chain = this.advised.getInterceptorsAndDynamicInterceptionAdvice(method, targetClass);
    
    			//如果該方法沒有執行鏈,則說明這個方法不需要被攔截,則直接反射調用
    			if (chain.isEmpty()) {
    				Object[] argsToUse = AopProxyUtils.adaptArgumentsIfNecessary(method, args);
    				retVal = AopUtils.invokeJoinpointUsingReflection(target, method, argsToUse);
    			}
    			else {
    				invocation = new ReflectiveMethodInvocation(proxy, target, method, args, targetClass, chain);
    				retVal = invocation.proceed();
    			}
    
    			// Massage return value if necessary.
    			Class<?> returnType = method.getReturnType();
    			if (retVal != null && retVal == target &&
    					returnType != Object.class && returnType.isInstance(proxy) &&
    					!RawTargetAccess.class.isAssignableFrom(method.getDeclaringClass())) {
    				retVal = proxy;
    			}
    			return retVal;
    		}
    		finally {
    			if (target != null && !targetSource.isStatic()) {
    				// Must have come from TargetSource.
    				targetSource.releaseTarget(target);
    			}
    			if (setProxyContext) {
    				// Restore old proxy.
    				AopContext.setCurrentProxy(oldProxy);
    			}
    		}
    	}
    

    這段代碼比較長,我刪掉了不關鍵的地方。首先來看this.advised.exposeProxy這個屬性,這在@EnableAspectJAutoProxy註解中可以配置,當為true時,會將該代理對象設置到當前線程的ThreadLocal對象中,這樣就可以通過AopContext.currentProxy拿到代理對象。這個有什麼用呢?我相信有經驗的Java開發都遇到過這樣一個BUG,在Service實現類中調用本類中的另一個方法時,事務不會生效,這是因為直接通過this調用就不會調用到代理對象的方法,而是原對象的,所以事務切面就沒有生效。因此這種情況下就可以從當前線程的ThreadLocal對象拿到代理對象,不過實際上直接使用@Autowired注入自己本身也可以拿到代理對象。
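    一個簡單的示意如下(OrderService、createOrder等類名與方法名均為假設):開啟exposeProxy後,在同類內部調用時先通過AopContext.currentProxy()拿到代理對象,再調用目標方法,事務等切面才會生效:

    import org.springframework.aop.framework.AopContext;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.EnableAspectJAutoProxy;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Configuration
    @EnableAspectJAutoProxy(exposeProxy = true)   // 將代理對象暴露到當前線程
    class AopConfig {
    }

    @Service
    class OrderService {

        @Transactional
        public void createOrder() {
            // 省略業務邏輯
        }

        public void batchCreate() {
            // this.createOrder() 直接調用不會經過代理,事務切面不生效
            // 通過AopContext拿到當前線程綁定的代理對象再調用,切面才能生效
            ((OrderService) AopContext.currentProxy()).createOrder();
        }
    }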
    接下來就是通過getInterceptorsAndDynamicInterceptionAdvice拿到執行鏈,看看具體做了哪些事情:

    	public List<Object> getInterceptorsAndDynamicInterceptionAdvice(
    			Advised config, Method method, @Nullable Class<?> targetClass) {
    
    		AdvisorAdapterRegistry registry = GlobalAdvisorAdapterRegistry.getInstance();
    		//從代理工廠中獲得該被代理類的所有切面advisor,config就是代理工廠對象
    		Advisor[] advisors = config.getAdvisors();
    		List<Object> interceptorList = new ArrayList<>(advisors.length);
    		Class<?> actualClass = (targetClass != null ? targetClass : method.getDeclaringClass());
    		Boolean hasIntroductions = null;
    
    		for (Advisor advisor : advisors) {
    			//大部分走這裏
    			if (advisor instanceof PointcutAdvisor) {
    				// Add it conditionally.
    				PointcutAdvisor pointcutAdvisor = (PointcutAdvisor) advisor;
    				//如果切面的pointCut和被代理對象是匹配的,說明是切面要攔截的對象
    				if (config.isPreFiltered() || pointcutAdvisor.getPointcut().getClassFilter().matches(actualClass)) {
    					MethodMatcher mm = pointcutAdvisor.getPointcut().getMethodMatcher();
    					boolean match;
    					if (mm instanceof IntroductionAwareMethodMatcher) {
    						if (hasIntroductions == null) {
    							hasIntroductions = hasMatchingIntroductions(advisors, actualClass);
    						}
    						match = ((IntroductionAwareMethodMatcher) mm).matches(method, actualClass, hasIntroductions);
    					}
    					else {
    						//接下來判斷方法是否是切面pointcut需要攔截的方法
    						match = mm.matches(method, actualClass);
    					}
    					//如果類和方法都匹配
    					if (match) {
    
    						//獲取到切面advisor中的advice,並且包裝成MethodInterceptor類型的對象
    						MethodInterceptor[] interceptors = registry.getInterceptors(advisor);
    						if (mm.isRuntime()) {
    							for (MethodInterceptor interceptor : interceptors) {
    								interceptorList.add(new InterceptorAndDynamicMethodMatcher(interceptor, mm));
    							}
    						}
    						else {
    							interceptorList.addAll(Arrays.asList(interceptors));
    						}
    					}
    				}
    			}
    			//如果是引介切面
    			else if (advisor instanceof IntroductionAdvisor) {
    				IntroductionAdvisor ia = (IntroductionAdvisor) advisor;
    				if (config.isPreFiltered() || ia.getClassFilter().matches(actualClass)) {
    					Interceptor[] interceptors = registry.getInterceptors(advisor);
    					interceptorList.addAll(Arrays.asList(interceptors));
    				}
    			}
    			else {
    				Interceptor[] interceptors = registry.getInterceptors(advisor);
    				interceptorList.addAll(Arrays.asList(interceptors));
    			}
    		}
    
    		return interceptorList;
    	}
    

    這也是個長方法,只看關鍵部分。因為之前創建的基本上都是InstantiationModelAwarePointcutAdvisorImpl對象,該類是PointcutAdvisor的實現類,所以會進入第一個if判斷。這裏首先進行匹配,看切點與當前對象以及該對象的哪些方法能匹配上,如果能匹配,則調用getInterceptors獲取執行鏈:

    	private final List<AdvisorAdapter> adapters = new ArrayList<>(3);
    	public DefaultAdvisorAdapterRegistry() {
    		registerAdvisorAdapter(new MethodBeforeAdviceAdapter());
    		registerAdvisorAdapter(new AfterReturningAdviceAdapter());
    		registerAdvisorAdapter(new ThrowsAdviceAdapter());
    	}
    
    	public MethodInterceptor[] getInterceptors(Advisor advisor) throws UnknownAdviceTypeException {
    		List<MethodInterceptor> interceptors = new ArrayList<>(3);
    		Advice advice = advisor.getAdvice();
    		//如果是MethodInterceptor類型的,如:AspectJAroundAdvice
    		//AspectJAfterAdvice
    		//AspectJAfterThrowingAdvice
    		if (advice instanceof MethodInterceptor) {
    			interceptors.add((MethodInterceptor) advice);
    		}
    
    		//處理 AspectJMethodBeforeAdvice  AspectJAfterReturningAdvice
    		for (AdvisorAdapter adapter : this.adapters) {
    			if (adapter.supportsAdvice(advice)) {
    				interceptors.add(adapter.getInterceptor(advisor));
    			}
    		}
    		if (interceptors.isEmpty()) {
    			throw new UnknownAdviceTypeException(advisor.getAdvice());
    		}
    		return interceptors.toArray(new MethodInterceptor[0]);
    	}
    

    這裏可以看到,如果advice是MethodInterceptor的實現類,則直接添加到鏈中;如果不是,則需要通過適配器包裝後再添加。剛好這裏有MethodBeforeAdviceAdapter和AfterReturningAdviceAdapter兩個適配器,對應上文兩個沒有實現MethodInterceptor接口的Advice。最後將interceptors返回。
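    以MethodBeforeAdviceAdapter為例,適配器的實現思路大致如下(簡化示意,並非完整源碼):先判斷Advice類型是否支持,再把它包裝成一個實現了MethodInterceptor的攔截器:

    import java.io.Serializable;
    import org.aopalliance.aop.Advice;
    import org.aopalliance.intercept.MethodInterceptor;
    import org.springframework.aop.Advisor;
    import org.springframework.aop.MethodBeforeAdvice;
    import org.springframework.aop.framework.adapter.AdvisorAdapter;
    import org.springframework.aop.framework.adapter.MethodBeforeAdviceInterceptor;

    // 簡化示意:判斷Advice類型,並把它適配成MethodInterceptor
    class MethodBeforeAdviceAdapter implements AdvisorAdapter, Serializable {

        @Override
        public boolean supportsAdvice(Advice advice) {
            return (advice instanceof MethodBeforeAdvice);
        }

        @Override
        public MethodInterceptor getInterceptor(Advisor advisor) {
            MethodBeforeAdvice advice = (MethodBeforeAdvice) advisor.getAdvice();
            return new MethodBeforeAdviceInterceptor(advice);
        }
    }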

    if (chain.isEmpty()) {
    	Object[] argsToUse = AopProxyUtils.adaptArgumentsIfNecessary(method, args);
    	retVal = AopUtils.invokeJoinpointUsingReflection(target, method, argsToUse);
    }
    else {
    	// We need to create a method invocation...
    	invocation = new ReflectiveMethodInvocation(proxy, target, method, args, targetClass, chain);
    	// Proceed to the joinpoint through the interceptor chain.
    	retVal = invocation.proceed();
    }
    

    返回到invoke方法后,如果執行鏈為空,說明該方法不需要被增強,所以直接反射調用原對象的方法(注意傳入的是TargetSource封裝的被代理對象);反之,則通過ReflectiveMethodInvocation類進行鏈式調用,關鍵方法就是proceed

    	private int currentInterceptorIndex = -1;
    	
    	public Object proceed() throws Throwable {
    		//如果執行鏈中的advice全部執行完,則直接調用joinPoint方法,就是被代理方法
    		if (this.currentInterceptorIndex == this.interceptorsAndDynamicMethodMatchers.size() - 1) {
    			return invokeJoinpoint();
    		}
    
    		Object interceptorOrInterceptionAdvice =
    				this.interceptorsAndDynamicMethodMatchers.get(++this.currentInterceptorIndex);
    		if (interceptorOrInterceptionAdvice instanceof InterceptorAndDynamicMethodMatcher) {
    			InterceptorAndDynamicMethodMatcher dm =
    					(InterceptorAndDynamicMethodMatcher) interceptorOrInterceptionAdvice;
    			Class<?> targetClass = (this.targetClass != null ? this.targetClass : this.method.getDeclaringClass());
    			if (dm.methodMatcher.matches(this.method, targetClass, this.arguments)) {
    				return dm.interceptor.invoke(this);
    			}
    			else {
    				return proceed();
    			}
    		}
    		else {
    			//調用MethodInterceptor中的invoke方法
    			return ((MethodInterceptor) interceptorOrInterceptionAdvice).invoke(this);
    		}
    	}
    

    這個方法的核心就在兩個地方:invokeJoinpoint和interceptorOrInterceptionAdvice.invoke(this)。當執行鏈中的增強全部調用完後,就會通過前者反射調用被代理的方法;否則就依次調用各個Interceptor的invoke方法。下面分別看看每個Interceptor是怎麼實現的。

    • AspectJAroundAdvice
    	public Object invoke(MethodInvocation mi) throws Throwable {
    		if (!(mi instanceof ProxyMethodInvocation)) {
    			throw new IllegalStateException("MethodInvocation is not a Spring ProxyMethodInvocation: " + mi);
    		}
    		ProxyMethodInvocation pmi = (ProxyMethodInvocation) mi;
    		ProceedingJoinPoint pjp = lazyGetProceedingJoinPoint(pmi);
    		JoinPointMatch jpm = getJoinPointMatch(pmi);
    		return invokeAdviceMethod(pjp, jpm, null, null);
    	}
    
    • MethodBeforeAdviceInterceptor -> AspectJMethodBeforeAdvice
    	public Object invoke(MethodInvocation mi) throws Throwable {
    		this.advice.before(mi.getMethod(), mi.getArguments(), mi.getThis());
    		return mi.proceed();
    	}
    
    	public void before(Method method, Object[] args, @Nullable Object target) throws Throwable {
    		invokeAdviceMethod(getJoinPointMatch(), null, null);
    	}
    
    • AspectJAfterAdvice
    	public Object invoke(MethodInvocation mi) throws Throwable {
    		try {
    			return mi.proceed();
    		}
    		finally {
    			invokeAdviceMethod(getJoinPointMatch(), null, null);
    		}
    	}
    
    • AfterReturningAdviceInterceptor -> AspectJAfterReturningAdvice
    	public Object invoke(MethodInvocation mi) throws Throwable {
    		Object retVal = mi.proceed();
    		this.advice.afterReturning(retVal, mi.getMethod(), mi.getArguments(), mi.getThis());
    		return retVal;
    	}
    
    	public void afterReturning(@Nullable Object returnValue, Method method, Object[] args, @Nullable Object target) throws Throwable {
    		if (shouldInvokeOnReturnValueOf(method, returnValue)) {
    			invokeAdviceMethod(getJoinPointMatch(), returnValue, null);
    		}
    	}
    
    • AspectJAfterThrowingAdvice
    	public Object invoke(MethodInvocation mi) throws Throwable {
    		try {
    			return mi.proceed();
    		}
    		catch (Throwable ex) {
    			if (shouldInvokeOnThrowing(ex)) {
    				invokeAdviceMethod(getJoinPointMatch(), null, ex);
    			}
    			throw ex;
    		}
    	}
    

    這裏的調用順序是怎樣的呢?其核心就是通過proceed方法控制流程,每執行完一個Advice就會回到proceed方法中調用下一個Advice。可以思考一下,正是這種遞歸式的proceed調用,才形成了各種通知的嵌套執行順序。

    以上就是AOP的鏈式調用過程,但這只是只有一個切面類的情況。如果有多個@Aspect類,這個調用過程又是怎樣的?其核心思想和「棧」一樣,先進後出、後進先出。
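    用一個假設的例子來感受這種「棧」式調用(類名僅為示意,兩個切面實際應放在各自的源文件中):兩個切面都使用@Around,@Order值小的在鏈的外層:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;
    import org.springframework.core.annotation.Order;
    import org.springframework.stereotype.Component;

    @Aspect
    @Order(1)   // 值小的切面在鏈的外層
    @Component
    class OuterAspect {
        @Around("execution(* com.example.service..*(..))")
        public Object around(ProceedingJoinPoint pjp) throws Throwable {
            System.out.println("Outer 進入");
            Object ret = pjp.proceed();   // 交給內層切面,最後才到目標方法
            System.out.println("Outer 退出");
            return ret;
        }
    }

    @Aspect
    @Order(2)
    @Component
    class InnerAspect {
        @Around("execution(* com.example.service..*(..))")
        public Object around(ProceedingJoinPoint pjp) throws Throwable {
            System.out.println("Inner 進入");
            Object ret = pjp.proceed();   // 到達目標方法
            System.out.println("Inner 退出");
            return ret;
        }
    }

    調用目標方法時的輸出順序是:Outer 進入 → Inner 進入 → 目標方法 → Inner 退出 → Outer 退出,正是「先進後出」的棧式結構。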

    AOP擴展知識

    一、自定義全局攔截器Interceptor

    在上文創建代理對象的時候有這樣一個方法:

    	protected Advisor[] buildAdvisors(@Nullable String beanName, @Nullable Object[] specificInterceptors) {
    		//自定義MethodInterceptor.拿到setInterceptorNames方法注入的Interceptor對象
    		Advisor[] commonInterceptors = resolveInterceptorNames();
    
    		List<Object> allInterceptors = new ArrayList<>();
    		if (specificInterceptors != null) {
    			allInterceptors.addAll(Arrays.asList(specificInterceptors));
    			if (commonInterceptors.length > 0) {
    				if (this.applyCommonInterceptorsFirst) {
    					allInterceptors.addAll(0, Arrays.asList(commonInterceptors));
    				}
    				else {
    					allInterceptors.addAll(Arrays.asList(commonInterceptors));
    				}
    			}
    		}
    
    		Advisor[] advisors = new Advisor[allInterceptors.size()];
    		for (int i = 0; i < allInterceptors.size(); i++) {
    			//對自定義的advice要進行包裝,把advice包裝成advisor對象,切面對象
    			advisors[i] = this.advisorAdapterRegistry.wrap(allInterceptors.get(i));
    		}
    		return advisors;
    	}
    

    這個方法的作用就在於我們可以擴展我們自己的Interceptor,首先通過resolveInterceptorNames方法獲取到通過setInterceptorNames方法設置的Interceptor,然後調用DefaultAdvisorAdapterRegistry.wrap方法將其包裝為DefaultPointcutAdvisor對象並返回:

    	public Advisor wrap(Object adviceObject) throws UnknownAdviceTypeException {
    		if (adviceObject instanceof Advisor) {
    			return (Advisor) adviceObject;
    		}
    		if (!(adviceObject instanceof Advice)) {
    			throw new UnknownAdviceTypeException(adviceObject);
    		}
    		Advice advice = (Advice) adviceObject;
    		if (advice instanceof MethodInterceptor) {
    			return new DefaultPointcutAdvisor(advice);
    		}
    		for (AdvisorAdapter adapter : this.adapters) {
    			if (adapter.supportsAdvice(advice)) {
    				return new DefaultPointcutAdvisor(advice);
    			}
    		}
    		throw new UnknownAdviceTypeException(advice);
    	}
    
    	public DefaultPointcutAdvisor(Advice advice) {
    		this(Pointcut.TRUE, advice);
    	}
    

    需要注意DefaultPointcutAdvisor構造器裏面傳入了一個Pointcut.TRUE,表示這種擴展的Interceptor是全局的攔截器。下面來看看如何使用:

    public class MyMethodInterceptor implements MethodInterceptor {
        @Override
        public Object invoke(MethodInvocation invocation) throws Throwable {
    
            System.out.println("自定義攔截器");
            return invocation.proceed();
        }
    }
    

    首先寫一個類實現MethodInterceptor 接口,在invoke方法中實現我們的攔截邏輯,然後通過下面的方式測試,只要UserService 有AOP攔截就會發現自定義的MyMethodInterceptor也生效了。

        public void customInterceptorTest() {
            AnnotationAwareAspectJAutoProxyCreator bean = applicationContext.getBean(AnnotationAwareAspectJAutoProxyCreator.class);
            bean.setInterceptorNames("myMethodInterceptor");
    
            UserService userService = applicationContext.getBean(UserService.class);
            userService.queryUser("dark");
        }
    

    但是如果換個順序,像下面這樣:

        public void customInterceptorTest() {

            UserService userService = applicationContext.getBean(UserService.class);

            AnnotationAwareAspectJAutoProxyCreator bean = applicationContext.getBean(AnnotationAwareAspectJAutoProxyCreator.class);
            bean.setInterceptorNames("myMethodInterceptor");
    
            userService.queryUser("dark");
        }
    

    這時自定義的全局攔截器就沒有作用了,這是為什麼呢?因為執行getBean的時候,如果有切面匹配,就會通過ProxyFactory去創建代理對象,而Interceptor是存放在這個工廠對象中的,工廠對象又與代理對象一一對應。上面這種寫法在調用getBean時還沒有註冊myMethodInterceptor,代理對象已經創建完畢,自定義攔截器自然不會生效。也就是說,要想自定義攔截器生效,就必須在代理對象生成之前把它註冊進去。

    二、循環依賴三級緩存存在的必要性

    在上一篇文章我分析了Spring是如何通過三級緩存來解決循環依賴的問題的,但你是否考慮過第三級緩存為什麼要存在?我直接將bean存到二級不就行了么,為什麼還要存一個ObjectFactory對象到第三級緩存中?這個在學習了AOP之後就很清楚了,因為我們在@Autowired對象時,想要注入的不一定是Bean本身,而是想要注入一個修改過後的對象,如代理對象。在AbstractAutowireCapableBeanFactory.getEarlyBeanReference方法中循環調用了SmartInstantiationAwareBeanPostProcessor.getEarlyBeanReference方法,AbstractAutoProxyCreator對象就實現了該方法:

    	public Object getEarlyBeanReference(Object bean, String beanName) {
    		Object cacheKey = getCacheKey(bean.getClass(), beanName);
    		if (!this.earlyProxyReferences.contains(cacheKey)) {
    			this.earlyProxyReferences.add(cacheKey);
    		}
    		// 創建代理對象
    		return wrapIfNecessary(bean, beanName, cacheKey);
    	}
    

    因此,當我們想要對循壞依賴的Bean做出修改時,就可以像AOP這樣做。

    三、如何在Bean創建之前提前創建代理對象

    Spring的代理對象基本上都是在Bean實例化完成之後創建的,但在文章開始我就說過,Spring也提供了一個機會在創建Bean對象之前就創建代理對象,在AbstractAutowireCapableBeanFactory.resolveBeforeInstantiation方法中:

    	protected Object resolveBeforeInstantiation(String beanName, RootBeanDefinition mbd) {
    		Object bean = null;
    		if (!Boolean.FALSE.equals(mbd.beforeInstantiationResolved)) {
    			// Make sure bean class is actually resolved at this point.
    			if (!mbd.isSynthetic() && hasInstantiationAwareBeanPostProcessors()) {
    				Class<?> targetType = determineTargetType(beanName, mbd);
    				if (targetType != null) {
    					bean = applyBeanPostProcessorsBeforeInstantiation(targetType, beanName);
    					if (bean != null) {
    						bean = applyBeanPostProcessorsAfterInitialization(bean, beanName);
    					}
    				}
    			}
    			mbd.beforeInstantiationResolved = (bean != null);
    		}
    		return bean;
    	}
    
    	protected Object applyBeanPostProcessorsBeforeInstantiation(Class<?> beanClass, String beanName) {
    		for (BeanPostProcessor bp : getBeanPostProcessors()) {
    			if (bp instanceof InstantiationAwareBeanPostProcessor) {
    				InstantiationAwareBeanPostProcessor ibp = (InstantiationAwareBeanPostProcessor) bp;
    				Object result = ibp.postProcessBeforeInstantiation(beanClass, beanName);
    				if (result != null) {
    					return result;
    				}
    			}
    		}
    		return null;
    	}
    

    主要是InstantiationAwareBeanPostProcessor.postProcessBeforeInstantiation方法中,這裏又會進入到AbstractAutoProxyCreator類中:

    	public Object postProcessBeforeInstantiation(Class<?> beanClass, String beanName) {
    		TargetSource targetSource = getCustomTargetSource(beanClass, beanName);
    		if (targetSource != null) {
    			if (StringUtils.hasLength(beanName)) {
    				this.targetSourcedBeans.add(beanName);
    			}
    			Object[] specificInterceptors = getAdvicesAndAdvisorsForBean(beanClass, beanName, targetSource);
    			Object proxy = createProxy(beanClass, beanName, specificInterceptors, targetSource);
    			this.proxyTypes.put(cacheKey, proxy.getClass());
    			return proxy;
    		}
    
    		return null;
    	}
    
    	protected TargetSource getCustomTargetSource(Class<?> beanClass, String beanName) {
    		// We can't create fancy target sources for directly registered singletons.
    		if (this.customTargetSourceCreators != null &&
    				this.beanFactory != null && this.beanFactory.containsBean(beanName)) {
    			for (TargetSourceCreator tsc : this.customTargetSourceCreators) {
    				TargetSource ts = tsc.getTargetSource(beanClass, beanName);
    				if (ts != null) {
    					return ts;
    				}
    			}
    		}
    
    		// No custom TargetSource found.
    		return null;
    	}
    

    看到這裏大致應該明白了,先是獲取到一個自定義的TargetSource對象,然後創建代理對象,所以我們首先需要自己實現一個TargetSource類,這裏直接繼承一個抽象類,getTarget方法則返回原始對象:

    public class MyTargetSource extends AbstractBeanFactoryBasedTargetSource {
        @Override
        public Object getTarget() throws Exception {
            return getBeanFactory().getBean(getTargetBeanName());
        }
    }
    

    但這還不夠,上面首先判斷了customTargetSourceCreators!=null,而這個屬性是個數組,可以通過下面這個方法設置進來:

    	public void setCustomTargetSourceCreators(TargetSourceCreator... targetSourceCreators) {
    		this.customTargetSourceCreators = targetSourceCreators;
    	}
    

    所以我們還要實現一個TargetSourceCreator類,同樣繼承一個抽象類實現,並只對userServiceImpl對象進行攔截:

    public class MyTargetSourceCreator extends AbstractBeanFactoryBasedTargetSourceCreator {
        @Override
        protected AbstractBeanFactoryBasedTargetSource createBeanFactoryBasedTargetSource(Class<?> beanClass, String beanName) {
    
            if (getBeanFactory() instanceof ConfigurableListableBeanFactory) {
                if(beanName.equalsIgnoreCase("userServiceImpl")) {
                    return new MyTargetSource();
                }
            }
    
            return null;
        }
    }
    

    createBeanFactoryBasedTargetSource方法是在AbstractBeanFactoryBasedTargetSourceCreator.getTargetSource中調用的,而getTargetSource就是在上面getCustomTargetSource中調用的。以上工作做完后,還需要將其設置到AnnotationAwareAspectJAutoProxyCreator對象中,因此需要我們注入這個對象:

    @Configuration
    public class TargetSourceCreatorBean {
    
        @Autowired
        private BeanFactory beanFactory;
    
       @Bean
        public AnnotationAwareAspectJAutoProxyCreator annotationAwareAspectJAutoProxyCreator() {
            AnnotationAwareAspectJAutoProxyCreator creator = new AnnotationAwareAspectJAutoProxyCreator();
            MyTargetSourceCreator myTargetSourceCreator = new MyTargetSourceCreator();
            myTargetSourceCreator.setBeanFactory(beanFactory);
            creator.setCustomTargetSourceCreators(myTargetSourceCreator);
            return creator;
        }
    }
    

    這樣,當我們通過getBean獲取userServiceImpl的對象時,就會優先生成代理對象,然後在調用執行鏈的過程中再通過TargetSource.getTarget獲取到被代理對象。但是,為什麼我們在getTarget方法中調用getBean就能拿到被代理對象呢?
    繼續探究,通過斷點我發現從getTarget進入時,在resolveBeforeInstantiation方法中返回的bean就是null了,而getBeanPostProcessors方法返回的Processors中也沒有了AnnotationAwareAspectJAutoProxyCreator對象,也就是沒有進入到AbstractAutoProxyCreator.postProcessBeforeInstantiation方法中,所以不會再次獲取到代理對象,那AnnotationAwareAspectJAutoProxyCreator對象是在什麼時候移除的呢?
    帶着問題,我開始反推,發現在AbstractBeanFactoryBasedTargetSourceCreator類中有這樣一個方法buildInternalBeanFactory

    	protected DefaultListableBeanFactory buildInternalBeanFactory(ConfigurableBeanFactory containingFactory) {
    		DefaultListableBeanFactory internalBeanFactory = new DefaultListableBeanFactory(containingFactory);
    
    		// Required so that all BeanPostProcessors, Scopes, etc become available.
    		internalBeanFactory.copyConfigurationFrom(containingFactory);
    
    		// Filter out BeanPostProcessors that are part of the AOP infrastructure,
    		// since those are only meant to apply to beans defined in the original factory.
    		internalBeanFactory.getBeanPostProcessors().removeIf(beanPostProcessor ->
    				beanPostProcessor instanceof AopInfrastructureBean);
    
    		return internalBeanFactory;
    	}
    

    在這裏移除掉了所有AopInfrastructureBean類型的BeanPostProcessor,而AnnotationAwareAspectJAutoProxyCreator正是其實現類,那這個方法是在哪裡調用的呢?繼續反推:

    	protected DefaultListableBeanFactory getInternalBeanFactoryForBean(String beanName) {
    		synchronized (this.internalBeanFactories) {
    			DefaultListableBeanFactory internalBeanFactory = this.internalBeanFactories.get(beanName);
    			if (internalBeanFactory == null) {
    				internalBeanFactory = buildInternalBeanFactory(this.beanFactory);
    				this.internalBeanFactories.put(beanName, internalBeanFactory);
    			}
    			return internalBeanFactory;
    		}
    	}
    
    	public final TargetSource getTargetSource(Class<?> beanClass, String beanName) {
    		AbstractBeanFactoryBasedTargetSource targetSource =
    				createBeanFactoryBasedTargetSource(beanClass, beanName);
    		
    		// 創建完targetSource后就移除掉AopInfrastructureBean類型的BeanPostProcessor對象,如AnnotationAwareAspectJAutoProxyCreator
    		DefaultListableBeanFactory internalBeanFactory = getInternalBeanFactoryForBean(beanName);
    
    		......
    		return targetSource;
    	}
    

    至此,關於TargetSource接口擴展的原理就搞明白了。

    總結

    本篇篇幅比較長,主要搞明白Spring代理對象是如何創建的以及AOP鏈式調用過程,而後面的擴展則是對AOP以及Bean創建過程中一些疑惑的補充,可根據實際情況學習掌握。

    本站聲明:網站內容來源於博客園,如有侵權,請聯繫我們,我們將及時處理


  • Java筆試面試總結—try、catch、finally語句中有return 的各類情況

    Java筆試面試總結—try、catch、finally語句中有return 的各類情況

    前言

    之前在刷筆試題和面試的時候經常會遇到或者被問到 try-catch-finally 語法塊的執行順序等問題,今天就抽空整理了一下這個知識點,然後記錄下來。

    正文

    本篇文章主要是通過舉例的方式來闡述各種情況,我這裏根據 try-catch-finally 語法塊分為兩種大情況討論:try-catch 語法塊和 try-catch-finally 語句塊,然後再在每種情況里再去具體討論。

    一、try-catch 語句塊

    我們可以看看下面程序:

    public static void main(String[] args) {
    
        System.out.println(handleException0());
      }
    
      /**
       * try,catch都有return
       * @return
       */
      private static String handleException0() {
        try{
          System.out.println("try開始");
          String s = null;
          int length = s.charAt(0);
          System.out.println("try結束");
          return "try塊的返回值";
        }catch (Exception e){
          System.out.println("捕獲到了異常");
          return "catch的返回值";
        }
      }
    

    執行結果

    try開始
    捕獲到了異常
    catch的返回值

    分析:程序首先執行 try 塊裏面的代碼,try 塊裏面發現有異常,try 塊後面的代碼不會執行(自然也不會return),然後進入匹配異常的那個 catch 塊,然後進入 catch 塊裏面將代碼執行完畢,當執行到 catch 裏面的return 語句的時候,程序中止,然後將此 return 的最終結果返回回去。

    二、try-catch-finally 語句塊

    這種語法塊我分為了 4 種情況討論,下面進行一一列舉。

    1、第一種情況,try 塊裏面有 return 的情況,並且捕獲到異常

    例1:

    public static void main(String[] args) {
      String result = handleException1();
      System.out.println(result);
    }
    private static String handleException1() {
      try{
        System.out.println("try開始");
        String str = null;
        int length = str.length();
        System.out.println("try結束");
      }catch (Exception e){
        System.out.println("捕獲到了異常");
      }finally {
        System.out.println("finally塊執行完畢了");
      }
      return "最終的結果";
    }
    

    例1執行的結果如下

    try開始
    捕獲到了異常
    finally塊執行完畢了
    最終的結果

    例2:

    public static void main(String[] args) {
      String result = handleException2();
      System.out.println(result);
    }
    private static String handleException2() {
      try{
        System.out.println("try開始");
        String str = null;
        int length = str.length();
        System.out.println("try結束");
        return "try塊的返回值";
      }catch (Exception e){
        System.out.println("捕獲到了異常");
      }finally {
        System.out.println("finally塊執行完畢了");
      }
      return "最終的結果";
    }
    

    例2的執行結果如下

    try開始
    捕獲到了異常
    finally塊執行完畢了
    最終的結果

    分析:例1 和 例2 的結果是很顯然的:當遇到異常的時候,直接進入相匹配的 catch 塊,接着執行 finally 語句塊,最後將 return 結果返回回去。

    2、第二種情況:try 塊裏面有 return 的情況,但是不會捕獲到異常

    例3:

    思考:下面代碼try語句塊中有return語句,那麼是否執行完try語句塊就直接return退出方法了呢?

    public static void main(String[] args) {
      String result = handleException3();
      System.out.println(result);
    }
    private static String handleException3() {
      try{
      	System.out.println("");
        return "try塊的返回值";
      }catch (Exception e){
        System.out.println("捕獲到了異常");
      }finally {
        System.out.println("finally塊執行完畢了");
      }
      return "最終的結果";
    }
    

    例3的執行結果如下

    finally塊執行完畢了
    try塊的返回值

    分析:例3的結果可以通過打斷點的方式觀察程序的具體執行流程。通過斷點可以發現:代碼先執行 try 塊里的代碼,當執行到 return 語句的時候,handleException3 方法並沒有立刻結束,而是繼續執行 finally 塊里的代碼;finally 塊里的代碼執行完後,緊接着回到 try 塊的 return 語句,再把最終結果返回回去,handleException3 方法執行完畢。
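    補充一個簡單的例子(非原文示例)驗證這一點:try 中的返回值在執行 finally 之前就已經確定,finally 里再修改局部變量也不會影響最終返回值:

    public static void main(String[] args) {
      System.out.println(returnValueTest());
    }

    private static int returnValueTest() {
      int result = 1;
      try {
        return result;        // 返回值在這裏已經被確定為1
      } finally {
        result = 2;           // 只是修改局部變量,不會改變已確定的返回值
        System.out.println("finally塊執行完畢了");
      }
    }

    執行結果是先輸出「finally塊執行完畢了」,然後輸出 1。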

    3、第三種情況:try 塊和 finally 裏面都有 return 的情況

    例4:

    public static void main(String[] args) {
        System.out.println(handleException4());
      }
    
      /**
       * 情況3:try和finally中均有return
       * @return
       */
      private static String handleException4() {
        try{
          System.out.println("");
          return "try塊的返回值";
        }catch (Exception e){
          System.out.println("捕獲到了異常");
        }finally {
          System.out.println("finally塊執行完畢了");
          return "finally的返回值";
        }
      //  return "最終的結果";//不能再有返回值
      }
    

    例4的執行結果

    finally塊執行完畢了
    finally的返回值

    分析:需要注意的是,當 try 塊和 finally 裏面都有 return 的時候,在 try/catch/finally 語法塊之外不允許再有return 關鍵字。我們還是通過在程序中打斷點的方式來看看代碼的具體執行流程。代碼首先執行 try 塊 里的代碼,當執行到 return 語句的時候,handleException4 方法並沒有立刻結束,而是繼續執行 finally 塊里的代碼,當發現 finally 塊里有 return 的時候,直接將 finally 里的返回值(也就是最終結果)返回回去, handleException4 方法執行完畢。

    4、第四種情況:try 塊、catch 塊、finally 塊都有 return

    例5:

    public static void main(String[] args) {
        System.out.println(handleException5());
      }
    
      /**
       * 情況4:try,catch,finally都有return
       * @return
       */
      private static String handleException5() {
        try{
          System.out.println("try開始");
          int[] array = {1, 2, 3};
          int i = array[10];
          System.out.println("try結束");
          return "try塊的返回值";
        }catch (Exception e){
          e.printStackTrace();//這行代碼其實就是打印輸出異常的具體信息
          System.out.println("捕獲到了異常");
          return "catch的返回值";
        }finally {
          System.out.println("finally塊執行完畢了");
          return "finally的返回值";
        }
    //    return "最終的結果";
      }
    

    例5的執行結果

    try開始
    捕獲到了異常
    finally塊執行完畢了
    finally的返回值
    java.lang.ArrayIndexOutOfBoundsException: 10
    at com.example.javabasic.javabasic.ExceptionAndError.TryCatchFinally.handleException5(TryCatchFinally.java:25)
    at com.example.javabasic.javabasic.ExceptionAndError.TryCatchFinally.main(TryCatchFinally.java:14)

    分析:程序首先執行try塊裏面的代碼,try塊裏面發現有異常,try塊後面的代碼不會執行(自然也不會return),然後進入匹配異常的那個catch塊,然後進入catch塊裏面將代碼執行完畢,當執行到catch裏面的return語句的時候,程序不會馬上終止,而是繼續執行finally塊的代碼,最後執行finally裏面的return,然後將此return的最終結果返回回去。

    總結

    其實,通過以上例子可以發現,不管 return 關鍵字在哪,finally 一定會執行完畢。理論上來說 try、catch、finally 塊中都允許書寫 return 關鍵字,但後執行的塊中 return 定義的返回值,會覆蓋先執行的塊中 return 定義的返回值。也就是說,finally 塊中定義的返回值會覆蓋 catch 塊、try 塊中定義的返回值;catch 塊中定義的返回值會覆蓋 try 塊中定義的返回值。
    再換句話說如果在finally塊中通過return關鍵字定義了返回值,那麼之前所有通過return關鍵字定義的返回值都將失效——因為finally塊中的代碼一定是會執行的。

    公眾號:良許Linux


    本站聲明:網站內容來源於博客園,如有侵權,請聯繫我們,我們將及時處理


  • 神秘疾病侵襲 加勒比海珊瑚礁群拉警報

    摘錄自2019年11月12日中央通訊社綜合報導

    短短一年多來,墨西哥加勒比海地區的珊瑚因為遭到一種罕為人知「石珊瑚組織損失症」(SCTLD)侵襲,已經損失30%。這種疾病會造成珊瑚鈣化和死亡。

    專家警告,這種疾病可能造成大部分中美洲珊瑚礁(Mesoamerican Reef)死亡。這處龐大的弧狀珊瑚礁群範圍廣達超過1000公里,為墨西哥、貝里斯、瓜地馬拉和宏都拉斯等國家共有。

    SCTLD已使加勒比海地區陷入困境,這種疾病可能摧毀環礁地區民眾賴以為生的觀光產業。中美洲珊瑚礁是僅次於澳洲大堡礁的世界第2大珊瑚礁。科學家表示,旅遊業太發達非常有可能讓問題火上加油。

    「健康珊瑚礁、健康人民」在墨西哥的協調人員蘇鐸(Melina Soto)說,SCTLD只需幾週時間,就可以殺死需要花費數十年生長起來的珊瑚組織。蘇鐸表示:「如果以這種速度繼續下去,這個生態系統將會在未來的5到10年內崩潰。」

    本站聲明:網站內容來源環境資訊中心https://e-info.org.tw/,如有侵權,請聯繫我們,我們將及時處理
