Configuring load balancing in Kong

Load balancing

A route forwards requests to the corresponding service based on its paths; the service forwards them to an upstream based on its host (the upstream name); the upstream then load-balances across its targets. This is Kong's load-balancing flow. Below we configure the upstream, the service, and the route through the REST Admin API.

upstreams

Create an upstream named upstream.api

curl -X POST localhost:8001/upstreams \
-d "name=upstream.api"
// response
{
    "created_at": 1553661443,
    "hash_on": "none",
    "id": "04c9c36c-eea8-4d58-8668-3bfa117c34fd",
    "name": "upstream.api",
    ...
}
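
To confirm the upstream exists, you can read it back from the Admin API (the same localhost:8001 Admin port used above); this is just a sanity check, not a required step:

curl localhost:8001/upstreams/upstream.api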

Add backend servers (targets) to upstream.api

curl -X POST localhost:8001/upstreams/upstream.api/targets \
-d "target=192.168.20.6:8888" \
-d "weight=100"
// response
{
    "created_at": 1553663185.86,
    "upstream": {
        "id": "04c9c36c-eea8-4d58-8668-3bfa117c34fd"
    },
    "id": "3386af25-8643-4c9c-aff5-bd30451ae24b",
    "target": "192.168.20.6:8888",
    "weight": 100
}

curl -X POST localhost:8001/upstreams/upstream.api/targets \
-d "target=192.168.20.6:9999" \
-d "weight=100"
// response
{
    "created_at": 1553663185.86,
    "upstream": {
        "id": "04c9c36c-eea8-4d58-8668-3bfa117c34fd"
    },
    "id": "3386af25-8643-4c9c-aff5-bd30451ae24b",
    "target": "192.168.20.6:9999",
    "weight": 100
}

This is equivalent to creating the following nginx configuration:

upstream upstream.api {
    server 192.168.20.6:8888 weight=100;
    server 192.168.20.6:9999 weight=100;
}
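
As a quick check that both targets were registered, the Admin API can list them, and Kong also exposes a per-upstream health endpoint (if no health checks are configured, the targets are simply reported as HEALTHCHECKS_OFF):

# list the targets of upstream.api
curl localhost:8001/upstreams/upstream.api/targets

# show the health status of each target
curl localhost:8001/upstreams/upstream.api/health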

services

Create a service named service.api and bind it to the backend upstream upstream.api via host.

curl -X POST localhost:8001/services \
-d "name=service.api" \
-d "host=upstream.api"
// response
{
    "host": "upstream.api",//绑定的upstream
    "created_at": 1553663485,
    "connect_timeout": 60000,
    "id": "5b93eda7-7ba5-4acc-a536-cf12f58a1144",//service.id
    "protocol": "http",
    "name": "service.api",
    "read_timeout": 60000,
    "port": 80,
    "path": "/api/v1",
    "updated_at": 1553663485,
    "retries": 5,
    "write_timeout": 60000
}

Equivalent to:

http {
    server {
        listen 8000;
        location waiting-for-define {
            proxy_pass http://upstream.api;
        }
    }
}
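
Before adding routes, it does no harm to verify the service and its host binding with a GET against the Admin API:

curl localhost:8001/services/service.api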

routes

Bind routes to the service service.api. It is important to understand that a route is not a single URL: it is Kong's routing facility, and it can manage a whole set of routing rules for a given Kong service. A route roughly corresponds to the set of location rules inside an http > server block.

# Add a set of routes for service.api
curl -X POST localhost:8001/routes \
-d "name=route.api" \
-d "paths[]=/api/v1" \
-d "paths[]=/api/v2" \
-d "paths[]=/api/v3" \
-d "hosts[]=api.service.com" \
-d "hosts[]=service.com" \
-d "service.id=5b93eda7-7ba5-4acc-a536-cf12f58a1144"

# Or via the services endpoint

curl -X POST localhost:8001/services/service.api/routes \
-d "name=route.api" \
-d "paths[]=/api/v1" \
-d "paths[]=/api/v2" \
-d "paths[]=/api/v3" \
-d "hosts[]=localhost" \
-d "hosts[]=api.service.com" \
-d "hosts[]=service.com" \

We also specified hosts, which corresponds to server_name, so the virtual hosts are configured at the same time.
This is roughly equivalent to the following configuration:

http {
    server {
        listen 8000;
        server_name localhost api.service.com service.com;
        location /api/v1 {
            proxy_pass http://upstream.api;
        }
        location /api/v2 {
            proxy_pass http://upstream.api;
        }
        location /api/v3 {
            proxy_pass http://upstream.api;
        }
    }
}
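
The routes attached to service.api can be listed through the same services endpoint, which is a convenient way to check that the paths and hosts above were registered:

curl localhost:8001/services/service.api/routes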

Now we can use

localhost:8000/api/v1
api.service.com:8000/api/v2
service.com:8000/api/v3

to access the service.api service, which is load-balanced across the two backend targets.
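
To exercise the whole chain end to end, send a request to Kong's proxy port (8000 here) with one of the configured hosts in the Host header; what comes back naturally depends on what is running on the two targets:

curl -i http://localhost:8000/api/v1 -H "Host: api.service.com"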

    Original author: big_cat
    Original article: https://segmentfault.com/a/1190000018675599