DNS in Kubernetes is not working

I followed the example at
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns

but I cannot reproduce the nslookup output shown there.

When I run

kubectl exec busybox -- nslookup kubernetes

it is supposed to return

Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes
Address 1: 10.0.0.1

but instead I only get

nslookup: can't resolve 'kubernetes'
Server:    10.0.2.3
Address 1: 10.0.2.3

error: Error executing remote command: Error executing command in container: Error executing in Docker Container: 1
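That the pod's resolver is 10.0.2.3 (the VM's own DNS) rather than the cluster DNS service is the key symptom. One quick way to confirm which nameserver the pod was actually given is to inspect its resolv.conf (this requires the busybox pod from below to be running):

```shell
# With cluster DNS wired up, the nameserver should be the DNS service IP
# (10.0.0.10 in the addon example), not the VM's resolver (10.0.2.3 here).
kubectl exec busybox -- cat /etc/resolv.conf
```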

My Kubernetes cluster is running inside a VM, whose ifconfig output is as follows:

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:50 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2899 (2.8 KB)  TX bytes:2343 (2.3 KB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:ed:09:81  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feed:981/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4735 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:367445 (367.4 KB)  TX bytes:280749 (280.7 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:1f:0d:84  
          inet addr:192.168.144.17  Bcast:192.168.144.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe1f:d84/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:330 (330.0 B)  TX bytes:1746 (1.7 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:127976 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127976 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:13742978 (13.7 MB)  TX bytes:13742978 (13.7 MB)

veth142cdac Link encap:Ethernet  HWaddr e2:b6:29:d1:f5:dc  
          inet6 addr: fe80::e0b6:29ff:fed1:f5dc/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1336 (1.3 KB)  TX bytes:1336 (1.3 KB)

Here are the steps I used to start Kubernetes:

vagrant@kubernetes:~/kubernetes$ hack/local-up-cluster.sh
+++ [0623 11:18:47] Building go targets for linux/amd64:
    cmd/kube-proxy
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/kubelet
    cmd/hyperkube
    cmd/kubernetes
    plugin/cmd/kube-scheduler
    cmd/kubectl
    cmd/integration
    cmd/gendocs
    cmd/genman
    cmd/genbashcomp
    cmd/genconversion
    cmd/gendeepcopy
    examples/k8petstore/web-server
    github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
+++ [0623 11:18:52] Placing binaries
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
API SERVER port is free, proceeding...
Starting etcd

etcd -data-dir /tmp/test-etcd.FcQ75s --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null

Waiting for etcd to come up.
+++ [0623 11:18:53] etcd:
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
Waiting for apiserver to come up
+++ [0623 11:18:55] apiserver:
    {
      "kind": "PodList",
      "apiVersion": "v1beta3",
      "metadata": {
        "selfLink": "/api/v1beta3/pods",
        "resourceVersion": "11"
      },
      "items": []
    }
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
  /tmp/kube-apiserver.log
  /tmp/kube-controller-manager.log
  /tmp/kube-proxy.log
  /tmp/kube-scheduler.log
  /tmp/kubelet.log

To start using your cluster, open up another terminal/tab and run:

  cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
  cluster/kubectl.sh config set-context local --cluster=local
  cluster/kubectl.sh config use-context local
  cluster/kubectl.sh

Then, in a new terminal window, I ran:

cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local

After that, I created the busybox pod with

kubectl create -f busybox.yaml

The contents of busybox.yaml are taken from https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md
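For reference, that busybox.yaml is roughly the following (a sketch from memory of the addon README; check the linked file for the exact field names, as the API was still changing around v1beta3):

```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    # Keep the container alive so we can exec nslookup inside it
    command:
      - sleep
      - "3600"
  restartPolicy: Always
```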

Best answer: It looks like local-up-cluster.sh does not support DNS out of the box. For DNS to work, the kubelet needs to be started with the flags --cluster_dns=<ip-of-dns-service> and --cluster_domain=cluster.local. These flags are not included in the set of flags passed to the kubelet, so the kubelet will not try to contact the DNS pod that you created for name resolution.

To fix this, you can modify the script to add those two flags to the kubelet, and then when you create the DNS service, you need to make sure that the same IP address you pass to the --cluster_dns flag is also set as the portalIP field of the service spec (see the example here).
