Elasticsearch won't allocate unassigned shards

I have a 2-node ES cluster. When I restarted the nodes, the cluster status went yellow because some shards were unassigned. I googled around, and the common solution is to reroute the unassigned shards. Unfortunately, that didn't work for me.

curl localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "infra",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 34,
  "active_shards" : 68,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 31,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 68.68686868686868
}

curl localhost:9200/_cluster/settings?pretty
{
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  }
}

curl localhost:9200/_cat/indices?v

health status index                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-log-2016.05.13   5   2      88314            0    300.5mb        150.2mb
yellow open   logstash-log-2016.05.12   5   2     254450            0    833.9mb        416.9mb
yellow open   .kibana                   1   2          3            0     47.8kb         25.2kb
green  open   .marvel-es-data-1         1   1          3            0      8.7kb          4.3kb
yellow open   logstash-log-2016.05.11   5   2     313095            0    709.1mb        354.6mb
yellow open   logstash-log-2016.05.10   5   2     613744            0        1gb        520.2mb
green  open   .marvel-es-1-2016.05.18   1   1      88720          495     89.9mb           45mb
green  open   .marvel-es-1-2016.05.17   1   1      69430          492     59.4mb         29.7mb
yellow open   logstash-log-2016.05.17   5   2     188924            0    518.2mb          259mb
yellow open   logstash-log-2016.05.18   5   2     226775            0    683.7mb        366.1mb

Rerouting

curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
     "commands": [
        {
            "allocate": {
                "index": "logstash-log-2016.05.13",
                "shard": 3,
                "node": "elasticsearch-mon-1",
                "allow_primary": true
          }
        }
    ]
  }'
{
  "error" : {
    "root_cause" : [ {
      "type" : "illegal_argument_exception",
      "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-1}{K-J8WKyZRB6bE4031kHkKA}{172.45.0.56}{172.45.0.56:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [K-J8WKyZRB6bE4031kHkKA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-1}{K-J8WKyZRB6bE4031kHkKA}{172.45.0.56}{172.45.0.56:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [K-J8WKyZRB6bE4031kHkKA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
  },
  "status" : 400
}

curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
     "commands": [
        {
            "allocate": {
                "index": "logstash-log-2016.05.13",
                "shard": 3,
                "node": "elasticsearch-mon-2",
                "allow_primary": true
          }
        }
    ]
  }'
{
  "error" : {
    "root_cause" : [ {
      "type" : "illegal_argument_exception",
      "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-2}{Rxgq2aWPSVC0pvUW2vBgHA}{172.45.0.166}{172.45.0.166:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [Rxgq2aWPSVC0pvUW2vBgHA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "[allocate] allocation of [logstash-log-2016.05.13][3] on node {elasticsearch-mon-2}{Rxgq2aWPSVC0pvUW2vBgHA}{172.45.0.166}{172.45.0.166:9300} is not allowed, reason: [YES(allocation disabling is ignored)][NO(shard cannot be allocated on same node [Rxgq2aWPSVC0pvUW2vBgHA] it already exists on)][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)][YES(target node version [2.3.2] is same or newer than source node version [2.3.2])][YES(primary is already active)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(shard not primary or relocation disabled)][YES(node passes include/exclude/require filters)][YES(enough disk for shard on node, free: [25.4gb])][YES(below shard recovery limit of [2])]"
  },
  "status" : 400
}

So it fails without making any changes, and the shards remain unassigned.

Thanks.

Update

curl localhost:9200/_cat/shards

logstash-log-2016.05.13 2 p STARTED     17706  31.6mb 172.45.0.166 elasticsearch-mon-2
logstash-log-2016.05.13 2 r STARTED     17706  31.5mb 172.45.0.56  elasticsearch-mon-1
logstash-log-2016.05.13 2 r UNASSIGNED
logstash-log-2016.05.13 4 p STARTED     17698  31.6mb 172.45.0.166 elasticsearch-mon-2
logstash-log-2016.05.13 4 r STARTED     17698  31.4mb 172.45.0.56  elasticsearch-mon-1
logstash-log-2016.05.13 4 r UNASSIGNED
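
As a diagnostic aside, the same _cat/shards API can also print why each copy is unassigned. A hedged sketch, assuming the optional unassigned.reason column is available in this 2.3.x release:

curl 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED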

Best answer: All the indices that are yellow have been configured with 2 replicas:

health status index                   pri rep
yellow open   logstash-log-2016.05.13   5   2
yellow open   logstash-log-2016.05.12   5   2
yellow open   .kibana                   1   2
yellow open   logstash-log-2016.05.11   5   2
yellow open   logstash-log-2016.05.10   5   2
yellow open   logstash-log-2016.05.17   5   2
yellow open   logstash-log-2016.05.18   5   2

Two replicas on a two-node cluster is impossible: Elasticsearch never places more than one copy of a given shard on the same node, which is exactly the NO decision in the reroute errors above ("shard cannot be allocated on same node ... it already exists on"). You need a third node for all the replicas to be allocated.
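
The numbers line up, too: with two nodes, each shard can hold at most a primary plus one replica, so the second replica of every shard stays unassigned. That is 6 logstash indices × 5 shards = 30 copies, plus 1 for .kibana, i.e. the 31 unassigned_shards reported by the cluster health call.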

Alternatively, reduce the number of replicas:

PUT /logstash-log-*,.kibana/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
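
That snippet is console/Sense syntax; an equivalent curl sketch, assuming the same index pattern, would be:

curl -XPUT 'localhost:9200/logstash-log-*,.kibana/_settings' -d '{
  "index": {
    "number_of_replicas": 1
  }
}'

Once the setting is applied, the surplus replica copies should be dropped and curl localhost:9200/_cluster/health?pretty=true should report status green.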