I upgraded my personal Kubernetes cluster to 1.9.3, so here is a log of the process.
% kops version
Version 1.9.0
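kops proposes its own matching Kubernetes minor release (1.9.3 here) as the upgrade target, so the kops binary itself has to be on 1.9.x before anything else. A minimal sketch of that step, assuming a Homebrew install (adjust for however kops was actually installed; this is not part of the original log):

% brew update && brew upgrade kops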
Update the cluster configuration stored in S3 with the kops upgrade cluster command.
% kops upgrade cluster
Using cluster from kubectl context: hello.k8s.local

ITEM     PROPERTY           OLD    NEW
Cluster  KubernetesVersion  1.8.7  1.9.3

Must specify --yes to perform upgrade

% kops upgrade cluster --yes
Using cluster from kubectl context: hello.k8s.local

ITEM     PROPERTY           OLD    NEW
Cluster  KubernetesVersion  1.8.7  1.9.3

Updates applied to configuration.
You can now apply these changes, using `kops update cluster hello.k8s.local`
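At this point only the cluster spec in the S3 state store has been rewritten; nothing in AWS has changed yet. To double-check the kubernetesVersion that was written, or to pin a different version by hand, the spec can be inspected or edited with the standard kops subcommands (a quick sanity check, not part of the original log):

% kops get cluster hello.k8s.local -o yaml | grep kubernetesVersion
% kops edit cluster hello.k8s.local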
Update the AWS resource configuration (Launch Configurations and so on) with the kops update cluster command.
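Run without --yes, the command below is a dry run that only prints the planned AWS changes. If you would rather review those changes as code than as a console diff, the same command can render them to Terraform instead (a side note using standard kops flags, not something done in this log):

% kops update cluster --target=terraform --out=./terraform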
% kops update cluster
Using cluster from kubectl context: hello.k8s.local
I0420 20:07:38.893483 47153 executor.go:91] Tasks: 0 done / 77 total; 31 can run
I0420 20:07:42.825316 47153 executor.go:91] Tasks: 31 done / 77 total; 26 can run
I0420 20:07:45.897275 47153 executor.go:91] Tasks: 57 done / 77 total; 18 can run
I0420 20:07:48.800846 47153 executor.go:91] Tasks: 75 done / 77 total; 2 can run
I0420 20:07:49.113445 47153 executor.go:91] Tasks: 77 done / 77 total; 0 can run
Will modify resources:
DHCPOptions/hello.k8s.local
Tags {KubernetesCluster: hello.k8s.local, Name: hello.k8s.local} -> {Name: hello.k8s.local, KubernetesCluster: hello.k8s.local, kubernetes.io/cluster/hello.k8s.local: owned}
EBSVolume/a.etcd-events.hello.k8s.local
VolumeType standard -> gp2
Tags {k8s.io/etcd/events: a/a, KubernetesCluster: hello.k8s.local, Name: a.etcd-events.hello.k8s.local, k8s.io/role/master: 1} -> {k8s.io/etcd/events: a/a, k8s.io/role/master: 1, kubernetes.io/cluster/hello.k8s.local: owned, Name: a.etcd-events.hello.k8s.local, KubernetesCluster: hello.k8s.local}
EBSVolume/a.etcd-main.hello.k8s.local
VolumeType standard -> gp2
Tags {k8s.io/role/master: 1, k8s.io/etcd/main: a/a, Name: a.etcd-main.hello.k8s.local, KubernetesCluster: hello.k8s.local} -> {k8s.io/etcd/main: a/a, k8s.io/role/master: 1, kubernetes.io/cluster/hello.k8s.local: owned, Name: a.etcd-main.hello.k8s.local, KubernetesCluster: hello.k8s.local}
InternetGateway/hello.k8s.local
Tags {KubernetesCluster: hello.k8s.local, Name: hello.k8s.local} -> {Name: hello.k8s.local, KubernetesCluster: hello.k8s.local, kubernetes.io/cluster/hello.k8s.local: owned}
LaunchConfiguration/master-us-west-2a.masters.hello.k8s.local
UserData
...
set -o pipefail
+ NODEUP_URL=https://kubeupv2.s3.amazonaws.com/kops/1.9.0/linux/amd64/nodeup
- NODEUP_URL=https://kubeupv2.s3.amazonaws.com/kops/1.8.1/linux/amd64/nodeup
+ NODEUP_HASH=54ecae66a2b4e1409b36fc00b550f2501afedbfc
+
- NODEUP_HASH=bb41724c37d15ab7e039e06230e742b9b38d0808
...
- max-file=5
storage: overlay,aufs
+ version: 17.03.2
- version: 1.13.1
encryptionConfig: null
etcdClusters:
events:
- version: 3.2.14
+ image: gcr.io/google_containers/etcd:3.2.14
+ version: 3.2.14
+ main:
+ image: gcr.io/google_containers/etcd:3.2.14
- main:
version: 3.2.14
kubeAPIServer:
...
- DefaultStorageClass
- DefaultTolerationSeconds
+ - MutatingAdmissionWebhook
+ - ValidatingAdmissionWebhook
- NodeRestriction
- ResourceQuota
...
etcdServersOverrides:
- /events#http://127.0.0.1:4002
+ image: gcr.io/google_containers/kube-apiserver:v1.9.3
- image: k8s.gcr.io/kube-apiserver:v1.8.7
insecurePort: 8080
kubeletPreferredAddressTypes:
...
clusterName: hello.k8s.local
configureCloudRoutes: true
+ image: gcr.io/google_containers/kube-controller-manager:v1.9.3
- image: k8s.gcr.io/kube-controller-manager:v1.8.7
leaderElection:
leaderElect: true
...
clusterCIDR: 100.96.0.0/11
cpuRequest: 100m
- featureGates: null
hostnameOverride: '@aws'
+ image: gcr.io/google_containers/kube-proxy:v1.9.3
- image: k8s.gcr.io/kube-proxy:v1.8.7
logLevel: 2
kubeScheduler:
+ image: gcr.io/google_containers/kube-scheduler:v1.9.3
- image: k8s.gcr.io/kube-scheduler:v1.8.7
leaderElection:
leaderElect: true
...
networkPluginName: kubenet
nonMasqueradeCIDR: 100.64.0.0/10
+ podInfraContainerImage: gcr.io/google_containers/pause-amd64:3.0
- podInfraContainerImage: k8s.gcr.io/pause-amd64:3.0
podManifestPath: /etc/kubernetes/manifests
- requireKubeconfig: true
masterKubelet:
allowPrivileged: true
...
networkPluginName: kubenet
nonMasqueradeCIDR: 100.64.0.0/10
+ podInfraContainerImage: gcr.io/google_containers/pause-amd64:3.0
- podInfraContainerImage: k8s.gcr.io/pause-amd64:3.0
podManifestPath: /etc/kubernetes/manifests
registerSchedulable: false
- requireKubeconfig: true
__EOF_CLUSTER_SPEC
...
nodeLabels:
kops.k8s.io/instancegroup: master-us-west-2a
+ suspendProcesses: null
taints: null
...
cat > kube_env.yaml << '__EOF_KUBE_ENV'
Assets:
+ - ef979a00ba2f7bf4ee5023e82f94ced2d94c1726@https://storage.googleapis.com/kubernetes-release/release/v1.9.3/bin/linux/amd64/kubelet
- - 0f3a59e4c0aae8c2b2a0924d8ace010ebf39f48e@https://storage.googleapis.com/kubernetes-release/release/v1.8.7/bin/linux/amd64/kubelet
+ - a27d808eb011dbeea876fe5326349ed167a7ed28@https://storage.googleapis.com/kubernetes-release/release/v1.9.3/bin/linux/amd64/kubectl
- - 36340bb4bb158357fe36ffd545d8295774f55ed9@https://storage.googleapis.com/kubernetes-release/release/v1.8.7/bin/linux/amd64/kubectl
- - 1d9788b0f5420e1a219aad2cb8681823fc515e7c@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
+ - d595d3ded6499a64e8dac02466e2f5f2ce257c9f@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.6.0.tgz
+ - c6f310214f687b6c2f32e81c2a49235182950be3@https://kubeupv2.s3.amazonaws.com/kops/1.9.0/linux/amd64/utils.tar.gz
- - 42b15a0a0a56531750bde3c7b08d0cf27c170c48@https://kubeupv2.s3.amazonaws.com/kops/1.8.1/linux/amd64/utils.tar.gz
ClusterName: hello.k8s.local
ConfigBase: s3://state.hello.k8s.local/hello.k8s.local
...
- s3://state.hello.k8s.local/hello.k8s.local/addons/bootstrap-channel.yaml
protokubeImage:
+ hash: 4bbfcc6df1c1c0953bd0532113a74b7ae21e0ded
- hash: 0b1f26208f8f6cc02468368706d0236670fec8a2
+ name: protokube:1.9.0
- name: protokube:1.8.1
+ source: https://kubeupv2.s3.amazonaws.com/kops/1.9.0/images/protokube.tar.gz
- source: https://kubeupv2.s3.amazonaws.com/kops/1.8.1/images/protokube.tar.gz
__EOF_KUBE_ENV
...
LaunchConfiguration/nodes.hello.k8s.local
UserData
...
set -o pipefail
+ NODEUP_URL=https://kubeupv2.s3.amazonaws.com/kops/1.9.0/linux/amd64/nodeup
- NODEUP_URL=https://kubeupv2.s3.amazonaws.com/kops/1.8.1/linux/amd64/nodeup
+ NODEUP_HASH=54ecae66a2b4e1409b36fc00b550f2501afedbfc
+
- NODEUP_HASH=bb41724c37d15ab7e039e06230e742b9b38d0808
...
- max-file=5
storage: overlay,aufs
+ version: 17.03.2
- version: 1.13.1
kubeProxy:
clusterCIDR: 100.96.0.0/11
cpuRequest: 100m
- featureGates: null
hostnameOverride: '@aws'
+ image: gcr.io/google_containers/kube-proxy:v1.9.3
- image: k8s.gcr.io/kube-proxy:v1.8.7
logLevel: 2
kubelet:
...
networkPluginName: kubenet
nonMasqueradeCIDR: 100.64.0.0/10
+ podInfraContainerImage: gcr.io/google_containers/pause-amd64:3.0
- podInfraContainerImage: k8s.gcr.io/pause-amd64:3.0
podManifestPath: /etc/kubernetes/manifests
- requireKubeconfig: true
__EOF_CLUSTER_SPEC
...
nodeLabels:
kops.k8s.io/instancegroup: nodes
+ suspendProcesses: null
taints: null
...
cat > kube_env.yaml << '__EOF_KUBE_ENV'
Assets:
+ - ef979a00ba2f7bf4ee5023e82f94ced2d94c1726@https://storage.googleapis.com/kubernetes-release/release/v1.9.3/bin/linux/amd64/kubelet
- - 0f3a59e4c0aae8c2b2a0924d8ace010ebf39f48e@https://storage.googleapis.com/kubernetes-release/release/v1.8.7/bin/linux/amd64/kubelet
+ - a27d808eb011dbeea876fe5326349ed167a7ed28@https://storage.googleapis.com/kubernetes-release/release/v1.9.3/bin/linux/amd64/kubectl
- - 36340bb4bb158357fe36ffd545d8295774f55ed9@https://storage.googleapis.com/kubernetes-release/release/v1.8.7/bin/linux/amd64/kubectl
- - 1d9788b0f5420e1a219aad2cb8681823fc515e7c@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
+ - d595d3ded6499a64e8dac02466e2f5f2ce257c9f@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.6.0.tgz
+ - c6f310214f687b6c2f32e81c2a49235182950be3@https://kubeupv2.s3.amazonaws.com/kops/1.9.0/linux/amd64/utils.tar.gz
- - 42b15a0a0a56531750bde3c7b08d0cf27c170c48@https://kubeupv2.s3.amazonaws.com/kops/1.8.1/linux/amd64/utils.tar.gz
ClusterName: hello.k8s.local
ConfigBase: s3://state.hello.k8s.local/hello.k8s.local
...
- s3://state.hello.k8s.local/hello.k8s.local/addons/bootstrap-channel.yaml
protokubeImage:
+ hash: 4bbfcc6df1c1c0953bd0532113a74b7ae21e0ded
- hash: 0b1f26208f8f6cc02468368706d0236670fec8a2
+ name: protokube:1.9.0
- name: protokube:1.8.1
+ source: https://kubeupv2.s3.amazonaws.com/kops/1.9.0/images/protokube.tar.gz
- source: https://kubeupv2.s3.amazonaws.com/kops/1.8.1/images/protokube.tar.gz
__EOF_KUBE_ENV
...
ManagedFile/hello.k8s.local-addons-bootstrap
Contents
...
selector:
k8s-addon: kube-dns.addons.k8s.io
+ version: 1.14.9
- version: 1.14.8
- id: k8s-1.6
kubernetesVersion: '>=1.6.0'
...
selector:
k8s-addon: kube-dns.addons.k8s.io
+ version: 1.14.9
- version: 1.14.8
- id: k8s-1.8
kubernetesVersion: '>=1.8.0'
...
selector:
k8s-addon: dns-controller.addons.k8s.io
+ version: 1.9.0
- version: 1.8.0
- id: k8s-1.6
kubernetesVersion: '>=1.6.0'
...
selector:
k8s-addon: dns-controller.addons.k8s.io
+ version: 1.9.0
- version: 1.8.0
- id: v1.7.0
kubernetesVersion: '>=1.7.0'
...
ManagedFile/hello.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.6
Contents
...
k8s-addon: dns-controller.addons.k8s.io
k8s-app: dns-controller
+ version: v1.9.0
- version: v1.8.0
name: dns-controller
namespace: kube-system
...
k8s-addon: dns-controller.addons.k8s.io
k8s-app: dns-controller
+ version: v1.9.0
- version: v1.8.0
spec:
containers:
...
- --zone=*/*
- -v=2
+ image: kope/dns-controller:1.9.0
- image: kope/dns-controller:1.8.0
name: dns-controller
resources:
...
ManagedFile/hello.k8s.local-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
Contents
...
k8s-addon: dns-controller.addons.k8s.io
k8s-app: dns-controller
+ version: v1.9.0
- version: v1.8.0
name: dns-controller
namespace: kube-system
...
k8s-addon: dns-controller.addons.k8s.io
k8s-app: dns-controller
+ version: v1.9.0
- version: v1.8.0
spec:
containers:
...
- --zone=*/*
- -v=2
+ image: kope/dns-controller:1.9.0
- image: kope/dns-controller:1.8.0
name: dns-controller
resources:
...
ManagedFile/hello.k8s.local-addons-kube-dns.addons.k8s.io-k8s-1.6
Contents
...
- --logtostderr=true
- --v=2
+ image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2
- image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.2-r2
name: autoscaler
resources:
...
- name: PROMETHEUS_PORT
value: "10055"
+ image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.9
- image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
livenessProbe:
failureThreshold: 5
...
- -k
- --cache-size=1000
+ - --dns-forward-max=150
- --no-negcache
- --log-facility=-
...
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/in6.arpa/127.0.0.1#10053
+ image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.9
- image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
livenessProbe:
failureThreshold: 5
...
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
+ image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.9
- image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
livenessProbe:
failureThreshold: 5
...
ManagedFile/hello.k8s.local-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
Contents
...
- --logtostderr=true
- --v=2
+ image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0
- image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.0.0
name: autoscaler
resources:
...
- name: PROMETHEUS_PORT
value: "10055"
+ image: gcr.io/google_containers/kubedns-amd64:1.9
- image: k8s.gcr.io/kubedns-amd64:1.9
livenessProbe:
failureThreshold: 5
...
- args:
- --cache-size=1000
+ - --dns-forward-max=150
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
+ image: gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.14.9
- image: k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.8
livenessProbe:
failureThreshold: 5
...
- --v=2
- --logtostderr
+ image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
- image: k8s.gcr.io/dnsmasq-metrics-amd64:1.0
livenessProbe:
failureThreshold: 5
...
- --port=8080
- --quiet
+ image: gcr.io/google_containers/exechealthz-amd64:1.2
- image: k8s.gcr.io/exechealthz-amd64:1.2
name: healthz
ports:
...
RouteTable/hello.k8s.local
Tags {KubernetesCluster: hello.k8s.local, Name: hello.k8s.local} -> {Name: hello.k8s.local, KubernetesCluster: hello.k8s.local, kubernetes.io/cluster/hello.k8s.local: owned, kubernetes.io/kops/role: public}
SecurityGroup/masters.hello.k8s.local
Tags {Name: masters.hello.k8s.local, KubernetesCluster: hello.k8s.local} -> {Name: masters.hello.k8s.local, KubernetesCluster: hello.k8s.local, kubernetes.io/cluster/hello.k8s.local: owned}
SecurityGroup/nodes.hello.k8s.local
Tags {KubernetesCluster: hello.k8s.local, Name: nodes.hello.k8s.local} -> {Name: nodes.hello.k8s.local, KubernetesCluster: hello.k8s.local, kubernetes.io/cluster/hello.k8s.local: owned}
Must specify --yes to apply changes
% kops update cluster --yes
Using cluster from kubectl context: hello.k8s.local
I0420 20:31:55.615730 47173 executor.go:91] Tasks: 0 done / 77 total; 31 can run
I0420 20:31:55.696294 47173 logging_retryer.go:60] Retryable error (RequestError: send request failed
caused by: Post https://ec2.us-west-2.amazonaws.com/: EOF) from ec2/DescribeKeyPairs - will retry after delay of 35ms
I0420 20:31:59.537746 47173 executor.go:91] Tasks: 31 done / 77 total; 26 can run
I0420 20:32:04.352538 47173 executor.go:91] Tasks: 57 done / 77 total; 18 can run
I0420 20:32:11.539050 47173 executor.go:91] Tasks: 75 done / 77 total; 2 can run
I0420 20:32:12.183580 47173 executor.go:91] Tasks: 77 done / 77 total; 0 can run
I0420 20:32:12.184122 47173 dns.go:153] Pre-creating DNS records
I0420 20:32:15.840376 47173 update_cluster.go:291] Exporting kubecfg for cluster
kops has set your kubectl context to hello.k8s.local
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
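So far only the AWS-side definitions (Launch Configurations, tags, and addon manifests) have been updated; the running instances still carry the old 1.8.7 kubelet until they are replaced, and kops has switched the kubectl context to the cluster. A quick check before the rolling update (not part of the original log):

% kubectl config current-context
hello.k8s.local
% kubectl get nodes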
Recreate the EC2 instances with the kops rolling-update cluster command.
% kops rolling-update cluster
Using cluster from kubectl context: hello.k8s.local
NAME                STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-us-west-2a   NeedsUpdate  1           0      1    1    1
nodes               NeedsUpdate  3           0      3    3    3

Must specify --yes to rolling-update.

% kops rolling-update cluster --yes
Using cluster from kubectl context: hello.k8s.local
NAME                STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-us-west-2a   NeedsUpdate  1           0      1    1    1
nodes               NeedsUpdate  3           0      3    3    3
I0420 20:38:41.164848 47267 instancegroups.go:157] Draining the node: "ip-172-20-39-230.us-west-2.compute.internal".
node "ip-172-20-39-230.us-west-2.compute.internal" cordoned
node "ip-172-20-39-230.us-west-2.compute.internal" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: etcd-server-events-ip-172-20-39-230.us-west-2.compute.internal, etcd-server-ip-172-20-39-230.us-west-2.compute.internal, kube-apiserver-ip-172-20-39-230.us-west-2.compute.internal, kube-controller-manager-ip-172-20-39-230.us-west-2.compute.internal, kube-proxy-ip-172-20-39-230.us-west-2.compute.internal, kube-scheduler-ip-172-20-39-230.us-west-2.compute.internal
pod "dns-controller-dcb5b7668-kjb5m" evicted
node "ip-172-20-39-230.us-west-2.compute.internal" drained
I0420 20:40:21.517156 47267 instancegroups.go:273] Stopping instance "i-09db1c4916f902ccc", node "ip-172-20-39-230.us-west-2.compute.internal", in group "master-us-west-2a.masters.hello.k8s.local".
I0420 20:45:23.226395 47267 instancegroups.go:188] Validating the cluster.
I0420 20:45:32.898059 47267 instancegroups.go:249] Cluster validated.
I0420 20:45:35.639303 47267 instancegroups.go:157] Draining the node: "ip-172-20-42-214.us-west-2.compute.internal".
node "ip-172-20-42-214.us-west-2.compute.internal" cordoned
node "ip-172-20-42-214.us-west-2.compute.internal" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: kube-proxy-ip-172-20-42-214.us-west-2.compute.internal; Ignoring DaemonSet-managed pods: prometheus-node-exporter-hp78d
pod "kube-dns-6c4cb66dfb-zpbg8" evicted
pod "heapster-heapster-697757c69d-7q72n" evicted
...
node "ip-172-20-42-214.us-west-2.compute.internal" drained
I0420 20:47:14.245433 47267 instancegroups.go:273] Stopping instance "i-08698814f70aa43bb", node "ip-172-20-42-214.us-west-2.compute.internal", in group "nodes.hello.k8s.local".
I0420 20:51:16.646042 47267 instancegroups.go:188] Validating the cluster.
I0420 20:51:30.401590 47267 instancegroups.go:249] Cluster validated.
I0420 20:51:30.401634 47267 instancegroups.go:157] Draining the node: "ip-172-20-40-170.us-west-2.compute.internal".
node "ip-172-20-40-170.us-west-2.compute.internal" cordoned
node "ip-172-20-40-170.us-west-2.compute.internal" cordoned
WARNING: Ignoring DaemonSet-managed pods: prometheus-node-exporter-mzcb9; Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: kube-proxy-ip-172-20-40-170.us-west-2.compute.internal; Deleting pods with local storage: kubernetes-dashboard-778c8bcdb6-56mgh
pod "kube-dns-6c4cb66dfb-tbnpw" evicted
pod "kube-dns-autoscaler-f4c47db64-znjhn" evicted
...
node "ip-172-20-40-170.us-west-2.compute.internal" drained
I0420 20:53:27.672263 47267 instancegroups.go:273] Stopping instance "i-0bf2956226b308e8e", node "ip-172-20-40-170.us-west-2.compute.internal", in group "nodes.hello.k8s.local".
I0420 20:57:30.211080 47267 instancegroups.go:188] Validating the cluster.
I0420 20:57:35.694808 47267 instancegroups.go:249] Cluster validated.
I0420 20:57:35.694846 47267 instancegroups.go:157] Draining the node: "ip-172-20-33-12.us-west-2.compute.internal".
node "ip-172-20-33-12.us-west-2.compute.internal" cordoned
node "ip-172-20-33-12.us-west-2.compute.internal" cordoned
WARNING: Deleting pods with local storage: grafana-5d74df4b45-b9srz; Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: kube-proxy-ip-172-20-33-12.us-west-2.compute.internal; Ignoring DaemonSet-managed pods: prometheus-node-exporter-95wj6
pod "heapster-heapster-697757c69d-mzxxh" evicted
pod "kube-dns-6c4cb66dfb-mbd59" evicted
...
node "ip-172-20-33-12.us-west-2.compute.internal" drained
I0420 20:59:46.310060 47267 instancegroups.go:273] Stopping instance "i-0e3622a73e8046b63", node "ip-172-20-33-12.us-west-2.compute.internal", in group "nodes.hello.k8s.local".
I0420 21:03:47.697124 47267 instancegroups.go:188] Validating the cluster.
I0420 21:03:53.581898 47267 instancegroups.go:249] Cluster validated.
I0420 21:03:53.581958 47267 rollingupdate.go:193] Rolling update completed for cluster "hello.k8s.local"!
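Once the rolling update reports completion, it is worth a final check that the master and all three nodes have rejoined and now report v1.9.3. kops validate cluster repeats the same validation that rolling-update ran between instance replacements (commands only, output omitted; not part of the original log):

% kubectl get nodes
% kops validate cluster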