
Docker Kubernetes Dashboard: Detailed Installation and Deployment
This article gives a detailed walkthrough of installing and deploying the Kubernetes dashboard with Docker; readers who need it can use it as a reference.
Deploying the Kubernetes dashboard with Docker
1. Environment:
Note: all servers in this walkthrough run CentOS 7, and every service is installed via yum install.
1). Servers:
192.168.3.7 master
192.168.3.16 node
2). Packages used:
master: docker kubernetes-master etcd flannel
nodes: docker kubernetes-node flannel
3). Versions:
docker: 1.10.3
kubernetes: 1.2.0
etcd: 2.3.7
4). Package notes:
docker: the star of the show, needs no introduction
kubernetes-master: the Kubernetes server side
kubernetes-node: the Kubernetes node side
etcd: key-value store used for service discovery
flannel: provides network connectivity between Docker containers across multiple hosts
2. Environment initialization:
If you are already working with Docker, you know what basic initialization involves, so I won't spell it out here.
Back up the existing yum repo files, then set up the Aliyun yum repo and the EPEL repo.
Address: http://mirrors.aliyun.com
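A minimal sketch of that repo swap (the exact .repo paths on the mirror are assumptions here; verify them on http://mirrors.aliyun.com before running):
# mkdir -p /etc/yum.repos.d/backup && mv /etc/yum.repos.d/CentOS-*.repo /etc/yum.repos.d/backup/ ## keep the stock repos around
# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum clean all && yum makecache ## rebuild the cache against the new mirrors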
3. Install and configure Docker:
Note: Docker runs in net mode here. Make sure the device-mapper package is installed, otherwise Docker will not start.
# yum install docker -y
# cat /etc/sysconfig/docker|egrep -v "^#|^$"
OPTIONS=''
DOCKER_CERT_PATH=/etc/docker
4. Configure the master
1). Install the packages.
# yum install kubernetes-master etcd flannel -y
2). Configure etcd.
# cat /etc/etcd/etcd.conf |egrep -v "^#|^$"
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" ## listen address and port
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.3.7:2379" ## etcd cluster configuration; with multiple etcd servers, simply append their URLs here
## start the etcd service
# systemctl start etcd
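Before moving on, it is worth confirming that etcd answers on the advertised URL; a quick probe with the etcdctl v2 client that ships with this etcd:
# etcdctl --endpoints http://192.168.3.7:2379 cluster-health ## should report the member as healthy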
3). Configure Kubernetes.
The /etc/kubernetes directory contains the following files:
apiserver: Kubernetes API server configuration file
config: main Kubernetes configuration file
controller-manager: cluster controller-manager configuration file
scheduler: Kubernetes scheduler configuration file
# cd /etc/kubernetes
# cat apiserver |egrep -v "^#|^$"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" ## address kube binds to at startup
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.3.7:2379" ## URL kube uses to reach etcd
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.17.0.0/16" ## this range is the address segment for the docker containers
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
Note the KUBE_ADMISSION_CONTROL line: remove the ServiceAccount entry, otherwise you will hit "no credentials" authentication errors later.
# cat config |egrep -v "^#|^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.3.7:8080" ## kube master API URL
The controller-manager and scheduler files can be left at their defaults.
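Once the master services are running (section 7 below), this configuration can be sanity-checked in one go; componentstatuses reports the scheduler, controller-manager, and etcd health as seen by the apiserver:
# kubectl -s http://192.168.3.7:8080 get componentstatuses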
5. Configure the nodes
1). Install the packages.
# yum install kubernetes-node flannel -y
2). Configure the Kubernetes node
After the packages are installed, the following files appear under /etc/kubernetes:
config: main Kubernetes configuration file
kubelet: kubelet node configuration file
proxy: Kubernetes proxy configuration file
# cd /etc/kubernetes
# cat config |egrep -v "^#|^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.3.7:8080" ## kube master API URL
# cat kubelet |egrep -v "^#|^$"
KUBELET_ADDRESS="--address=0.0.0.0" ## address kubelet binds to after startup
KUBELET_PORT="--port=10250" ## kubelet port
KUBELET_HOSTNAME="--hostname-override=192.168.3.16" ## kubelet hostname; this is the name shown by kubectl get nodes on the master
KUBELET_API_SERVER="--api-servers=http://192.168.3.7:8080" ## kube master API URL
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
The proxy configuration can be left at its defaults.
6. Network configuration:
flannel is already installed on both the master and the node.
Master configuration:
# cat /etc/sysconfig/flanneld |egrep -v "^#|^$"
FLANNEL_ETCD="http://192.168.3.7:2379"
FLANNEL_ETCD_KEY="/kube/network"
# etcdctl mk /kube/network/config '{"Network":"172.17.0.0/16"}' ## note: keep this network consistent with the address range used above
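flanneld reads its subnet configuration from this key at startup, so it pays to confirm the write took; etcdctl simply echoes the stored JSON back:
# etcdctl get /kube/network/config
{"Network":"172.17.0.0/16"}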
Node configuration (identical to the master):
# cat /etc/sysconfig/flanneld |egrep -v "^#|^$"
FLANNEL_ETCD="http://192.168.3.7:2379"
FLANNEL_ETCD_KEY="/kube/network"
7. Start the services.
1). Start the Docker service.
# systemctl start docker
# ps aux|grep docker ## confirm the service started properly; if it did not, check /var/log/messages for the cause
2). Start the etcd service
# systemctl start etcd
3). Start the flanneld service on both the master and the node
# systemctl start flanneld
Check the interfaces: a flannel0 network device should appear, and its address should agree with docker0. If they do not match, verify that the services above started correctly.
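A quick way to compare the two interfaces side by side:
# ip addr show flannel0 | grep inet ## the subnet flannel allocated from /kube/network
# ip addr show docker0 | grep inet ## docker's bridge; it must fall inside the same range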
4). Start the k8s services on the master.
Start order: kube-apiserver comes first.
# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
Confirm that all of the above services started correctly.
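A small loop makes that check quick (plain systemd, nothing k8s-specific):
# for s in kube-apiserver kube-controller-manager kube-scheduler; do echo -n "$s: "; systemctl is-active $s; done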
5). Start the k8s services on the node.
# systemctl start kube-proxy
# systemctl start kubelet
Confirm that all of the above services started correctly.
6). Visit http://kube-apiserver:port
http://192.168.3.7:8080 lists all available request URLs
http://192.168.3.7:8080/healthz/ping checks the health status
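Both endpoints can be hit with curl from any host that reaches the master; /healthz should answer with a plain "ok":
# curl http://192.168.3.7:8080/healthz
ok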
8. Enable the k8s dashboard:
1). Verify the services on the master.
# kubectl get nodes ## list the k8s nodes
NAME STATUS AGE
192.168.3.16 Ready 6h
# kubectl get namespace ## list all k8s namespaces
NAME STATUS AGE
default Active 17h
2). Create the kube-system namespace on the master
# cd /usr/local/src/docker
# cat kube-namespace.yaml
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-system"
  }
}
# kubectl create -f kube-namespace.yaml
namespace "kube-system" created
# kubectl get namespace ## list all k8s namespaces
NAME STATUS AGE
default Active 17h
kube-system Active 17h
3). Create kube-dashboard.yaml on the master
wget http://docs.minunix.com/docker/kubernetes-dashboard.yaml -O /usr/local/src/docker/kube-dashboard.yaml
Edit the file and change apiserver-host to your own kube-apiserver address.
# kubectl create -f kube-dashboard.yaml
deployment "kubernetes-dashboard" created
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31576) to serve traffic.
See http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md for more details.
service "kubernetes-dashboard" created
# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard--grtfm 1/1 ContainerCreating 0 27s
To follow the container's progress in detail:
# kubectl describe pods kubernetes-dashboard--grtfm --namespace=kube-system
With multiple nodes, this command shows which node the container was scheduled onto, the IP address assigned after startup, and so on.
Once the output shows "State: Running", go to the node and check the container there; it should be up as well.
4). The dashboard can now be reached at http://kube-apiserver:port/ui
http://192.168.3.7:8080/ui
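Either route can be checked quickly with curl, assuming the NodePort printed during service creation above (31576 in this run; yours will differ):
# curl -I http://192.168.3.7:8080/ui ## through the apiserver, which redirects to the dashboard UI
# curl -I http://192.168.3.16:31576 ## straight to the NodePort on the node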
Now enjoy Docker to your heart's content!
9. Notes & problems encountered:
1). Mind the service start order, especially on the master: make sure etcd is up, then start the apiserver first.
2). Mind the indentation in the yaml files.
3). If a freshly created pod stays in Pending, there are a few likely causes: first, some service on the node listens only on 127.0.0.1 and the master cannot reach it; second, the environment initialization was not done properly; third, go to the node and inspect the logs with docker logs.
4). The container image address in kubernetes-dashboard.yaml is public for now, but it will be shut down on September 30.
5). If you have a VPS abroad, you can pull Google's k8s dashboard image there first, push it to your own registry, and just change the image in the yaml accordingly.
Thanks for reading, and I hope it helps!
Installing the Dashboard and Heapster on Kubernetes 1.5
Installing Kubernetes 1.5 on all nodes
For some reason the blog formatting got completely scrambled and is tiring to read, and I don't know how to fix it, so this article has been split into three parts. The links are below:
Part 1, cluster installation:
http://blog.csdn.net/wenwst/article/details/
Part 2, dashboard installation:
http://blog.csdn.net/wenwst/article/details/
Part 3, heapster installation:
http://blog.csdn.net/wenwst/article/details/
System configuration:
3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC x86_64 x86_64 GNU/Linux
Pre-configuration steps on the system:
Last login: Mon Dec 26 22:26:56 2016
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]#
Set the hostname:
hostnamectl --static set-hostname centos-master
Make sure SELinux is disabled:
/etc/selinux/config
SELINUX=disabled
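The config file change only takes effect after a reboot; to stop enforcement immediately for the current boot as well:
[root@localhost ~]# setenforce 0 ## permissive until reboot
[root@localhost ~]# getenforce ## verify: prints Permissive now, Disabled after the reboot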
----------------------------------------- Optional --------------------------
Add the following two lines to /etc/hosts:
61.91.161.217 gcr.io
61.91.161.217 www.gcr.io
--------------------------------------------------------------------------
Install the following on all nodes:
Kubernetes 1.5
(configuration taken from the official site)
Add the following yum repo on CentOS:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
gpgcheck=0
repo_gpgcheck=0
EOF
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
--- Note: adjust this to your own situation; if the image downloads in the next step are too slow, add the following ---
In the Docker unit file /lib/systemd/system/docker.service, add --registry-mirror="http://b438f72b.m.daocloud.io" to the ExecStart line.
In detail:
[root@localhost ~]# vi /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd --registry-mirror="http://b438f72b.m.daocloud.io"
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
After editing the unit file, reload systemd (systemctl daemon-reload), then enable Docker at boot:
systemctl enable docker
Start Docker:
systemctl start docker
Enable kubelet at boot:
systemctl enable kubelet
Start kubelet:
systemctl start kubelet
Download the images:
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
docker pull jicki/$imageName
docker tag jicki/$imageName gcr.io/google_containers/$imageName
docker rmi jicki/$imageName
done
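Afterwards, a quick listing confirms the retagged images are present under the gcr.io names kubeadm expects:
docker images | grep gcr.io/google_containers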
--------------------------------------------------
Although we download weaveworks/weave-kube:1.8.2 here, still pay attention to the version referenced in the weaveworks yaml you install.
DNS in particular is installed automatically by kubeadm, so there is no yaml for it; use the following command to inspect it:
kubectl --namespace=kube-system edit deployment kube-dns
--------------------------------------------------
These two are for networking:
docker pull weaveworks/weave-kube:1.8.2
docker pull weaveworks/weave-npc:1.8.2
These two are for monitoring:
docker pull kubernetes/heapster:canary
docker pull kubernetes/heapster_influxdb:v0.6
docker pull gcr.io/google_containers/heapster_grafana:v3.1.1
All of the steps above must be carried out on every node, and the images must be downloaded on every node as well.
Next, configure the cluster:
Run the following on your master server:
kubeadm init --api-advertise-addresses=192.168.7.206 --pod-network-cidr 10.245.0.0/16
192.168.7.206 above is the address of my master.
This command must not be run twice; it may only be run once. To run it again, execute kubeadm reset first.
The output looks like this:
[root@centos-master ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "60a95a.93c5ab"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 81.803134 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 2.002437 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 22.002704 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --token=60a95a.93c5ab 192.168.7.206
Copy this down; it will be needed later.
The steps above initialized the master. The last line carries the token, which will be used to add nodes.
Next, set up the nodes:
Run the following command on every node:
kubeadm join --token=60a95a.93c5ab 192.168.7.206
The output after running it:
[root@centos-minion-1 kubelet]# kubeadm join --token=60a95a.93c5ab 192.168.7.206
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.7.206:9898/cluster-info/v1/?token-id=60a95a"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.7.206:6443]
[bootstrap] Trying to connect to endpoint https://192.168.7.206:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://192.168.7.206:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:centos-minion-1 | CA: false
Not before: 07:06:00 +0000 UTC | Not after: 07:06:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
Run kubectl get nodes on the master:
[root@centos-master ~]# kubectl get nodes
centos-master
Ready,master
centos-minion-1
centos-minion-2
45s
There is a command floating around online:
kubectl taint nodes --all dedicated-
[root@centos-master ~]# kubectl taint nodes --all dedicated-
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.
Supposedly this command allows pods to be scheduled on the master as well.
But running it had no effect.
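One way to see why nothing changed is to check whether the master carries any taint at all, and under which key; in 1.5 taints are still stored as an alpha annotation on the node object:
[root@centos-master ~]# kubectl get node centos-master -o yaml | grep -i taint
If this prints nothing, there is no taint for the command above to remove.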
[root@centos-master ~]# kubectl get nodes
centos-master
Ready,master
centos-minion-1
centos-minion-2
Still the same as before.
[root@centos-master new]# kubectl --namespace=kube-system get pod
dummy--9zfjl
etcd-centos-master
kube-apiserver-centos-master
kube-controller-manager-centos-master
kube-discovery--6ldk1
kube-proxy-34q7p
kube-proxy-hqkkg
kube-proxy-nbgn3
kube-scheduler-centos-master
weave-net-kkdh9
weave-net-mtd83
weave-net-q91sr
Now the pod network needs to be installed.
Run the following command:
kubectl apply -f https://git.io/weave-kube
The image referenced by this command may fail to download; in that case, proceed as follows.
[root@centos-master new]# vi weave-daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: weave-net
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: weave-net
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [{
            "key": "dedicated",
            "operator": "Equal",
            "value": "master",
            "effect": "NoSchedule"
          }]
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: weave
          image: weaveworks/weave-kube:1.8.2
          command:
            - /home/weave/launch.sh
          livenessProbe:
            initialDelaySeconds: 30
            httpGet:
              host: 127.0.0.1
              path: /status
              port: 6784
          securityContext:
            privileged: true
          volumeMounts:
            - name: weavedb
              mountPath: /weavedb
            - name: cni-bin
              mountPath: /opt
            - name: cni-bin2
              mountPath: /host_home
            - name: cni-conf
              mountPath: /etc
          resources: {}
        - name: weave-npc
          image: weaveworks/weave-npc:1.8.2
          resources: {}
          securityContext:
            privileged: true
      restartPolicy: Always
      volumes:
        - name: weavedb
          emptyDir: {}
        - name: cni-bin
          hostPath:
            path: /opt
        - name: cni-bin2
          hostPath:
            path: /home
        - name: cni-conf
          hostPath:
            path: /etc
Mind the image addresses above: if the images you downloaded earlier do not match, find images of the corresponding version first.
kubectl apply -f weave-daemonset.yaml
After installation, the pods show up:
[root@localhost ~]# kubectl --namespace=kube-system get pod
dummy--xjj21
etcd-centos-master
kube-apiserver-centos-master
kube-controller-manager-centos-master
kube-discovery--c45gd
kube-dns--96xms
kube-proxy-33lsn
kube-proxy-jnz6q
kube-proxy-vfql2
kube-scheduler-centos-master
weave-net-k5tlz
weave-net-q3n89
weave-net-x57k7
The dashboard installation goes as follows:
Download the installation yaml file:
https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
[root@centos-master new]# cat kubernetes-dashboard.yaml
# Copyright 2015 Google Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI.
# Example usage: kubectl create -f <this_file>
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [{
            "key": "dedicated",
            "operator": "Equal",
            "value": "master",
            "effect": "NoSchedule"
          }]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
The image referenced above (gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0) can be pulled onto the master server in advance.
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Checking shows that kubernetes-dashboard--w3fjd is already running:
[root@centos-master new]# kubectl get pod --all-namespaces
NAMESPACE     NAME
kube-system   dummy--9zfjl
kube-system   etcd-centos-master
kube-system   kube-apiserver-centos-master
kube-system   kube-controller-manager-centos-master
kube-system   kube-discovery--6ldk1
kube-system   kube-proxy-34q7p
kube-system   kube-proxy-hqkkg
kube-system   kube-proxy-nbgn3
kube-system   kube-scheduler-centos-master
kube-system   kubernetes-dashboard--w3fjd
kube-system   weave-net-kkdh9
kube-system   weave-net-mtd83
kube-system   weave-net-q91sr
The node port turns out to be 31551:
[root@centos-master new]# kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name:             kubernetes-dashboard
Namespace:        kube-system
Labels:           app=kubernetes-dashboard
Selector:         app=kubernetes-dashboard
IP:               10.96.35.20
Port:             <unset> 80/TCP
NodePort:         <unset> 31551/TCP
Endpoints:        10.40.0.1:9090
Session Affinity:
No events.
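With the NodePort known, the dashboard is reachable through any node's IP from outside the cluster (<node-ip> below is a placeholder; substitute one of your own node addresses):
curl -I http://<node-ip>:31551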
Install heapster
The yaml files come from GitHub.
[root@localhost heapster]# cat grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      volumes:
      - name: grafana-storage
        emptyDir: {}
      containers:
      - name: grafana
        image: gcr.io/google_containers/heapster_grafana:v3.1.1
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /
The image in grafana-deployment.yaml was already downloaded above (docker pull gcr.io/google_containers/heapster_grafana:v3.1.1).
[root@localhost heapster]# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
The image in influxdb-deployment.yaml was downloaded earlier as well.
[root@localhost heapster]# cat influxdb-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      volumes:
      - name: influxdb-storage
        emptyDir: {}
      containers:
      - name: influxdb
        image: kubernetes/heapster_influxdb:v0.6
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
[root@localhost heapster]# cat influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  # type: NodePort
  ports:
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
heapster-deployment.yaml:
[root@localhost heapster]# cat heapster-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
        version: v6
    spec:
      containers:
      - name: heapster
        image: kubernetes/heapster:canary
        imagePullPolicy: Always
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086
This image was downloaded earlier too.
[root@localhost heapster]# cat heapster-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
Once all six files are ready:
kubectl create -f grafana-deployment.yaml -f grafana-service.yaml -f influxdb-deployment.yaml -f influxdb-service.yaml -f heapster-deployment.yaml -f heapster-service.yaml
[root@localhost heapster]# kubectl get pod --namespace=kube-system
dummy--xjj21
etcd-centos-master
heapster--j1jxn
kube-apiserver-centos-master
kube-controller-manager-centos-master
kube-discovery--c45gd
kube-dns--96xms
kube-proxy-33lsn
kube-proxy-jnz6q
kube-proxy-vfql2
kube-scheduler-centos-master
kubernetes-dashboard--8mxgz
monitoring-grafana--h92v7
monitoring-influxdb--q2445
weave-net-k5tlz
weave-net-q3n89
weave-net-x57k7
[root@localhost heapster]# kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)
heapster               10.98.45.1      <none>        80/TCP
kube-dns               10.96.0.10      <none>        53/UDP,53/TCP
kubernetes-dashboard   10.108.45.66    <nodes>       80:32155/TCP
monitoring-grafana     10.97.110.225   <nodes>       80:30687/TCP
monitoring-influxdb    10.96.175.67    <none>        8086/TCP
Look at the details of the grafana service:
[root@localhost heapster]# kubectl --namespace=kube-system describe svc monitoring-grafana
Name:             monitoring-grafana
Namespace:        kube-system
Labels:           kubernetes.io/cluster-service=true
                  kubernetes.io/name=monitoring-grafana
Selector:         k8s-app=grafana
IP:               10.97.110.225
Port:             <unset> 80/TCP
NodePort:         <unset> 30687/TCP
Endpoints:        10.32.0.2:3000
Session Affinity:
No events.
The exposed port is 30687.
Access Grafana via a node IP plus that port, confirm the k8s data source, and the graphs come up.
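With heapster now feeding InfluxDB, the same CPU and memory figures can also be pulled from the command line; if your kubectl build already ships the top subcommand (it depends on heapster in this release), for example:
kubectl top node ## per-node CPU/memory usage
kubectl top pod --namespace=kube-system ## per-pod usage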