Grafana Dashboards and AlertManager Rules

Core questions: how do you turn PromQL queries into intuitive dashboards? Under what conditions should an alert fire, and where should it be delivered?


Grafana Dashboard as Code

Dashboards created by hand in the Grafana UI are easy to lose. Persist them as code instead, via a ConfigMap or Grafana provisioning:

# grafana-dashboards-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"     # the kube-prometheus-stack sidecar auto-loads ConfigMaps carrying this label
data:
  api-dashboard.json: |
    {
      "title": "API Service Dashboard",
      "uid": "api-dashboard",
      "refresh": "30s",
      "panels": [
        {
          "title": "Request Rate (QPS)",
          "type": "graph",
          "gridPos": { "x": 0, "y": 0, "w": 12, "h": 8 },
          "targets": [{
            "expr": "sum(rate(http_requests_total{job='api-server'}[5m])) by (status_code)",
            "legendFormat": "{{status_code}}"
          }]
        },
        {
          "title": "P95 Latency",
          "type": "graph",
          "gridPos": { "x": 12, "y": 0, "w": 12, "h": 8 },
          "targets": [{
            "expr": "histogram_quantile(0.95, sum by(le) (rate(http_request_duration_seconds_bucket{job='api-server'}[5m])))",
            "legendFormat": "P95"
          }]
        }
      ]
    }
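
For the label-based loading above to work, the dashboard sidecar has to be enabled in the kube-prometheus-stack Helm values. A minimal sketch, with key names assumed from the upstream Grafana chart bundled by kube-prometheus-stack (verify against your chart version):

# values.yaml (kube-prometheus-stack) — sketch; key names assumed from the upstream Grafana chart
grafana:
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard     # must match the label set on the ConfigMap above
      labelValue: "1"
      searchNamespace: ALL         # watch every namespace, not only Grafana's own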

Prometheus Alerting Rules

# PrometheusRule CRD (declarative alerting rules via the Prometheus Operator)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: api-alerts
  namespace: production
  labels:
    release: kube-prometheus-stack   # must match the Prometheus ruleSelector
spec:
  groups:
  - name: api.rules
    interval: 1m
    rules:
    # ---- Availability alerts ----
    - alert: APIHighErrorRate
      expr: |
        sum(rate(http_requests_total{job="api-server", status_code=~"5.."}[5m]))
        /
        sum(rate(http_requests_total{job="api-server"}[5m]))
        > 0.05
      for: 2m          # must hold for 2 minutes before firing (avoids flapping)
      labels:
        severity: critical
        team: api
      annotations:
        summary: "API error rate is too high ({{ $value | humanizePercentage }})"
        description: "API error rate in namespace {{ $labels.namespace }} has stayed above 5% for more than 2 minutes"
        runbook_url: "https://wiki.example.com/runbooks/api-high-error-rate"
    # ---- Latency alerts ----
    - alert: APIHighP95Latency
      expr: |
        histogram_quantile(0.95,
          sum by(le) (rate(http_request_duration_seconds_bucket{job="api-server"}[5m]))
        ) > 1
      for: 5m
      labels:
        severity: warning
        team: api
      annotations:
        summary: "API P95 latency is too high ({{ $value | humanizeDuration }})"
        description: "P95 latency has stayed above 1 second for more than 5 minutes"
    # ---- Pod health alerts ----
    - alert: PodCrashLooping
      expr: |
        increase(kube_pod_container_status_restarts_total[1h]) > 5
      for: 0m           # fire immediately
      labels:
        severity: warning
      annotations:
        summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"
        description: "Restarted {{ $value }} times in the last hour"
    - alert: DeploymentReplicasMismatch
      expr: |
        kube_deployment_status_replicas_available{namespace="production"}
          < kube_deployment_spec_replicas{namespace="production"}
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Deployment {{ $labels.deployment }} has fewer available replicas than desired"
    # ---- Resource alerts ----
    - alert: NodeDiskSpaceLow
      expr: |
        predict_linear(
          node_filesystem_free_bytes{mountpoint="/"}[6h],
          4 * 3600
        ) / node_filesystem_size_bytes{mountpoint="/"} < 0.1
      for: 0m
      labels:
        severity: critical
      annotations:
        summary: "Disk on node {{ $labels.instance }} is predicted to fall below 10% free within 4 hours"
    - alert: ContainerOOMKilled
      expr: |
        kube_pod_container_status_last_terminated_reason{reason="OOMKilled"} == 1
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: "Container {{ $labels.container }} was OOMKilled"

AlertManager Configuration

AlertManager receives alerts from Prometheus and handles routing, grouping, silencing, and notification delivery:

# alertmanager-config.yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: main-config
  namespace: monitoring
spec:
  route:
    groupBy: ['alertname', 'namespace']
    groupWait: 30s           # wait 30 seconds so related alerts are batched into one notification
    groupInterval: 5m        # send at most one update per group every 5 minutes
    repeatInterval: 4h       # resend still-firing alerts every 4 hours
    receiver: default-receiver
    routes:
    # Critical alerts go straight to PagerDuty
    - matchers:
      - name: severity
        value: critical
      receiver: pagerduty-critical
      groupWait: 0s
      repeatInterval: 1h
    # API team alerts go to the API Slack channel
    - matchers:
      - name: team
        value: api
      receiver: slack-api-team
  receivers:
  - name: default-receiver
    slackConfigs:
    - apiURL:
        key: slack-webhook-url
        name: alertmanager-secrets
      channel: '#alerts-general'
      title: '{{ .GroupLabels.alertname }}'
      text: |
        *Firing alerts*: {{ len .Alerts.Firing }}
        {{ range .Alerts.Firing }}
        *Alert*: {{ .Annotations.summary }}
        *Description*: {{ .Annotations.description }}
        *Severity*: {{ .Labels.severity }}
        {{ end }}
      sendResolved: true
  - name: slack-api-team
    slackConfigs:
    - apiURL:
        key: slack-webhook-url
        name: alertmanager-secrets
      channel: '#alerts-api'
      title: '⚠️ {{ .GroupLabels.alertname }}'
      text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
  - name: pagerduty-critical
    pagerdutyConfigs:
    - serviceKey:
        key: pagerduty-service-key
        name: alertmanager-secrets
      description: '{{ .GroupLabels.alertname }}: {{ .CommonAnnotations.summary }}'
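
The slackConfigs and pagerdutyConfigs above pull credentials from Secret keys. A minimal sketch of that Secret with placeholder values (the real webhook URL and PagerDuty integration key are yours to supply):

# alertmanager-secrets.yaml — placeholder values; same namespace as the AlertmanagerConfig
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-secrets
  namespace: monitoring
type: Opaque
stringData:
  slack-webhook-url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"
  pagerduty-service-key: "<pagerduty-integration-key>"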

Alert Severity Levels

| Level | Severity label | Response requirement | Typical scenarios |
|-------|----------------|----------------------|-------------------|
| P0 | critical | Immediate response (within 5 minutes), page the on-call | Service fully unavailable, data loss |
| P1 | warning | Respond within 1 hour | Rising error rate, increased latency |
| P2 | info | Handle during working hours | Resource warnings, configuration drift |

Golden Signals for Alerting (SRE's Four Golden Signals)

| Signal | PromQL example | Suggested alert threshold |
|--------|----------------|---------------------------|
| Latency | histogram_quantile(0.95, ...) | P95 > 500ms |
| Traffic | rate(http_requests_total[5m]) | Abnormal swing of ±50% |
| Errors | rate(errors[5m]) / rate(requests[5m]) | warn at > 1%, critical at > 5% |
| Saturation | CPU / memory / disk utilization | CPU > 80%, memory > 85% |
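
As a concrete instance of the saturation row, a CPU usage alert matching the 80% threshold might look like the sketch below (node-exporter metric names; the alert name, threshold, and duration are assumptions to tune for your environment):

# Saturation alert sketch — append to a rule group such as api.rules above
- alert: NodeHighCPUUsage
  expr: |
    100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
  for: 10m
  labels:
    severity: warning
  annotations:
    summary: "Node {{ $labels.instance }} CPU usage has stayed above 80% for 10 minutes"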

Silences

Temporarily suppress a class of alerts during a maintenance window:

# Create a silence from the CLI (2-hour maintenance window)
amtool silence add \
  --alertmanager.url http://alertmanager.monitoring:9093 \
  --duration 2h \
  --author "ops-team" \
  --comment "scheduled maintenance window" \
  namespace=production

# List active silences
amtool silence query --alertmanager.url http://alertmanager.monitoring:9093

# Expire a silence early
amtool silence expire <silence-id>

Next chapter: Multi-Environment and Multi-Cluster Management