# hardening.yml
# 2019-02-17 (cc) <paul4hough@gmail.com>
#
---
Why: .-
How: UbU
Who: '!:)!'
When: yesterday
Vendor priority alignment - my/our priorities reduce cost; short contracts;
people > vendor
open vendor + people > people
sf cost, shared time
sf value ... ROFL
to: dl :)
subject: graphic UbU
Enjoy !:)!
UbU: the u-Bee-u Bee graphic - contest
ted finn *= ted paul
- contributors list
** += **^**
Those with give-a-.*itis are sought after ...
operator ++ (n) { for n := 0.0; ; n++ { n += n^n } }
Why - dot-dash
How - UbU
Who - :)
for dot-dash := 100, maul := 2; maul > (dot-dash / 40); maul++ {
  sleep 7 * 24 * 60 * 60  # one week
}
dot-dash:
dnr: maul - it's not gentle
p: ?
a: 1, 5, 10, 20, 40, 80, 120, n year company
y: !:)!
it's where you're going !:)!
func export(term) how { if term == 'ccu' { return }; return UbU !:)! }
validation is the path to automated change management
prove-it is first nature
join me in the next iteration
security analyzer integration - o my
go big or go home
conditions:
down:
- hardware:
consul_metric: node gone
- exporters:
metric: up
- component:
list:
- prometheus
- alertmanager
- grafana
- blackbox_exporter
- vmware_exporter
- agate
detect: cross cluster scraping # see peer-scrape sketch below
- downstream:
ticket-sys:
- hpsm
- gitlab
- mock
- ...
detect:
- blackbox
- agate
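# sketch of the cross-cluster detection above: each cluster's prometheus
# scrapes the peer cluster's monitoring components, so a dead component
# shows as up == 0 from outside even when the dead thing is the peer's
# own prometheus. hostnames and ports here are assumptions:
#
# scrape_configs:
#   - job_name: peer-monitoring
#     static_configs:
#       - targets:
#           - prometheus.cluster-b:9090
#           - alertmanager.cluster-b:9093
#           - grafana.cluster-b:3000
#           - blackbox-exporter.cluster-b:9115
#
# - alert: PeerComponentDown
#   expr: up{job="peer-monitoring"} == 0
#   for: 5m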
response:
agate:
- persistent outbound queue:
queue max - weeks - save and panic
- retry: for n hours
- report
- repeat
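# the queue policy above as a hypothetical config shape - these keys are
# illustration only, not agate's real settings:
#
# outbound_queue:
#   max_age: 2w          # "queue max - weeks": past this, save to disk and panic
#   retry_interval: 10m  # per-attempt spacing; retried "for n hours" overall
#   on_failure:
#     - report           # bump a metric, log details
#     - repeat           # requeue and retry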
noisy:
response: amtool inhibition and suppression
detection: alertmanager scrapes
response: alert excessive # rule sketch below
cloudera top-level api alert inhibited by low-level alerts.
label:
team: cloudera
cluster: abc
alert: cloudera_api_error
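# two sketches for the noisy handling above. first, "alert excessive" via
# alertmanager's own metrics (the threshold is an arbitrary assumption):
#
# - alert: ExcessiveAlerts
#   expr: alertmanager_alerts{state="active"} > 100
#   for: 15m
#
# second, the cloudera inhibition as an alertmanager inhibit_rules entry;
# the severity label on the source is an assumed marker for the low-level
# alerts:
#
# inhibit_rules:
#   - source_match:
#       team: cloudera
#       severity: low
#     target_match:
#       alertname: cloudera_api_error
#     equal: ['cluster']
#
# one-off suppression by hand, e.g.:
#   amtool silence add alertname=cloudera_api_error --duration=2h --comment="known noisy"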
redirect - o my
metrics w/ retry
- consumer: see above
dest ticket system down:
- metric: bbox_ping{ ... }
bbox: snarf status
scrape_freq: 15m
alert:
- tsys down
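# sketch of the blackbox probing above. bbox_ping is this file's name for
# it; a stock blackbox_exporter exposes probe_success instead. target urls
# are placeholders:
#
# scrape_configs:
#   - job_name: ticket-sys
#     metrics_path: /probe
#     scrape_interval: 15m
#     params:
#       module: [http_2xx]
#     static_configs:
#       - targets: [https://gitlab.example.com, https://hpsm.example.com]
#     relabel_configs:
#       - source_labels: [__address__]
#         target_label: __param_target
#       - source_labels: [__param_target]
#         target_label: instance
#       - target_label: __address__
#         replacement: blackbox-exporter:9115
#
# - alert: TsysProbeFailed
#   expr: probe_success{job="ticket-sys"} == 0
#   for: 30m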
alert:
- metric: agate_ticket_sys_down{ name =~ "gitlab|hpsm|mock|george" }
agate:
responses:
- metric.Incr()
- log details
- respond:
- agate_tsys_retry{ name, since="" }
- retry 10m
alert:
- tsys_retry > 12 # still down after 12 retries (~2h at 10m) - pay attention
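# the two alerts above as rule sketches. == 1 assumes the down metric is a
# 0/1 gauge; at a 10m retry interval, 12 retries is roughly two hours:
#
# - alert: TsysDownReported
#   expr: agate_ticket_sys_down{name=~"gitlab|hpsm|mock|george"} == 1
#   for: 10m
#
# - alert: TsysRetryExceeded
#   expr: agate_tsys_retry > 12
#   labels:
#     severity: page
#   annotations:
#     summary: "ticket system {{ $labels.name }} still down after 12 retries"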
exporter down:
- metric: up
node down:
metric: consul
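# sketches for the last two conditions. up is prometheus's built-in
# per-target health metric; the consul metric name is an assumption
# (consul_exporter's consul_health_node_status is one candidate):
#
# - alert: ExporterDown
#   expr: up == 0
#   for: 5m
#
# - alert: NodeGone
#   expr: consul_health_node_status{status="critical"} == 1
#   for: 5m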