Things to remember to do:
* Production code is checked out to directories underneath
  /global/cfs/cdirs/desc-td/SOFTWARE/tom_deployment
    production : the tom_desc checkout used by desc-tom.lbl.gov
    production-2 : the tom_desc checkout used by desc-tom-2.lbl.gov
* Update the Spin server if necessary
* kill -HUP 1 on the TOM server to make it pick up new code changes
* If the database schema is changed in the code, run
      python manage.py pgmakemigrations -n <short name>
  to make new migrations
* python manage.py migrate when there are new migrations
  (the whole update cycle is sketched below)
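A rough sketch of the whole update cycle (the path is from above; the
last two steps assume you've shelled into the running TOM pod):
    cd /global/cfs/cdirs/desc-td/SOFTWARE/tom_deployment/production/tom_desc
    git pull
    # then, inside the running TOM pod:
    python manage.py migrate      # only if there are new migrations
    kill -HUP 1                   # server runs as PID 1; makes it reload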
======================================================================
Resetting migrations
WARNING : This will blow away the database tables for the app!!!
I needed to do this when I wanted to move from PostGIS to G3C, since the
old migrations would have required PostGIS to work. What I did (sketched
as a script after this list):
* python manage.py migrate <app> zero
* blow away everything in <app>/migrations
* python manage.py makemigrations <app>
* python manage.py migrate <app>
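The same sequence as a shell sketch ("elasticc" here is just an example
app name; keeping migrations/__init__.py means django still sees the
migrations package):
    APP=elasticc
    python manage.py migrate $APP zero
    rm $APP/migrations/0*.py       # keep migrations/__init__.py
    python manage.py makemigrations $APP
    python manage.py migrate $APP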
======================================================================
Connecting to the PITT-Google broker:
Requires pittgoogle-client>=0.2.0 from pip
Need to authenticate a service account : https://mwvgroup.github.io/pittgoogle-client/overview/authentication.html#service-account-recommended
Google cloud project :
https://console.cloud.google.com/welcome?project=elasticc-challenge
Google's instructions for creating a service account :
https://cloud.google.com/iam/docs/service-accounts-create#creating
Account needs role "Pub/Sub editor"
Under "IAM & Admin" (left sidebar, if you get to the right page in the
mess that is the Google Cloud UI), do "Service Accounts". Find the
service account in the list. From the three-dot menu to the right,
select "Manage keys". Add key, key type JSON. Private key is saved to
the computer.
Turn this key into a secret; see tom-2-secrets.yaml in admin/spin.
Encode it into base64 with
    base64 -w 0 <keyfile>
Paste the resulting barf after pitt_google_auth_key.json: | (indented
properly).
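A quick sanity check that the secret round-trips (filenames here are
illustrative):
    base64 -w 0 auth_key.json > encoded.txt    # this goes in the yaml
    base64 -d encoded.txt | diff - auth_key.json && echo ok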
The pitt-google code in brokerpoll2 requires environment variables:
GOOGLE_CLOUD_PROJECT="elasticc-challenge"
GOOGLE_APPLICATION_CREDENTIALS="/secrets/path/to/auth_key.json"
======================================================================
Moving users from one TOM to another:
# THIS DOESN'T WORK
# WARNING. This is hacky. It's not officially documented. It may be
# horribly broken. This is a functionality that really OUGHT to be
# officially documented, but, what can you do.
#
# In fact, it CAN'T work, unless you're lucky, and counting on that is
# very scary. The auth permissions table points to the django_content_type
# table, but that table isn't owned by auth, and will already be
# populated in the new TOM. The permissions table in the auth
# module points to rows of that table, but it's entirely possible that
# the same primary keys don't have the same meaning in the new TOM.
# So, there's no safe way to do this. Very annoying.
#
# Log into the old TOM, and do:
#
# python manage.py dumpdata auth --output /tmp/auth.dump
#
# Get that file off of the old tom:
#
# rancher kubectl cp --namespace desc-tom <pod>:/tmp/auth.dump auth.dump
#
# Push that file to the new tom:
#
# rancher kubectl cp --namespace desc-tom auth.dump <pod>:/tmp/auth.dump
#
# Log into the new TOM and run:
#
# python manage.py loaddata /tmp/auth.dump
#
# ...and it doesn't work
To preserve users when doing this:
pg_dump -a -t auth_user tom_desc > users.sql
Manually edit this to remove the line with the anonymous user.
(I did this in /var/lib/postgresql/data since I could write there. I
then copied it out with
rancher kubectl cp <namespace>/<pod>:/var/lib/postgresql/data/users.sql users.sql
and then to the pod where I wanted to restore with a similar command.)
Then, once the database is set back up, on the postgres pod:
psql -U postgres tom_desc -f users.sql
You will still need to manually recreate groups and permissions and add
users back to them. Right now, that means making the elasticc_brokers
and elasticc_admin groups and adding users to them.
The permissions you have to set (in python manage.py shell):
from django.contrib.auth.models import Permission, User, Group
perm = Permission.objects.get( codename='elasticc_admin' )
group = Group.objects.get( name='elasticc_admin' )
group.permissions.add( perm )
perm = Permission.objects.get( codename='elasticc_broker' )
group = Group.objects.get( name='elasticc_broker' )
group.permissions.add( perm )
(These two permissions are defined somewhere in the permissions for the
elasticc django application.)
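The groups themselves, and users' memberships in them, also have to be
recreated. A sketch (same shell; the username is made up):
    from django.contrib.auth.models import Group, User
    group, created = Group.objects.get_or_create( name='elasticc_admin' )
    user = User.objects.get( username='some_broker_user' )  # hypothetical
    user.groups.add( group )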
======================================================================
Adding a group permission
from django.contrib.auth.models import Group, Permission
perm = Permission.objects.get( codename='elasticc_admin' )
g = Group.objects.get( name='elasticc_admin' )
g.permissions.add( perm )
To show permissions, do g.permissions.all()
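To check that a user actually picked up a group permission (sketch; the
"elasticc" app label and the username are assumptions):
    from django.contrib.auth.models import User
    u = User.objects.get( username='some_user' )  # hypothetical
    u.has_perm( 'elasticc.elasticc_admin' )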
======================================================================
Mounting postgres
I had trouble with my persistent volume claim because postgres couldn't
write to the directory. I created a temp container (with just some
linux image) that mounted the same persistent volume, went in there, and
did a chown of 101:104 (the postgres user and group in my postgres
image) on the persistent mount.
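A sketch of that fixup; the PVC name, pod name, and namespace here are
all made up, substitute the real ones:
    # pvc-fixup.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-fixup
    spec:
      containers:
      - name: shell
        image: debian:stable-slim
        command: [ "sleep", "3600" ]
        volumeMounts:
        - name: pgdata
          mountPath: /mnt/pgdata
      volumes:
      - name: pgdata
        persistentVolumeClaim:
          claimName: tom-postgres-pvc
Then:
    rancher kubectl apply --namespace desc-tom -f pvc-fixup.yaml
    rancher kubectl exec --namespace desc-tom -it pvc-fixup -- chown -R 101:104 /mnt/pgdata
    rancher kubectl delete --namespace desc-tom pod pvc-fixup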
======================================================================
Env vars from the old tom that we might need again:
FINK_GROUP_ID lsstfr-johann
FINK_SERVER 134.158.74.95:24499,
FINK_TOPIC fink_early_sn_candidates_ztf
FINK_USERNAME tom-rknop-dev-postgres-7fc9fb874c-77nj4
GOOGLE_APPLICATION_CREDENTIALS /secrets/GCP_auth_key-pitt_broker_user_project.json
GOOGLE_CLOUD_PROJECT pitt-broker-user-project
There's also the secret
GCP_auth_key-pitt_broker_user_project.json
======================================================================
Making a partial elasticc backup (for testing purposes).
First, you have to make temp tables:
CREATE TABLE temp_diaobject (LIKE elasticc_diaobject);
CREATE TABLE temp_diasource (LIKE elasticc_diasource);
CREATE TABLE temp_diaforcedsource (LIKE elasticc_diaforcedsource);
CREATE TABLE temp_diaalert (LIKE elasticc_diaalert);
CREATE TABLE temp_diaobjecttruth (LIKE elasticc_diaobjecttruth);
CREATE TABLE temp_diatruth (LIKE elasticc_diatruth);
CREATE TABLE temp_brokermessage (LIKE elasticc_brokermessage);
CREATE TABLE temp_brokerclassifier (LIKE elasticc_brokerclassifier);
CREATE TABLE temp_brokerclassification (LIKE elasticc_brokerclassification);
Copy over some objects
INSERT INTO temp_diaobject SELECT * FROM elasticc_diaobject
ORDER BY RANDOM() LIMIT 1000;
INSERT INTO temp_diasource SELECT * FROM elasticc_diasource
WHERE "diaObjectId" IN
( SELECT "diaObjectId" FROM temp_diaobject );
INSERT INTO temp_diaforcedsource SELECT * FROM elasticc_diaforcedsource
WHERE "diaObjectId" IN
( SELECT "diaObjectId" FROM temp_diaobject );
INSERT INTO temp_diaalert SELECT * FROM elasticc_diaalert
WHERE "diaObjectId" IN
( SELECT "diaObjectId" FROM temp_diaobject );
INSERT INTO temp_diaobjecttruth SELECT * FROM elasticc_diaobjecttruth
WHERE "diaObjectId" IN
( SELECT "diaObjectId" FROM temp_diaobject );
INSERT INTO temp_diatruth SELECT * FROM elasticc_diatruth
WHERE "diaObjectId" IN
( SELECT "diaObjectId" FROM temp_diaobject );
INSERT INTO temp_brokerclassifier SELECT * FROM elasticc_brokerclassifier;
INSERT INTO temp_brokermessage SELECT * FROM elasticc_brokermessage
WHERE "diaSourceId" IN
( SELECT "diaSourceId" FROM temp_diasource );
INSERT INTO temp_brokerclassification
SELECT * FROM elasticc_brokerclassification
WHERE "brokerMessageId" IN
( SELECT "brokerMessageId" from temp_brokermessage );
Then dump out the temp tables
pg_dump -h <server> -U postgres --data-only -f elasticc_sample.sql \
-t temp_diaobject -t temp_diasource -t temp_diaforcedsource \
-t temp_diaobjecttruth -t temp_diatruth \
-t temp_diaalert -t temp_brokerclassifier \
-t temp_brokermessage -t temp_brokerclassification \
tom_desc
Delete the temp tables:
DROP TABLE temp_diaobject;
DROP TABLE temp_diasource;
DROP TABLE temp_diaforcedsource;
DROP TABLE temp_diaalert;
DROP TABLE temp_diaobjecttruth;
DROP TABLE temp_diatruth;
DROP TABLE temp_brokermessage;
DROP TABLE temp_brokerclassifier;
DROP TABLE temp_brokerclassification;
Now the painful part. First, pg_dump does *not* dump the tables in the
order you requested, so even though I listed them in an order above that
would allow restoration without complaints about missing foreign keys,
the dump file won't have that. So, you have to emacs the enormous dump
file and make some edits:
* Rename all temp_* tables to elasticc_* (see the sed sketch at the end)
* Order the data for the tables in the order
elasticc_diaobject
elasticc_diasource
elasticc_diaforcedsource
elasticc_diaalert
elasticc_diaobjecttruth
elasticc_diatruth
elasticc_brokerclassifier
elasticc_brokermessage
elasticc_brokerclassification
* Add to the bottom a fix to the three auto sequences that these tables
  use. (NOTE : look at the sequences actually created in the django
  migration to make sure these names are right!!!!!)
SELECT setval('public."elasticc_brokerclassification_classificationId_seq"',
  ( SELECT 1+MAX("classificationId") FROM public.elasticc_brokerclassification ) );
SELECT setval('public."elasticc_brokerclassifier_dbClassifierIndex_seq"',
  ( SELECT 1+MAX("classifierId") FROM public.elasticc_brokerclassifier ) );
SELECT setval('public."elasticc_brokermessage_dbMessageIndex_seq"',
  ( SELECT 1+MAX("brokerMessageId") FROM public.elasticc_brokermessage ) );
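Two pieces of that flow as sketches. The rename (assuming nothing else
in the dump matches "temp_"):
    sed -i 's/temp_/elasticc_/g' elasticc_sample.sql
Reordering the COPY blocks still has to be done by hand. Once the file
is edited, restore it into the test database with something like:
    psql -h <server> -U postgres -f elasticc_sample.sql tom_desc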