---
title: Backing Up Tanzu Kubernetes Grid Integrated Edition
owner: TKGI
---
This topic describes how to use BOSH Backup and Restore (BBR) to back up Kubernetes clusters provisioned by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).
## <a id="overview"></a> Overview
You can use BOSH Backup and Restore (BBR) to back up Kubernetes clusters provisioned by TKGI, including the control plane nodes, etcd database, and worker node VMs.
Kubernetes clusters provisioned by TKGI include custom backup and restore scripts that encapsulate the correct procedure
for backing up and restoring the cluster nodes and etcd database.
BBR orchestrates running the backup and restore scripts and transfers the generated backup artifacts to and from a backup directory.
If configured correctly, BBR can use TLS to communicate securely with backup targets.
To view the BBR release notes, see [BOSH Backup and Restore Release Notes](https://docs.cloudfoundry.org/bbr/bbr-rn.html) in the Cloud Foundry documentation.
## <a id='recs'></a> Recommendations
<%= vars.recommended_by %> recommends:
* Follow the full procedure documented in this topic when creating a backup. This ensures that you always have a consistent backup of Ops Manager and Tanzu Kubernetes Grid Integrated Edition to restore from.
* Back up your Kubernetes clusters frequently, especially before upgrading your Tanzu Kubernetes Grid Integrated Edition deployment.
* For BOSH v270.0 and later (currently in <%= vars.platform_name %> 2.7), prune the BOSH blobstore by running `bosh clean-up --all` before backing up the BOSH Director. This removes all unused resources, including packages compiled against older stemcell versions, and can result in a smaller, faster backup of the BOSH Director. For more information, see the [`clean-up`](https://bosh.io/docs/cli-v2/#clean-up) command.
<p class="note"><strong>Note:</strong> The command <code>bosh clean-up --all</code> is a destructive operation and can remove resources that are unused but still needed. For example, if an On-Demand Service Broker such as Tanzu Kubernetes Grid Integrated Edition is deployed <strong>and</strong> no service instances have been created, the releases needed to create a service instance are categorized as unused and removed.</p>
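Because the clean-up is destructive, one option is to wrap it behind a dry-run guard so the exact command is visible before anything is deleted. This is a minimal sketch, not part of the product: `BOSH_ENV_ALIAS` and the `DRY_RUN` guard are hypothetical names introduced here for illustration.

```shell
# Hedged sketch: guard the destructive clean-up behind a dry run by default.
# BOSH_ENV_ALIAS is a hypothetical alias for your BOSH Director environment.
BOSH_ENV_ALIAS="${BOSH_ENV_ALIAS:-my-bosh}"
DRY_RUN="${DRY_RUN:-1}"    # default to a dry run so nothing is deleted by accident

CLEANUP_CMD="bosh -e ${BOSH_ENV_ALIAS} clean-up --all"

if [ "$DRY_RUN" = "1" ]; then
  echo "DRY RUN: would run: ${CLEANUP_CMD}"
else
  ${CLEANUP_CMD}
fi
```

Running the script with `DRY_RUN=0` executes the real clean-up; anything else only prints the command.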
## <a id="prepare"></a> Prepare to Back Up
<%= partial 'preparing-for-bbr' %>
## <a id='backup'></a> Back Up Tanzu Kubernetes Grid Integrated Edition
To back up your Tanzu Kubernetes Grid Integrated Edition environment, you must first connect to your jump box before running `bbr` backup commands.
### <a id='connect-to-jumpbox'></a> Connect to Your Jump Box
You can establish a connection to your jump box in one of the following ways:
* [Connect with SSH](#ssh)
* [Connect with BOSH_ALL_PROXY](#bosh-all-proxy)
For general information about the jump box, see [Installing BOSH Backup and Restore](bbr-install.html).
#### <a id='ssh'></a> Connect with SSH
To connect to your jump box with SSH, do one of the following:
+ **If you are using the Ops Manager VM as your jump box, log in to the Ops Manager VM.** See
[Log in to the Ops Manager VM with SSH](https://techdocs.broadcom.com/us/en/vmware-tanzu/platform/tanzu-operations-manager/3-0/tanzu-ops-manager/install-trouble-advanced.html#ssh) in _Advanced Troubleshooting with the BOSH CLI_.
<br><br>
+ **If you want to connect to your jump box using the command line, run the following
command:**
```
ssh -i PATH-TO-KEY JUMP-BOX-USERNAME@JUMP-BOX-ADDRESS
```
Where:
* `PATH-TO-KEY` is the local path to your private key for the jump box host.
* `JUMP-BOX-USERNAME` is your jump box user name.
* `JUMP-BOX-ADDRESS` is the address of the jump box.
<p class="note"><strong>Note:</strong> If you connect to your jump box with SSH, you must run the BBR commands
in the following sections from within your jump box.</p>
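Before running the `ssh` command above, it can help to confirm the private key file's permissions, because `ssh` rejects keys that are readable by other users. A minimal sketch (the `mktemp` file stands in for your real jump box key):

```shell
# Hedged sketch: ssh refuses private keys with group/world-readable modes,
# so tighten the key file before connecting. mktemp stands in for your key path.
KEY_FILE="$(mktemp)"
chmod 600 "$KEY_FILE"
# GNU stat uses -c; BSD/macOS stat uses -f, so try both forms.
PERMS="$(stat -c '%a' "$KEY_FILE" 2>/dev/null || stat -f '%Lp' "$KEY_FILE")"
echo "key file mode: $PERMS"
```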
#### <a id='bosh-all-proxy'></a> Connect with BOSH_ALL_PROXY
You can use the `BOSH_ALL_PROXY` environment variable to open an SSH tunnel with SOCKS5 to your jump box.
This tunnel enables you to forward requests from your local machine to the BOSH Director through the jump box.
When `BOSH_ALL_PROXY` is set, BBR always uses its value to forward requests to the BOSH Director.
<p class="note"><strong>Note:</strong>
For the following procedures to work, ensure the SOCKS port is not already in use by a different tunnel or process.</p>
To connect with `BOSH_ALL_PROXY`, do one of the following:
* **If you want to establish the tunnel separate from the BOSH CLI, do the following:**
1. Establish the tunnel and make it available on a local port by running the following command:
```
ssh -4 -D SOCKS-PORT -fNC JUMP-BOX-USERNAME@JUMP-BOX-ADDRESS -i JUMP-BOX-KEY-FILE -o ServerAliveInterval=60
```
Where:
* `SOCKS-PORT` is the local SOCKS port.
* `JUMP-BOX-USERNAME` is your jump box user name.
* `JUMP-BOX-ADDRESS` is the address of the jump box.
* `JUMP-BOX-KEY-FILE` is the local SSH private key for accessing the jump box.
For example:
```console
$ ssh -4 -D 12345 -fNC jumpbox@203.0.113.0 -i jumpbox.key -o ServerAliveInterval=60
```
1. Provide the BOSH CLI with access to the tunnel through `BOSH_ALL_PROXY` by running the following command:
```
export BOSH_ALL_PROXY=socks5://localhost:SOCKS-PORT
```
Where `SOCKS-PORT` is your local SOCKS port.
* **If you want to establish the tunnel using the BOSH CLI, do the following:**
1. Provide the BOSH CLI with the necessary SSH credentials to create the
tunnel by running the following command:
```
export BOSH_ALL_PROXY=ssh+socks5://JUMP-BOX-USERNAME@JUMP-BOX-ADDRESS:SOCKS-PORT?private_key=JUMP-BOX-KEY-FILE
```
Where:
* `JUMP-BOX-USERNAME` is your jump box user name.
* `JUMP-BOX-ADDRESS` is the address of the jump box.
* `SOCKS-PORT` is your local SOCKS port.
* `JUMP-BOX-KEY-FILE` is the local SSH private key for accessing the jump box.
For example:
```console
$ export BOSH_ALL_PROXY=ssh+socks5://jumpbox@203.0.113.0:12345?private_key=jumpbox.key
```
<p class="note"><strong>Note:</strong> Using <code>BOSH_ALL_PROXY</code> can result in longer
backup and restore times because of network performance degradation. All operations must pass
through the proxy, which means moving backup artifacts can be significantly slower.</p>
<div class="note warning"><strong>Warning:</strong> In BBR v1.5.0 and earlier,
the tunnel created by the BOSH CLI does not include the <code>ServerAliveInterval</code> flag.
This might result in your SSH connection timing out when transferring large artifacts.
In BBR v1.5.1, the <code>ServerAliveInterval</code> flag is included.
For more information,
see <a href="https://github.com/cloudfoundry-incubator/bosh-backup-and-restore/releases/tag/v1.5.1">bosh-backup-and-restore v1.5.1</a> on GitHub.
</div>
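One practical detail when setting the `ssh+socks5` form of `BOSH_ALL_PROXY`: the `?` in the URL can be interpreted as a glob character by some shells, so quoting the value is a safe habit. A minimal sketch using the placeholder values from the examples above:

```shell
# Quote the URL so the shell does not treat '?' as a glob character.
# All values below are placeholders from the examples above.
export BOSH_ALL_PROXY='ssh+socks5://jumpbox@203.0.113.0:12345?private_key=jumpbox.key'
echo "$BOSH_ALL_PROXY"
```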
### <a id='back-up-clusters'></a> Back Up Kubernetes Clusters Provisioned by TKGI
Before backing up your TKGI cluster deployments, verify that they can be backed up.
#### <a id='verify-deployments'></a> Verify Your Provisioned Clusters
To verify that you can reach your TKGI cluster deployments and that the deployments can be backed up, follow the steps below.
1. SSH into your jump box. For more information about the jump box, see
[Configure Your Jump Box](bbr-install.html#jumpbox-setup) in _Installing BOSH Backup and Restore_.
1. To perform the BBR pre-backup check, run the following command from your jump box:
```
BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET bbr deployment \
--all-deployments --target BOSH-TARGET --username TKGI-UAA-CLIENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
pre-backup-check
```
Where:
* `TKGI-UAA-CLIENT-SECRET` is the value you recorded for `uaa_client_secret` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `BOSH-TARGET` is the value you recorded for the BOSH Director's address in
[Retrieve the BOSH Director Address](#bosh-address) above.
You must be able to reach the target address from the machine where you run `bbr` commands.
* `TKGI-UAA-CLIENT-NAME` is the value you recorded for `uaa_client_name` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in
[Download or Locate Root CA Certificate](#root-ca-cert) above.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd bbr deployment \
--all-deployments --target bosh.example.com --username pivotal-container-service-12345abcdefghijklmn \
--ca-cert /var/tempest/workspaces/default/root_ca_certificate \
pre-backup-check
```
1. If the `pre-backup-check` command succeeds, it returns a list of cluster
deployments that can be backed up.
<br>
For example:
```console
[21:51:23] Pending: service-instance_abcdeg-1234-5678-hijk-90101112131415
[21:51:23] -------------------------
[21:51:31] Deployment 'service-instance_abcdeg-1234-5678-hijk-90101112131415' can be backed up.
[21:51:31] -------------------------
[21:51:31] Successfully can be backed up: service-instance_abcdeg-1234-5678-hijk-90101112131415
```
In the output above, `service-instance_abcdeg-1234-5678-hijk-90101112131415` is the
BOSH deployment name of a TKGI cluster.
1. If the `pre-backup-check` command fails, do one or more of the following:
* Make sure you are using the correct Tanzu Kubernetes Grid Integrated Edition credentials.
* Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
* Make the changes suggested in the output and run the pre-backup check again. For example,
the deployments might be missing the required backup scripts, or the connection to
the BOSH Director might have failed.
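Because `bbr` signals failure through its exit status, you can script the check so a backup only proceeds when the pre-check passes. A hedged sketch, with the real `bbr deployment ... pre-backup-check` invocation replaced by a stub so the control flow is visible:

```shell
# run_precheck stands in for the real 'bbr deployment ... pre-backup-check'
# command shown above; it is stubbed here for illustration only.
run_precheck() {
  return 0   # replace with the real bbr invocation
}

if run_precheck; then
  PRECHECK_STATUS="ok"
  echo "pre-backup check passed; safe to proceed with the backup"
else
  PRECHECK_STATUS="failed"
  echo "pre-backup check failed; re-run with --debug" >&2
fi
```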
#### <a id='back-up-clusters-back-up'></a> Back Up Kubernetes Clusters Provisioned by TKGI
When backing up your TKGI clusters, you can choose to back up a single cluster or all cluster deployments in scope.
To do so, follow one of these procedures:
* [Back up All Cluster Deployments](#back-up-all)
* [Back Up One Cluster Deployment](#back-up-one)
##### <a id='back-up-all'></a> Back Up All Kubernetes Clusters
The following procedure backs up all cluster deployments.
Make sure you use the TKGI UAA credentials that you recorded in
[Download the UAA Client Credentials](#cluster-creds).
These credentials limit the scope of the back up to cluster deployments only.
<p class="note"><strong>Note</strong>: The BBR backup command can take a long time to complete.
You can run it independently of the SSH session so that the process continues running even if
your connection to the jump box fails.
The command below uses <code>nohup</code>, but you could also run the command in a
<code>screen</code> or <code>tmux</code> session.</p>
1. To back up all cluster deployments, run the following command from your jump box:
```
BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET nohup bbr deployment \
--all-deployments --target BOSH-TARGET --username TKGI-UAA-CLIENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
backup [--with-manifest] [--artifact-path]
```
Where:
* `TKGI-UAA-CLIENT-SECRET` is the value you recorded for `uaa_client_secret` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `BOSH-TARGET` is the value you recorded for the BOSH Director's address in
[Retrieve the BOSH Director Address](#bosh-address) above.
You must be able to reach the target address from the machine where you run `bbr` commands.
* `TKGI-UAA-CLIENT-NAME` is the value you recorded for `uaa_client_name` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in
[Download the Root CA Certificate](#root-ca-cert) above.
* `--with-manifest` is an optional `backup` parameter to include the manifest in the backup artifact.
If you use this flag, secure the backup artifact because it contains secret credentials.
* `--artifact-path` is an optional `backup` parameter to specify the output path for the backup artifact.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd \
nohup bbr deployment \
--all-deployments \
--target bosh.example.com \
--username pivotal-container-service-12345abcdefghijklmn \
--ca-cert /var/tempest/workspaces/default/root_ca_certificate \
backup
```
<p class="note"><strong>Note</strong>: The optional <code>--with-manifest</code> flag directs BBR to create a backup
containing credentials. Manage the generated backup artifact knowing it contains secrets for administering
your environment.</p>
1. If the `backup` command completes successfully, follow the steps in [Manage Your Backup Artifact](#good-practices) below.
1. If the backup command fails, the backup operation exits. BBR does not attempt to back up any
remaining clusters. To troubleshoot a failing backup, do one or more of the following:
* Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
* Follow the steps in [Recover from a Failing Command](#recover-from-failing-command) below.
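Since the note above suggests `nohup`, one way to keep the output reviewable is to redirect it to a timestamped log file and detach. A minimal sketch, with a placeholder command standing in for the real `bbr ... backup` invocation:

```shell
# The sh -c '...' below stands in for the real bbr backup command.
LOG_FILE="bbr-backup-$(date +%Y%m%dT%H%M%S).log"
nohup sh -c 'echo "backup started"; echo "backup finished"' > "$LOG_FILE" 2>&1 &
BBR_PID=$!
# In practice you would disconnect here and inspect the log later; for this
# sketch, wait for the background job to finish.
wait "$BBR_PID"
tail -n 1 "$LOG_FILE"
```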
##### <a id='back-up-one'></a> Back Up a Single Kubernetes Cluster
1. To back up a single, specific cluster deployment, run the following command from your jump box:
```
BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET \
nohup bbr deployment \
--deployment CLUSTER-DEPLOYMENT-NAME \
--target BOSH-TARGET \
--username TKGI-UAA-CLIENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
backup [--with-manifest] [--artifact-path]
```
Where:
* `TKGI-UAA-CLIENT-SECRET` is the value you recorded for `uaa_client_secret` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `CLUSTER-DEPLOYMENT-NAME` is the value you recorded in
[Retrieve your Cluster Deployment Name](#cluster-deployment-name) above.
* `BOSH-TARGET` is the value you recorded for the BOSH Director's address in
[Retrieve the BOSH Director Address](#bosh-address) above. You must be able to reach the
target address from the machine where you run `bbr` commands.
* `TKGI-UAA-CLIENT-NAME` is the value you recorded for `uaa_client_name` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in
[Download the Root CA Certificate](#root-ca-cert) above.
* `--with-manifest` is an optional `backup` parameter to include the manifest in the backup artifact.
If you use this flag, secure the backup artifact because it contains secret credentials.
* `--artifact-path` is an optional `backup` parameter to specify the output path for the backup artifact.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd nohup bbr deployment \
--deployment service-instance_abcdeg-1234-5678-hijk-90101112131415 \
--target bosh.example.com --username pivotal-container-service-12345abcdefghijklmn \
--ca-cert /var/tempest/workspaces/default/root_ca_certificate \
backup
```
<p class="note"><strong>Note</strong>: The optional <code>--with-manifest</code> flag directs BBR to create a backup
containing credentials. Manage the generated backup artifact knowing it contains secrets for administering
your environment.</p>
1. If the `backup` command completes successfully, follow the steps in [Manage Your Backup Artifact](#good-practices) below.
1. If the backup command fails, do one or more of the following:
* Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
* Follow the steps in [Recover from a Failing Command](#recover-from-failing-command) below.
### <a id='cancel-backup'></a> Cancel a Cluster Backup
Backing up can take a long time. If you realize that the backup is going to fail or that
your developers need to push an app immediately, you might need to cancel the backup.
To cancel a backup, perform the following steps:
1. Terminate the BBR process by pressing Ctrl-C and typing `yes` to confirm.
1. Because stopping a backup can leave the system in an unusable state and prevent additional
backups, follow the procedures in [Clean Up After a Failed Backup](#manual-clean) below.
## <a id="after-backup"></a> After Backing Up Tanzu Kubernetes Grid Integrated Edition
After the backup has completed, review and manage the generated backup artifacts.
### <a id="good-practices"></a> Manage Your Backup Artifact
The BBR-created backup consists of a directory containing the backup artifacts and metadata files.
BBR stores each completed backup directory within the current working directory.
<p class="note"><strong>Note</strong>: The optional <code>--with-manifest</code> flag directs BBR to create a backup
containing credentials. Manage the generated backup artifact knowing it contains secrets for administering
your environment.</p>
BBR backup artifact directories are named using the following formats:
* `DIRECTOR-IP-TIMESTAMP` for the BOSH Director backups.
* `DEPLOYMENT-TIMESTAMP` for the Control Plane backup.
* `DEPLOYMENT-TIMESTAMP` for the cluster deployment backups.
Keep your backup artifacts safe by following these steps:
1. Move the backup artifacts off the jump box to your storage space.
1. Compress and encrypt the backup artifacts when storing them.
1. Make redundant copies of your backup and store them in multiple locations. This minimizes the
risk of losing your backups in the event of a disaster.
1. Each time you redeploy Tanzu Kubernetes Grid Integrated Edition, test your backup artifact by following the procedures in:
* [Restore the Tanzu Kubernetes Grid Integrated Edition BOSH Director](bbr-restore.html#redeploy-restore-director)
* [Restore the Tanzu Kubernetes Grid Integrated Edition Control Plane](bbr-restore.html#redeploy-restore-control-plane)
* [Restore Tanzu Kubernetes Grid Integrated Edition Clusters](bbr-restore.html#redeploy-restore-clusters)
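The "compress and encrypt" step above can be sketched as follows: archive the artifact directory and record a checksum so copies can be verified later (encryption, for example with `gpg`, would follow the same pattern). The directory and file names below are placeholders, and the sketch creates its own stand-in directory:

```shell
# Create a stand-in artifact directory; in practice BBR creates this for you.
ARTIFACT_DIR="service-instance_example-20240101T000000Z"
mkdir -p "$ARTIFACT_DIR"
echo "demo metadata" > "$ARTIFACT_DIR/metadata"

# Archive the directory and record a checksum for later verification.
# GNU systems ship sha256sum; macOS ships shasum, so try both.
tar czf "$ARTIFACT_DIR.tgz" "$ARTIFACT_DIR"
( sha256sum "$ARTIFACT_DIR.tgz" 2>/dev/null || shasum -a 256 "$ARTIFACT_DIR.tgz" ) \
  > "$ARTIFACT_DIR.tgz.sha256"
```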
### <a id="recover-from-failing-command"></a> Recover from a Failing Command
If the backup fails, follow these steps:
1. Ensure that you set all the parameters in the backup command.
1. Ensure that the credentials you obtained previously are still valid.
1. Ensure that the deployment you specify in the BBR command exists.
1. Ensure that the jump box can reach the BOSH Director.
1. Consult [BBR Logging](bbr-logging.html).
1. If you see the error message `Directory /var/vcap/store/bbr-backup already exists on instance`,
run the appropriate cleanup command. For more information, see [Clean Up After a Failed Backup](#manual-clean) below.
1. If the backup artifact is corrupted, discard the failing artifacts and run the backup again.
### <a id="manual-clean"></a> Clean Up After a Failed Backup
If your backup process fails, use the BBR cleanup script to clean up the failed run.
<p class="note warning"><strong>Warning</strong>: It is important to run the BBR cleanup script after a
failed BBR backup run. A failed backup run might leave the BBR backup directory on the instance,
causing any subsequent backup attempts to fail. In addition, BBR might not have run the post-backup scripts,
leaving the instance in a locked state.</p>
* If the TKGI BOSH Director backup failed, run the following BBR cleanup script command to clean up:
```
bbr director --host BOSH-DIRECTOR-IP \
--username bbr --private-key-path PRIVATE-KEY-FILE \
backup-cleanup
```
Where:
* `BOSH-DIRECTOR-IP` is the address of the BOSH Director. If the BOSH Director is public,
`BOSH-DIRECTOR-IP` is a URL, such as `https://my-bosh.xxx.cf-app.com`. Otherwise, it is the internal
IP address of the BOSH Director, which you can retrieve as shown in [Retrieve the BOSH Director Address](#bosh-address) above.
* `PRIVATE-KEY-FILE` is the path to the private key file that you can create from `Bbr Ssh Credentials` as shown in
[Download the BBR SSH Credentials](#bbr-ssh-creds) above.
For example:
```console
$ bbr director --host 10.0.0.5 --username bbr \
--private-key-path private-key.pem \
backup-cleanup
```
* If backing up the TKGI control plane or TKGI clusters fails, run the following BBR cleanup script command to clean up:
```
BOSH_CLIENT_SECRET=BOSH-CLIENT-SECRET \
bbr deployment \
--target BOSH-TARGET \
--username BOSH-CLIENT \
--deployment DEPLOYMENT-NAME \
--ca-cert PATH-TO-BOSH-CA-CERT \
backup-cleanup
```
Where:
* `BOSH-CLIENT-SECRET` is your BOSH client secret. If you do not know your BOSH Client Secret,
open your BOSH Director tile, navigate to **Credentials > Bosh Commandline Credentials** and
record the value for `BOSH_CLIENT_SECRET`.
* `BOSH-TARGET` is your BOSH Environment setting. If you do not know your BOSH Environment setting,
open your BOSH Director tile, navigate to **Credentials > Bosh Commandline Credentials** and
record the value for `BOSH_ENVIRONMENT`. You must be able to reach the target address from the
workstation where you run `bbr` commands.
* `BOSH-CLIENT` is your BOSH Client Name. If you do not know your BOSH Client Name, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_CLIENT`.
* `DEPLOYMENT-NAME` is the Tanzu Kubernetes Grid Integrated Edition BOSH deployment name that you located in
the [Locate the Tanzu Kubernetes Grid Integrated Edition Deployment Names](#locate-deploy-name) section above.
* `PATH-TO-BOSH-CA-CERT` is the path to the root CA certificate that you downloaded in
[Download the Root CA Certificate](#root-ca-cert) above.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd bbr deployment \
--target bosh.example.com --username admin --deployment cf-acceptance-0 \
--ca-cert bosh.ca.crt \
backup-cleanup
```
If the cleanup script fails, consult the following table to match the exit codes to an error
message.
<table>
<tr>
<th>Value</th>
<th>Error</th>
</tr>
<tr>
<td>0</td>
<td>Success</td>
</tr>
<tr>
<td>1</td>
<td>General failure</td>
</tr>
<tr>
<td>8</td>
<td>The post-backup unlock failed. One of your deployments might be in a bad state and require
attention.</td>
</tr>
<tr>
<td>16</td>
<td>The cleanup failed. This is a non-fatal error indicating that the utility has been unable
to clean up open BOSH SSH connections to a deployment's VMs. Manual cleanup might be required
to clear any hanging BOSH users and connections.</td>
</tr>
</table>
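Assuming the non-zero codes in the table can combine bitwise into a single exit status (for example, 24 when both the unlock and cleanup fail), a wrapper script can test each flag rather than compare exact values. A hedged sketch:

```shell
# Assumes the exit codes combine bitwise (e.g. 24 = 8 + 16); check each flag.
EXIT_CODE=24   # example value standing in for a failed cleanup run's status
MESSAGES=""
if [ $((EXIT_CODE & 8)) -ne 0 ]; then
  MESSAGES="${MESSAGES}post-backup-unlock-failed "
fi
if [ $((EXIT_CODE & 16)) -ne 0 ]; then
  MESSAGES="${MESSAGES}cleanup-failed "
fi
echo "$MESSAGES"
```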