output.log
nohup: ignoring input
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/home/jgib124/aurora/aurora/a_env/lib/python3.12/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:654: Checkpoint directory /home/jgib124/aurora/aurora/gfs_converter_ckpt exists and is not empty.
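The warning above comes from PyTorch Lightning's ModelCheckpoint callback, which warns (rather than fails) when its target directory already contains files from an earlier run. A minimal sketch of a configuration consistent with the warning; only the dirpath is visible in the log, and the monitored metric is an assumption:

    from pytorch_lightning.callbacks import ModelCheckpoint

    # Points at the directory named in the warning above.
    checkpoint_cb = ModelCheckpoint(
        dirpath="/home/jgib124/aurora/aurora/gfs_converter_ckpt",
        monitor="val_total_loss",  # assumption: the validation metric logged below
        save_top_k=1,
    )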
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Processed GFS Data for this time range found
ERA5 Data for this time range found
Memory Available before DataLoader: 51.01 GB
Memory after Preparing Data: 51.01 GB
GFS Size: 85
Memory after Data Setup: 51.01 GB
Memory Available after DataLoader: 51.01 GB
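The "Memory" lines report free system RAM in GB at several checkpoints. A plausible way such lines are produced (psutil shown as a stand-in; the repo's actual helper is not visible in the log):

    import psutil

    def log_available_memory(tag: str) -> None:
        # virtual_memory().available is in bytes; convert to GiB.
        gib = psutil.virtual_memory().available / 2**30
        print(f"Memory {tag}: {gib:.2f} GB")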
Input Channels: 85 Output Channels: 69
Input Shape: torch.Size([85, 721, 1440]) Output Shape: torch.Size([69, 721, 1440])
Memory before Lightning Module: 51.01 GB
====================================================================================================
Layer (type:depth-idx)                             Output Shape              Param #
====================================================================================================
GFSUnbiaser                                        [1, 69, 721, 1440]        --
├─Encoder: 1-1                                     [1, 256, 180, 360]        --
│    └─ModuleList: 2-3                             --                        (recursive)
│    │    └─Conv2DBlock: 3-1                       [1, 128, 721, 1440]       265,891,584
│    └─MaxPool2d: 2-2                              [1, 128, 360, 720]        --
│    └─ModuleList: 2-3                             --                        (recursive)
│    │    └─Conv2DBlock: 3-2                       [1, 256, 360, 720]        133,021,952
│    └─MaxPool2d: 2-4                              [1, 256, 180, 360]        --
├─Decoder: 1-2                                     [1, 128, 721, 1440]       --
│    └─ModuleList: 2-7                             --                        (recursive)
│    │    └─ConvTranspose2d: 3-3                   [1, 256, 360, 720]        262,400
│    └─ModuleList: 2-8                             --                        (recursive)
│    │    └─Conv2DBlock: 3-4                       [1, 128, 360, 720]        66,654,336
│    └─ModuleList: 2-7                             --                        (recursive)
│    │    └─ConvTranspose2d: 3-5                   [1, 128, 720, 1440]       65,664
│    └─ModuleList: 2-8                             --                        (recursive)
│    │    └─Conv2DBlock: 3-6                       [1, 128, 721, 1440]       265,941,120
├─Conv2d: 1-3                                      [1, 69, 721, 1440]        8,901
====================================================================================================
Total params: 731,845,957
Trainable params: 731,845,957
Non-trainable params: 0
Total mult-adds (Units.GIGABYTES): 554.07
====================================================================================================
Input size (MB): 353.00
Forward/backward pass size (MB): 8010.79
Params size (MB): 2927.38
Estimated Total Size (MB): 11291.18
====================================================================================================
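This table is in the format produced by the torchinfo package, e.g. summary(model, input_size=(1, 85, 721, 1440)); the reported input size of 353.00 MB matches 85·721·1440 float32 values (353,001,600 bytes). Two layer types can be checked directly against their parameter counts: each ConvTranspose2d doubles the spatial resolution with a 2×2 kernel and stride 2 (256·256·2·2 + 256 = 262,400 and 128·128·2·2 + 128 = 65,664, matching rows 3-3 and 3-5), and the output head is a 1×1 convolution from 128 features to the 69 output channels (69·(128 + 1) = 8,901, matching row 1-3). A sketch of those verified pieces; the Conv2DBlock internals cannot be recovered from the counts alone and are omitted:

    import torch.nn as nn

    # Decoder upsampling stages: 2x2 transposed conv, stride 2.
    up1 = nn.ConvTranspose2d(256, 256, kernel_size=2, stride=2)  # 262,400 params
    up2 = nn.ConvTranspose2d(128, 128, kernel_size=2, stride=2)  #  65,664 params

    # Output head: 1x1 conv mapping 128 features to 69 output channels.
    head = nn.Conv2d(128, 69, kernel_size=1)  # 8,901 params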
Memory after Lightning Module: 48.07 GB
Starting Training...
Memory after Preparing Data: 48.07 GB
GFS Size: 85
Memory after Data Setup: 48.07 GB
┏━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┓
┃ ┃ Name ┃ Type ┃ Params ┃ Mode ┃
┡━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━┩
│ 0 │ model │ GFSUnbiaser │ 731 M │ train │
│ 1 │ loss_fxn │ Loss │ 0 │ train │
└───┴──────────┴─────────────┴────────┴───────┘
Trainable params: 731 M
Non-trainable params: 0
Total params: 731 M
Total estimated model params size (MB): 2.9 K
Modules in train mode: 60
Modules in eval mode: 0
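A minimal Trainer configuration consistent with the startup messages and the stop message below (single CUDA device, 25 epochs); the module and datamodule names are assumptions, and checkpoint_cb refers to the ModelCheckpoint sketch above:

    import pytorch_lightning as pl

    trainer = pl.Trainer(
        max_epochs=25,              # matches the `max_epochs=25` stop message below
        accelerator="gpu",          # "GPU available: True (cuda), used: True"
        devices=1,                  # "CUDA_VISIBLE_DEVICES: [0]"
        callbacks=[checkpoint_cb],  # ModelCheckpoint from the warning above
    )
    # trainer.fit(lightning_module, datamodule=data_module)  # hypothetical names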
Epoch 0/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:08 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.315 train_pixel_loss: 0.250 train_total_loss: 0.565 val_ssim_loss: 0.315 val_pixel_loss: 0.250 val_total_loss: 0.565
Epoch 1/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:09 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.290 train_pixel_loss: 0.240 train_total_loss: 0.530 val_ssim_loss: 0.290 val_pixel_loss: 0.241 val_total_loss: 0.531
Epoch 2/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:11 • 0:00:00 0.20it/s v_num: 2.000 train_ssim_loss: 0.288 train_pixel_loss: 0.239 train_total_loss: 0.527 val_ssim_loss: 0.288 val_pixel_loss: 0.240 val_total_loss: 0.528
Epoch 3/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:12 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.286 train_pixel_loss: 0.238 train_total_loss: 0.525 val_ssim_loss: 0.287 val_pixel_loss: 0.239 val_total_loss: 0.526
Epoch 4/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:07 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.286 train_pixel_loss: 0.238 train_total_loss: 0.524 val_ssim_loss: 0.287 val_pixel_loss: 0.239 val_total_loss: 0.526
Epoch 5/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:12 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.286 train_pixel_loss: 0.239 train_total_loss: 0.525 val_ssim_loss: 0.287 val_pixel_loss: 0.239 val_total_loss: 0.526
Epoch 6/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:12 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.267 train_pixel_loss: 0.235 train_total_loss: 0.502 val_ssim_loss: 0.268 val_pixel_loss: 0.235 val_total_loss: 0.504
Epoch 7/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:13 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.270 train_pixel_loss: 0.238 train_total_loss: 0.508 val_ssim_loss: 0.268 val_pixel_loss: 0.235 val_total_loss: 0.503
Epoch 8/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:08 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.268 train_pixel_loss: 0.237 train_total_loss: 0.505 val_ssim_loss: 0.265 val_pixel_loss: 0.234 val_total_loss: 0.500
Epoch 9/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:09 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.265 train_pixel_loss: 0.236 train_total_loss: 0.501 val_ssim_loss: 0.263 val_pixel_loss: 0.235 val_total_loss: 0.498
Epoch 10/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:14 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.260 train_pixel_loss: 0.235 train_total_loss: 0.495 val_ssim_loss: 0.259 val_pixel_loss: 0.233 val_total_loss: 0.492
Epoch 11/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:14 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.258 train_pixel_loss: 0.235 train_total_loss: 0.492 val_ssim_loss: 0.256 val_pixel_loss: 0.232 val_total_loss: 0.488
Epoch 12/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:11 • 0:00:00 0.20it/s v_num: 2.000 train_ssim_loss: 0.256 train_pixel_loss: 0.233 train_total_loss: 0.490 val_ssim_loss: 0.255 val_pixel_loss: 0.233 val_total_loss: 0.488
Epoch 13/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:11 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.253 train_pixel_loss: 0.231 train_total_loss: 0.485 val_ssim_loss: 0.254 val_pixel_loss: 0.231 val_total_loss: 0.485
Epoch 14/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:12 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.254 train_pixel_loss: 0.233 train_total_loss: 0.487 val_ssim_loss: 0.253 val_pixel_loss: 0.231 val_total_loss: 0.484
Epoch 15/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:08 • 0:00:00 0.15it/s v_num: 2.000 train_ssim_loss: 0.252 train_pixel_loss: 0.232 train_total_loss: 0.483 val_ssim_loss: 0.251 val_pixel_loss: 0.230 val_total_loss: 0.481
Epoch 16/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:09 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.249 train_pixel_loss: 0.229 train_total_loss: 0.478 val_ssim_loss: 0.250 val_pixel_loss: 0.230 val_total_loss: 0.480
Epoch 17/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:09 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.248 train_pixel_loss: 0.230 train_total_loss: 0.478 val_ssim_loss: 0.248 val_pixel_loss: 0.231 val_total_loss: 0.479
Epoch 18/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:08 • 0:00:00 0.17it/s v_num: 2.000 train_ssim_loss: 0.246 train_pixel_loss: 0.229 train_total_loss: 0.475 val_ssim_loss: 0.246 val_pixel_loss: 0.229 val_total_loss: 0.475
Epoch 19/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:09 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.245 train_pixel_loss: 0.230 train_total_loss: 0.475 val_ssim_loss: 0.245 val_pixel_loss: 0.229 val_total_loss: 0.474
Epoch 20/24 ━━━━━━━━━━━━━━━━ 108/108 0:11:53 • 0:00:00 0.20it/s v_num: 2.000 train_ssim_loss: 0.242 train_pixel_loss: 0.229 train_total_loss: 0.471 val_ssim_loss: 0.242 val_pixel_loss: 0.230 val_total_loss: 0.472
Epoch 21/24 ━━━━━━━━━━━━━━━━ 108/108 0:11:55 • 0:00:00 0.20it/s v_num: 2.000 train_ssim_loss: 0.241 train_pixel_loss: 0.229 train_total_loss: 0.471 val_ssim_loss: 0.241 val_pixel_loss: 0.229 val_total_loss: 0.471
Epoch 22/24 ━━━━━━━━━━━━━━━━ 108/108 0:11:49 • 0:00:00 0.21it/s v_num: 2.000 train_ssim_loss: 0.240 train_pixel_loss: 0.229 train_total_loss: 0.469 val_ssim_loss: 0.241 val_pixel_loss: 0.229 val_total_loss: 0.470
Epoch 23/24 ━━━━━━━━━━━━━━━━ 108/108 0:11:50 • 0:00:00 0.20it/s v_num: 2.000 train_ssim_loss: 0.239 train_pixel_loss: 0.229 train_total_loss: 0.468 val_ssim_loss: 0.240 val_pixel_loss: 0.229 val_total_loss: 0.469
`Trainer.fit` stopped: `max_epochs=25` reached.
Epoch 24/24 ━━━━━━━━━━━━━━━━ 108/108 0:12:10 • 0:00:00 0.18it/s v_num: 2.000 train_ssim_loss: 0.239 train_pixel_loss: 0.228 train_total_loss: 0.467 val_ssim_loss: 0.240 val_pixel_loss: 0.229 val_total_loss: 0.468
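In every epoch the total is the plain sum of the two components, up to display rounding (e.g. epoch 0: 0.315 + 0.250 = 0.565; epoch 24: 0.239 + 0.228 = 0.467), which suggests an unweighted combined objective. A sketch of such a Loss module; the repo's actual SSIM and pixel-loss implementations are not shown in the log, so torchmetrics and L1 are stand-in assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchmetrics.functional import structural_similarity_index_measure

    class Loss(nn.Module):
        # Unweighted sum of an SSIM term and a per-pixel term (assumed L1).
        def forward(self, pred: torch.Tensor, truth: torch.Tensor) -> dict:
            ssim_loss = 1.0 - structural_similarity_index_measure(pred, truth)
            pixel_loss = F.l1_loss(pred, truth)
            return {
                "ssim_loss": ssim_loss,
                "pixel_loss": pixel_loss,
                "total_loss": ssim_loss + pixel_loss,
            }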
/home/jgib124/aurora/aurora/inference/generate_outputs.py:534: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
  plt.tight_layout()
[previous warning repeated 12 more times]
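The UserWarning means at least one Axes in the figure (often a colorbar or an axes created by a toolkit helper) does not participate in tight_layout's size negotiation. A common fix, assuming the figures are built with plt.subplots, is to opt into constrained layout instead of calling tight_layout():

    import matplotlib.pyplot as plt

    # constrained_layout handles colorbars and other "incompatible" Axes
    # that tight_layout() warns about; no tight_layout() call is needed.
    fig, axes = plt.subplots(1, 3, figsize=(15, 4), constrained_layout=True)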
Batch 0
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.928704738616943
Input Std: 12.729884147644043
Pred Mean: 7.200839996337891
Pred Std: 12.909390449523926
Truth Mean: 7.013223648071289
Truth Std: 12.864507675170898
Tensor Range: -32.159336 62.840664
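Each batch block reports the full tensor shapes plus mean/std statistics for a single 721×1440 channel slice. A sketch of diagnostics code that would produce these lines; the channel index and function names are assumptions:

    import torch

    def report_batch(batch_idx: int, inp: torch.Tensor, truth: torch.Tensor,
                     pred: torch.Tensor, channel: int = 0) -> None:
        print(f"Batch {batch_idx}")
        for name, t in [("Input", inp), ("Truth", truth), ("Pred", pred)]:
            print(f"{name}: {type(t)}, {t.shape}")
        # Drop the batch dim and pick one channel for the slice statistics.
        for name, t in [("Input", inp), ("Truth", truth), ("Pred", pred)]:
            s = t[0, channel]
            print(f"{name} Slice: {type(s)}, {s.shape}")
            print(f"{name} Mean: {s.mean().item()}")
            print(f"{name} Std: {s.std().item()}")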
Batch 1
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.991284370422363
Input Std: 12.548856735229492
Pred Mean: 7.23396635055542
Pred Std: 12.78748607635498
Truth Mean: 7.069414138793945
Truth Std: 12.603857040405273
Tensor Range: -31.908676 61.791325
Batch 2
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.344651222229004
Input Std: 12.957876205444336
Pred Mean: 6.835390090942383
Pred Std: 12.54433822631836
Truth Mean: 6.377764701843262
Truth Std: 12.993325233459473
Tensor Range: -41.00444 72.375565
Batch 3
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.79832124710083
Input Std: 12.744214057922363
Pred Mean: 7.103590965270996
Pred Std: 12.757564544677734
Truth Mean: 6.853029251098633
Truth Std: 12.88857650756836
Tensor Range: -29.870378 64.38962
Batch 4
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.281869411468506
Input Std: 12.89881420135498
Pred Mean: 6.921074390411377
Pred Std: 12.673696517944336
Truth Mean: 6.3212690353393555
Truth Std: 13.017271995544434
Tensor Range: -31.290913 65.23909
Batch 5
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.9442009925842285
Input Std: 12.114151000976562
Pred Mean: 7.323346138000488
Pred Std: 12.309643745422363
Truth Mean: 6.976165294647217
Truth Std: 12.221766471862793
Tensor Range: -25.190966 61.009033
Batch 6
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 7.1056413650512695
Input Std: 12.188621520996094
Pred Mean: 7.348072052001953
Pred Std: 12.446011543273926
Truth Mean: 7.140449523925781
Truth Std: 12.369239807128906
Tensor Range: -26.60715 59.392853
Batch 7
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 7.077643871307373
Input Std: 11.962817192077637
Pred Mean: 7.3076395988464355
Pred Std: 12.184202194213867
Truth Mean: 7.156251907348633
Truth Std: 12.117000579833984
Tensor Range: -27.242271 68.387726
Batch 8
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.307222366333008
Input Std: 12.951495170593262
Pred Mean: 7.031097888946533
Pred Std: 13.102229118347168
Truth Mean: 6.461600303649902
Truth Std: 13.062854766845703
Tensor Range: -37.522354 58.457645
Batch 9
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.4545063972473145
Input Std: 12.851933479309082
Pred Mean: 6.912437438964844
Pred Std: 12.986324310302734
Truth Mean: 6.516557216644287
Truth Std: 13.066296577453613
Tensor Range: -28.933878 63.96612
Batch 10
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.434031009674072
Input Std: 12.986695289611816
Pred Mean: 7.096009731292725
Pred Std: 13.133240699768066
Truth Mean: 6.496395587921143
Truth Std: 13.07471752166748
Tensor Range: -35.891983 70.21802
Batch 11
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.937664985656738
Input Std: 12.176424026489258
Pred Mean: 7.170034408569336
Pred Std: 12.263463973999023
Truth Mean: 7.004157066345215
Truth Std: 12.333709716796875
Tensor Range: -32.52426 57.075737
Batch 12
Input: <class 'torch.Tensor'>, torch.Size([1, 85, 721, 1440])
Truth: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Pred: <class 'torch.Tensor'>, torch.Size([1, 69, 721, 1440])
Input Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Truth Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Pred Slice: <class 'torch.Tensor'>, torch.Size([721, 1440])
Input Mean: 6.885969161987305
Input Std: 12.533417701721191
Pred Mean: 7.186278820037842
Pred Std: 12.782161712646484
Truth Mean: 7.041639804840088
Truth Std: 12.601737022399902
Tensor Range: -34.447243 67.25276