\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{titlesec}
\usepackage{hanging}
\usepackage{indentfirst}
\usepackage{setspace}
\usepackage{float}
\usepackage{multirow}
\usepackage{mathrsfs}
\usepackage{caption}
\usepackage{tocbasic}
\usepackage[toc,page]{appendix}
\usepackage[table,xcdraw]{xcolor}
\usepackage{longtable}
\usepackage{listings}
\usepackage{amsmath}
\DeclareTOCStyleEntry[beforeskip=.1em plus 1pt, pagenumberformat=\textbf]{tocline}{section}
\usepackage{adjustbox}
\setlength{\parindent}{4em}
\setlength{\parskip}{0.5em}
\usepackage{array}
\usepackage{hyperref}
\hypersetup{
colorlinks=false,
linkcolor=black,
filecolor=black,
urlcolor=black,
}
\newcommand{\MYhref}[3][blue]{\href{#2}{\color{#1}{#3}}}%
\urlstyle{same}
\usepackage[letterpaper, portrait, margin=1in]{geometry}
\usepackage{graphicx}
\graphicspath{ {images/} }
\usepackage{array}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\titleclass{\subsubsubsection}{straight}[\subsection]
\newcounter{subsubsubsection}[subsubsection]
\renewcommand\thesubsubsubsection{\thesubsubsection.\arabic{subsubsubsection}}
\titleformat{\subsubsubsection}
{\normalfont\normalsize\bfseries}{\thesubsubsubsection}{1em}{}
\titlespacing*{\subsubsubsection}
{0pt}{3.25ex plus 1ex minus .2ex}{1.5ex plus .2ex}
\makeatletter
\renewcommand\paragraph{\@startsection{paragraph}{5}{\z@}%
{3.25ex \@plus1ex \@minus.2ex}%
{-1em}%
{\normalfont\normalsize\bfseries}}
\renewcommand\subparagraph{\@startsection{subparagraph}{6}{\parindent}%
{3.25ex \@plus1ex \@minus .2ex}%
{-1em}%
{\normalfont\normalsize\bfseries}}
\def\toclevel@subsubsubsection{4}
\def\toclevel@paragraph{5}
\def\toclevel@subparagraph{6}
\def\l@subsubsubsection{\@dottedtocline{4}{7em}{4em}}
\def\l@paragraph{\@dottedtocline{5}{10em}{5em}}
\def\l@subparagraph{\@dottedtocline{6}{14em}{6em}}
\makeatother
\setcounter{secnumdepth}{4}
\setcounter{tocdepth}{4}
\renewcommand{\contentsname}{Table of Contents}
\renewcommand{\listtablename}{Tables}
\renewcommand{\listfigurename}{Figures}
%========================%
% Beginning of Document %
%========================%
\begin{document}
%========================%
% Title Page %
%========================%
\pagenumbering{gobble}
\begin{center}
\Huge{\textbf{Evaluation of reliability of a technical system}}
\\
\vspace{10mm}
\large{
Final project for the course of \\
Artificial Neural Networks 214350 \\
part of the faculty of\\
Electrical Engineering and Informatics\\
from the \\
Zittau/Görlitz University of Applied Sciences\\
}
\Large{
\vspace{15mm}
{\huge B. Eng. Automation and Mechatronics} \\
\vspace{15mm}
{\LARGE By:} \\
\vspace{5mm}
Jesús Jair Reyes Gutiérrez \\
\vspace {15mm}
{\LARGE Project Advisor:} \\
\vspace{5mm}
Prof. Dr.-Ing. W. Kästner \\
\vspace {15mm}
{\LARGE Project level:}\\
\vspace{5mm}
Bachelor\\
\vspace {15mm}
Date: January 2021 \\
}
\vspace{15mm}
\large{\textit{This report represents the work of an undergraduate student for a course that is part of his DHIK double degree program between the Mexican university "Instituto Tecnológico y de Estudios Superiores de Monterrey" and the German university "Hochschule Zittau/Görlitz".}}
\end{center}
%========================%
% Abstract %
%========================%
\newpage
\pagenumbering{roman}
\setcounter{page}{1}
\section*{Abstract}
\par Throughout the pages of this paper, the process followed to design and develop a hybrid Markov model by training a multilayer perceptron with a pre-established data set will be presented, explained and detailed. The DataEngine software was used to build the multilayer perceptron, Excel and MatLab were used for the preparation and pre-processing of the data, and Dynstar was used to implement the hybrid Markov model as a Soft Computing model.
\par Additionally, a brief theoretical framework will be presented with all the necessary concepts to understand each part of the process, as well as the results of each stage, to finally compare the results of the hybrid model with the classical one. At each stage, the justifications for each decision will be included, along with its corresponding implementation.
%========================%
% Acknowledgements %
%========================%
\newpage
\section*{Acknowledgements}
\par I have to start by thanking my professor of Artificial Neural Networks, Prof. Dr.-Ing. W. Kästner, for providing me with the necessary knowledge to understand what a Markov model is and, subsequently, each step of the design process. I also thank him for his mentoring throughout the development of the whole project.
%========================%
% Table of Contents %
%========================%
\newpage
\tableofcontents
\listoftables
\listoffigures
%========================%
% Introduction %
%========================%
\newpage
\begin{doublespacing}
\pagenumbering{arabic}
\setcounter{page}{1}
\section{Introduction}
\par Over time, the tools, machinery and systems of industry have facilitated mass production, thanks in large part to technological advances that have made them more complex, efficient and durable. However, this progress has also posed a great challenge for engineers and operators, since their complexity prevents the development of models that are 100\% adjusted to reality and forces them to be simplified. Therefore, the development of reliable models is one of the greatest challenges of modern industry, since with them it is not only possible to determine the scope of current equipment or processes, but it is also possible to make very accurate forecasts. One of the most relevant forecasts is the one that determines how often it is necessary to perform maintenance on a tool, machinery or system, because maintenance involves not only expenses in itself, but also in production due to downtime.
\subsection{Project Aim}
\par For this reason, this paper proposes to evaluate the reliability of a model based on a hybrid Markov model against that of a classical model. For this purpose, a data set will be presented that will go through different stages to create a multilayer perceptron, which will subsequently be implemented in a computational Markov model that, based on a stress scenario of temperature and humidity, will provide a forecast of the time at which maintenance of the process or system under analysis becomes necessary.
\par The following is a brief explanation of each of the stages through which the initial data set will pass until the objective is met.
\subsubsection{Data pre-processing}
\par In this first stage, a first filter of the received information will be carried out, because, although the information comes directly from the source, the measurements may contain errors that must be eliminated before the subsequent stages.
\subsubsection{Data preparation}
\par In this second filtering stage, the data must be separated for the training and testing of the multilayer perceptron, but, in order to guarantee efficiency, it is necessary to distribute the data in an intelligent way.
\subsubsection{MLP design}
\par This third stage consists of designing a multilayer perceptron in the selected software, under parameters that guarantee the established maximum error requirement of $ \pm 3\%$.
\subsubsection{Mathematical models}
\par This stage presents the design of the Markov model under ideal conditions, which will lead to the calculation of the expected or desired results.
\subsubsection{Simulation models}
\par The Markov model will be implemented in a simulation software. For this purpose, the classical and hybrid models will be included in order to generate a comparison.
\subsubsection{Simulation results}
\par At this stage, the results obtained with both models, the classical and the hybrid, will be presented and their empirical properties described, without any deeper analysis.
\subsubsection{Evaluation}
\par Finally, the results obtained will be evaluated, compared, analyzed and interpreted in order to draw a conclusion about the time required to maintain the analyzed process or equipment. Here the two models, classical and hybrid, will be compared.
\subsection{Background}
\par It is worth mentioning that this project is the result of the Artificial Neural Networks 214350 course taught by Prof. Dr.-Ing. W. Kästner. This course provided all the necessary knowledge for the realization of the project, as well as all the technological tools for its development. As already mentioned, DataEngine, MatLab, Excel and Dynstar were the software used in the development of the project.
%========================%
% Theory %
%========================%
\newpage
\section{Theory and Methodology}
\par First of all, we must begin by defining the importance and necessity of modelling and simulating physical phenomena. Thanks to these tools, it is possible to make prognoses about reality and generate knowledge about the causes that led to the result obtained, as well as what would have happened if something different had been done \cite{kaestner:basics}. Furthermore, modelling and simulating the behavior of a system is not only restricted to a single area, but its application framework ranges from engineering to medicine and social sciences \cite{kaestner:basics}.
\par With this in mind, in order to model any system, it is first necessary to take into account how much is known about it, as well as its characteristics.
\subsection{System}
{
\par A system is a set of components that interact with each other, and it is thanks to these interactions that a system is more than just the sum of its components \cite{kaestner:basics}. In addition, a system must have a purpose that, together with the structure of its components, gives it an identity \cite{kaestner:basics}. Finally, other characteristics of systems are their inputs and outputs, both crossing the system boundaries \cite{kaestner:basics}. The inputs are completely independent, while the outputs depend entirely on the system process, which is characterized by its internal or state variables \cite{kaestner:basics}.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/Partes_del_sistema.JPG}
\caption{System representation \cite{kaestner:basics}}
\label{fig:SystemParts}
\end{figure}
\par As can be seen in Figure \ref{fig:SystemParts}, the system is enclosed by a boundary through which the inputs enter; these then pass between the elements of the system according to its structure, which may include feedback or changes in the system caused by changes of the state variables \cite{kaestner:basics}. In general, a system works by cause and effect: an input arrives and, depending on the organization and parameters of the system, it is transformed into a desired output \cite{kaestner:basics}.
}
\subsection{Model}
{
\par On the other hand, a model is a simplified, scaled reproduction of some phenomenon \cite{kaestner:basics}. Models can be mathematical, conceptual or physical, although they will always be limited by the simplification, neglect, abstraction and aggregation of reality they involve \cite{kaestner:basics}. However, some minimum and indispensable requirements they must meet are verifiability and reproducibility in different environments \cite{kaestner:basics}. That is why, in order to favor verifiability and reproducibility, models can be described or classified in different ways:
\subsubsection{Linear or non-linear:}
\par If the input variable undergoes a proportional change with respect to the output, the superposition principle is satisfied and the steady state does not depend on the initial conditions \cite{kaestner:basics}.
\subsubsection{Static or dynamic:}
\par If the system does not change its behavior during the observation time \cite{kaestner:basics}.
\subsubsection{Externally influenced or autonomous:}
\par If the system is influenced by external variables, i.e. inputs \cite{kaestner:basics}.
\subsubsection{Location dependent or independent:}
\par If the state variables and parameters are interpreted as a set in which their position within the system matters \cite{kaestner:basics}.
\subsubsection{Deterministic or stochastic:}
\par If random external influences on the model are excluded \cite{kaestner:basics}.
\subsubsection{Time-invariant or time-variant:}
\par If the same behavior can be observed with the same initial conditions \cite{kaestner:basics}.
\subsubsection{Time continuous or discrete:}
\par If the state of the system is defined at all points in time \cite{kaestner:basics}.
\vspace{5mm}
\par Now that one has a better understanding of systems and models, an important step to consider before modeling is to decide whether the model will be a model of behavior or a system model. Depending on the decision, one will face different challenges that will require certain knowledge about the system itself.
\subsubsection{Model of behavior}
\par It is a reproduction of the general relationship between input and output without really knowing the internal mechanisms that lead to that change, as in a black box model.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/ModelOfBehavior.JPG}
\caption{Real system vs. Model of behavior \cite{kaestner:basics}}
\label{fig:ModelOfBehavior}
\end{figure}
\par As can be seen in the Figure \ref{fig:ModelOfBehavior}, the model only tries to emulate the relationship between the input and output, so it is limited to the variations that may occur \cite{kaestner:basics}. To develop the model it is necessary to have a sufficiently large database of the input and output to ensure consistency and reproducibility \cite{kaestner:basics}.
\subsubsection{System model}
\par It is a reproduction of all the essential internal interactions of the system, so a deep knowledge of the system is required.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/SystemModel.JPG}
\caption{Real system vs. System model \cite{kaestner:basics}}
\label{fig:SystemModel}
\end{figure}
\par As can be seen in the Figure \ref{fig:SystemModel}, the most essential internal interactions have to be reproduced in order to approximate the behavior as closely as possible, so the model is limited to the accuracy of the reproduction of the system \cite{kaestner:basics}. The system model is most recommended for systems in which there are many non-linearities, the structure is very complex or unknown conditions are to be analyzed \cite{kaestner:basics}.
}
\subsection{Simulation}
\par For the next step, simulation, it is first necessary to design the model, defining the system from its purpose, structure, boundaries and interactions. Then the type of model is chosen, quantified, programmed and verified. Next, the software is chosen and the model is implemented, analyzed and validated. Finally, the behavior is analyzed and possible optimizations are sought. It should be clarified that errors can come from different sources, such as the model itself, data linearization, model discretization, iterations, programming or rounding \cite{kaestner:basics}.
\subsection{Soft Computing}
\par In the case of behavioral modeling, it is necessary to integrate certain data analysis techniques to correctly model the behavior of the system. Some of the most common techniques are artificial neural networks and fuzzy set theory, as they allow the modeling of nonlinear multivariable systems \cite{kaestner:SCANN}.
\subsubsection{Artificial Neural Network (ANN)}
\par This method attempts to emulate biological neurons. Each neuron collects information which causes an excitatory or inhibitory effect that propagates through the neural network \cite{kaestner:SCANN}. This method is also characterized by the fixed number of neurons in the network, as well as the knowledge represented by weights \cite{kaestner:SCANN}. The data is processed by different operations.
\subsubsection{Multi-Layer Perceptron (MLP)}
\par A multilayer perceptron is a structuring of artificial neurons in a fixed network, which consists of several layers or stages of neurons that modify the inputs until they resemble the desired output \cite{kaestner:SCANN}. Generally, multilayer perceptrons consist of a first linear input stage, a second hidden stage that processes the information and a final output stage that delivers the results \cite{kaestner:SCANN}.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{Images/ArtificialNeuron.JPG}
\caption{Multilayer perceptron \cite{kaestner:SCANN}}
\label{fig:ArtificialNeuron}
\end{figure}
\par Figure \ref{fig:ArtificialNeuron} shows how the inputs are passed to the hidden layer: each input is multiplied by a weight before entering the hidden stage, the weighted inputs are summed together with a bias, the result is processed by the transfer function, and the same procedure is repeated to pass the values on to the output layer \cite{kaestner:SCANN}.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{Images/ArtificialNeuronProcess.JPG}
\caption{Artificial neuron \cite{kaestner:SCANN}}
\label{fig:ArtificialNeuronProcess}
\end{figure}
\par Figure \ref{fig:ArtificialNeuronProcess} shows graphically the process described above.
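\par To make this operation explicit, the output of a single artificial neuron $j$ can be written in generic notation (the symbols below are standard names and are not taken from the software) as
\begin{equation}
o_j = \varphi \left( \sum_{i} w_{ij} \, o_i + \theta_j \right)
\label{eq:NeuronOutput}
\end{equation}
where $o_i$ are the outputs of the previous layer, $w_{ij}$ the connection weights, $\theta_j$ the bias of neuron $j$ and $\varphi$ the transfer function of the layer.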
\par Multilayer perceptrons, depending on their structure, can also be classified in 3 different ways. Feed forward if neurons only point to the next layer as in Figure \ref{fig:FeedForward}, lateral if neurons in the same layer can also communicate with each other as in Figure \ref{fig:Lateral}, and recurrent if different layers can interact with each other as in Figure \ref{fig:Recurrent} \cite{kaestner:SCANN}.
\begin{figure}[H]
\centering
\includegraphics[width=0.35\textwidth]{Images/FeedForward.JPG}
\caption{Artificial neural networks in a feed forward configuration \cite{kaestner:SCANN}}
\label{fig:FeedForward}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.35\textwidth]{Images/Lateral.JPG}
\caption{Artificial neural networks in a lateral configuration \cite{kaestner:SCANN}}
\label{fig:Lateral}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.35\textwidth]{Images/Recurrent.JPG}
\caption{Artificial neural networks in a recurrent configuration \cite{kaestner:SCANN}}
\label{fig:Recurrent}
\end{figure}
\subsubsection{Multi-Layer Perceptron (MLP) Design}
\par For the design of an MLP in the selected software, DataEngine, it is necessary to first define some parameters related to the structure and the way the information is processed. First, it is important to choose the strategy for adjusting the values of the weights, in this case backpropagation with a momentum term \cite{kaestner:SCANN}. Subsequently, it is necessary to define the bias neuron, i.e. a threshold value that acts as an additional neuron and provides the interpolation capability \cite{kaestner:SCANN}. Next, symmetry breaking must be defined, i.e. the initialization of the weights with random values to avoid errors caused by identical values \cite{kaestner:SCANN}. Then weight decay must be defined, the process of homogenizing the distribution of weights in order to avoid critical nodes that carry too much weight in the final result \cite{kaestner:SCANN}. And finally pruning must be defined, the process of eliminating non-relevant connections and simplifying the model \cite{kaestner:SCANN}.
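\par As an illustration of how these ingredients fit together, the weight update of backpropagation with a momentum term and weight decay can be sketched in generic form; the symbols $\eta$, $\alpha$ and $\gamma$ are standard names that only correspond conceptually to the learn ratio, momentum and weight-decay parameters listed later:
\begin{equation}
\Delta w_{ij}(t) = -\eta \, \frac{\partial E}{\partial w_{ij}} + \alpha \, \Delta w_{ij}(t-1) - \gamma \, w_{ij}
\label{eq:WeightUpdate}
\end{equation}
where $E$ is the error measure of the network and $t$ counts the learning iterations.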
\subsection{Markov Model}
\par However, one of the main components of this project is the Markov model, which, in a nutshell, is a model that estimates the lifetime of a system depending on its stress conditions. To understand the model, it is necessary to first define some concepts, such as reliability assessment, which is the prediction of system failures based on probabilities \cite{kaestner:Mark}. It is also important to define reliability, which is the description of system availability and the factors that influence it, such as functional capability, maintainability, etc. \cite{kaestner:Mark}.
\par Another important concept is availability, which is the probability that the system will be in a functional state at a defined time \cite{kaestner:Mark}. Non-availability is the complementary value to this and permanent availability is the parameter for the description of a repairable system \cite{kaestner:Mark}.
\par Going into a little more detail, the Markov model starts from the idea that any system, as in Figure \ref{fig:BinaryModel}, can be found in one of two states, either intact or defective, and that the transition from one state to the other is governed by a failure or repair probability \cite{kaestner:Mark}.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{Images/BinaryModelSystemState.JPG}
\caption{Binary model system state \cite{kaestner:Mark}}
\label{fig:BinaryModel}
\end{figure}
\subsubsection{Failure rate}
\par The failure rate describes the probability that a system which has not failed up to a time t will fail immediately afterwards \cite{kaestner:Mark}. It can be calculated by dividing the failure probability density by the probability of survival, or reliability \cite{kaestner:Mark}.
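\par Expressed as a formula, this reads
\begin{equation}
\lambda(t) = \frac{f(t)}{R(t)}
\label{eq:FailureRateDef}
\end{equation}
where $f(t)$ denotes here the failure probability density and $R(t)$ the reliability.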
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Images/BathCurve.JPG}
\caption{Failure rate curve \cite{kaestner:Mark}}
\label{fig:BathCurve}
\end{figure}
\par As can be seen in Figure \ref{fig:BathCurve}, the failure rate is a function of time describing a bath-tub curve \cite{kaestner:Mark}. This curve can be divided into three main phases: a first phase of early failures, where elements tend to fail during their early use but the failure rate then declines \cite{kaestner:Mark}; a second phase of normal failures, where the highest reliability of the system is reached and which is the desired period of operation \cite{kaestner:Mark}; and finally the wear-out phase, where systematic failures occur due to aging, fatigue and wear-out of the components, and which is the phase where maintenance or corrective measures should be provided \cite{kaestner:Mark}. The lifetime of a component or system can also be obtained or visualized from the graph. An important characteristic of the failure behavior is that the lifetime has a continuous distribution, mathematically an exponential distribution \cite{kaestner:Mark}.
$$ \textrm{Failure rate} = \lambda $$
$$ \textrm{Repair rate} = \mu $$
\begin{itemize}
\item Reliability or probability of survival $ R(t) $
\begin{equation}
R(t) = e^{-\lambda t}
\label{eq:Rt1}
\end{equation}
\item Probability of failure $F(t)$
\begin{equation}
F(t) = 1 - e^{-\lambda t}
\label{eq:Ft1}
\end{equation}
\item Life time $\overline{T}$
\begin{equation}
\overline{T} = \int_{0}^{\infty} R(t) \,dt = \frac{1}{\lambda}
\label{eq:T1}
\end{equation}
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Images/FailurevSurvival.JPG}
\caption{Failure vs. survival rate for a single component with/without repair \cite{kaestner:Mark}}
\label{fig:FailurevSurvival}
\end{figure}
\subsubsection{Redundancy}
\par Another important concept to take into account is redundancy, which is, in general terms, the arrangement of multiple elements of the same kind \cite{kaestner:Mark}. This provides a system with protection against random independent failures \cite{kaestner:Mark}.
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{Images/SingleComponent.JPG}
\caption{Single component \cite{kaestner:Mark}}
\label{fig:SingleComponent}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Images/TwoComponents.JPG}
\caption{Two components in parallel \cite{kaestner:Mark}}
\label{fig:TwoComponents}
\end{figure}
$$\textrm{Single component}$$
$$W(a)=W(e)=p=0.9$$
$$\textrm{Two components}$$
$$W(a)=1-\prod_{i=1}^{2} (1-p_{i})$$
$$W(a)=1-(1-2p+p^2)$$
\begin{equation}
W(a)=2p-p^2=0.99
\end{equation}
\par As can be seen in Figures \ref{fig:SingleComponent} and \ref{fig:TwoComponents}, thanks to redundancy it is possible to increase the survival probability of a component from 90\% to 99\%, i.e. by an additional 9\% \cite{kaestner:Mark}.
\par Translating this into the analogues of equations \ref{eq:Rt1}, \ref{eq:Ft1} and \ref{eq:T1} shows that the lifetime of the system can be increased by up to 50\% with respect to the original model \cite{kaestner:Mark}.
\begin{itemize}
\item Reliability or probability of survival $ R(t) $
\begin{equation}
R(t) = 1- F(t)=2e^{-\lambda t}-e^{-2 \lambda t}
\label{eq:Rt2}
\end{equation}
\item Probability of failure $F(t)$
\begin{equation}
F(t) = (1 - e^{-\lambda t})^2 = 1-2e^{-\lambda t}+e^{-2 \lambda t}
\label{eq:Ft2}
\end{equation}
\item Life time $\overline{T}$
\begin{equation}
\overline{T} = \int_{0}^{\infty} R(t) \,dt = \frac{2}{\lambda} - \frac{1}{2 \lambda}=1.5 \frac{1}{\lambda}
\label{eq:T2}
\end{equation}
\end{itemize}
\subsubsection{Markov model: Evaluation of reliability}
\par Within the Markov model, the probabilities of the states of the binary model are used to evaluate the probability of the state in which the system is found \cite{kaestner:Mark}. The Markov model is governed by the failure and repair probabilities, since from these a system of k coupled differential equations is constructed, where k is the number of states of the system \cite{kaestner:Mark}.
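\par For the binary model of Figure \ref{fig:BinaryModel}, this system of differential equations takes the following form, shown here only to illustrate the general structure:
\begin{equation}
\frac{d}{dt}
\begin{pmatrix} P_1(t) \\ P_2(t) \end{pmatrix}
=
\begin{pmatrix} -\lambda & \mu \\ \lambda & -\mu \end{pmatrix}
\begin{pmatrix} P_1(t) \\ P_2(t) \end{pmatrix},
\qquad P_1(t) + P_2(t) = 1
\label{eq:BinaryMarkov}
\end{equation}
where $P_1$ is the probability of the intact state and $P_2$ that of the defective state.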
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Images/Markov.JPG}
\caption{Markov model \cite{kaestner:Mark}}
\label{fig:Markov}
\end{figure}
\subsection{Methodology}
\par Regarding the process to be followed for the construction of the multilayer perceptron and its use in the Markov model: for the construction of the MLP, it is first necessary to pre-process the data, i.e. to filter out any erroneous data. Then a second filtering stage is required to separate the data for testing and training. Next, the MLP is first trained with pruning in order to determine a more optimized structure; the optimizations are then applied, pruning is removed and the network is trained again, after which the test data are run and the results are evaluated for their adequacy. Finally, the MLP is implemented in a simulation software with the connections configured according to the Markov model, the simulation is run, the results are obtained and analyzed, and a conclusion is drawn.
%========================%
% Data-preprocessing %
%========================%
\newpage
\section{Data pre-processing}
\par As already mentioned, we are going to start with an initial data set which, being collected directly from the measurement system, may or may not come with errors. The first possible origin of these errors is the sensor itself, which may produce an erroneous reading due to some internal fault; these errors will be called measuring errors. Another possible origin is the processing of the sensor signal, which manifests itself in the form of out-of-range values, here called sensor defects. The last possible origin is duplicate or redundant values that only hinder the processing.
\subsection{Atypical values}
\par As already mentioned, we are going to start with a data set, see Table \ref{table:InitialData} in Appendix \ref{appendix:tables}. These data were taken directly from the measurement system, so a first filtering is required in order to deliver only valid data to the next stage. To better visualize this, Figures \ref{fig:RawData2} and \ref{fig:RawData1} show certain values that stand out for having a completely different behavior from the rest and that, if processed, would result in an inconsistent model.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Images/RawData2.JPG}
\caption{Raw data visualization}
\label{fig:RawData2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Images/RawData1.JPG}
\caption{Raw data on a mesh-grid}
\label{fig:RawData1}
\end{figure}
\subsubsection{Sensor defect}
\par The purpose of this first filter is to eliminate all those values that are physically impossible in real life. These can be easily detected as out-of-range values; in this case, since the measurements lie between 0 and 1, two values of -1 will be eliminated. The remaining values give the behavior of Figure \ref{fig:FirstFilter1}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Images/FirstFilter1.JPG}
\caption{Data after first filter}
\label{fig:FirstFilter1}
\end{figure}
\par As can be seen, in spite of having eliminated the most important outliers, the graph still has points where the general trend has a sudden change, but this will be analyzed later.
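\par As an illustration of this first filter, the following minimal MATLAB sketch removes the out-of-range values; it assumes the raw measurements are stored in a matrix \texttt{data} with columns X1, X2 and Y, and it is not the actual implementation, which is Code \ref{code:MatLab} in Appendix \ref{appendix:codes}.
\begin{lstlisting}[language=Matlab]
% Minimal sketch of the sensor-defect filter (illustrative only):
% keep only rows whose values all lie inside the physical range [0, 1].
inRange = all(data >= 0 & data <= 1, 2);
data    = data(inRange, :);
\end{lstlisting}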
\subsubsection{Redundant data}
\par This next stage of filtering is a little more complicated to visualize graphically, as most graphing software eliminates these values by itself and it is only possible to detect them manually. For the given data set, there are two measurements for the same value of X1 = 0.88. Therefore, it was decided to eliminate five repeated values which, despite having more significant figures, do not add any precision.
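\par Under the same assumptions as above, this duplicate filter could be sketched as follows; again, the appendix code is the authoritative version.
\begin{lstlisting}[language=Matlab]
% Minimal sketch of the redundant-data filter (illustrative only):
% keep the first row of every distinct (X1, X2) pair and drop the repeats.
[~, keep] = unique(data(:, 1:2), 'rows', 'stable');
data      = data(keep, :);
\end{lstlisting}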
\subsubsection{Measuring errors}
\par Finally, Figure \ref{fig:FirstFilter1} shows that there are two values that bring an abrupt change to the general behavior of the data. To eliminate them, the behavior of the data is simply analyzed and any value that changes the gradient abruptly is removed. After running the code, two data points were removed; the remaining data are shown in Figures \ref{fig:Processed2} and \ref{fig:Processed1}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Images/ProcessedData2.JPG}
\caption{Final processed data graph}
\label{fig:Processed2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Images/ProcessedData1.JPG}
\caption{Final processed data mesh-grid}
\label{fig:Processed1}
\end{figure}
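\par The jump detection described above can be sketched in MATLAB as follows; the threshold of three standard deviations is an assumption chosen for illustration and is not the exact criterion of Code \ref{code:MatLab}.
\begin{lstlisting}[language=Matlab]
% Minimal sketch of the measuring-error filter (illustrative only):
% sort by X1 and drop points whose change in Y breaks the local trend.
[~, order] = sort(data(:, 1));
data  = data(order, :);
dy    = diff(data(:, 3));
jump  = [false; abs(dy) > 3*std(dy)];   % assumed threshold
data  = data(~jump, :);
\end{lstlisting}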
%========================%
% Data-preparation %
%========================%
\newpage
\section{Data preparation}
\par For both the pre-processing and data preparation sections, the Code \ref{code:MatLab} in Appendix \ref{appendix:codes} was designed, programmed and tested. This code performed the filtering described in the previous section, as well as the separation of the data into two sets, one for training and one for testing.
\subsection{Considerations}
\par For this stage we sought to implement a process that would not only guarantee a maximum error within the limit of 3\% for MLP training and testing, but that would also be flexible and allow several possible training and testing sets. For this purpose, a variable called tolerance was created which, in short, reserves the values within the tolerance of the maxima and minima for training and leaves the remaining data available for random selection, where the user determines the amount of data for testing.
\subsection{Data}
\par In the case of this project, a tolerance of 20\% was selected with a test data set of at most 20 points. The program was run, performed the filtering and the separation of the data, and finally delivered the two data sets.
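\par To make the procedure concrete, the following minimal sketch reproduces the idea of the tolerance-based split with the parameters just mentioned; the variable names are illustrative and the actual implementation is Code \ref{code:MatLab} in Appendix \ref{appendix:codes}.
\begin{lstlisting}[language=Matlab]
% Minimal sketch of the tolerance-based training/test split (illustrative only).
tol   = 0.20;                  % tolerance around the extremes
nTest = 20;                    % maximum number of test points
span  = max(data) - min(data);
lo    = min(data) + tol*span;  % rows close to the extremes of X1, X2 or Y
hi    = max(data) - tol*span;  % stay in the training set
candidates = find(all(data >= lo & data <= hi, 2));
pick  = candidates(randperm(numel(candidates), min(nTest, numel(candidates))));
testSet  = data(pick, :);
trainSet = data(setdiff(1:size(data, 1), pick), :);
\end{lstlisting}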
\subsubsection{Training data}
\par As can be seen, the data set obtained contains the values within the tolerance of the maxima and minima of the three variables X1, X2 and Y. This decision ensures that the largest gradients, i.e. the groups of data that would contribute the greatest error, are already covered by the training and therefore the resulting error is minimal. There is a total of 41 data points for training.
\begin{longtable}[c]{|c|c|c|}
\hline
X1 & X2 & Y \\ \hline
\endfirsthead
%
\multicolumn{3}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
0.29 & 0.1 & 0.06 \\ \hline
0.41 & 0.1 & 0.08 \\ \hline
0.29 & 0.3 & 0.115 \\ \hline
0.59 & 0.1 & 0.125 \\ \hline
0.29 & 0.5 & 0.155 \\ \hline
0.71 & 0.1 & 0.17 \\ \hline
0.29 & 0.75 & 0.19 \\ \hline
0.29 & 1 & 0.21 \\ \hline
0.82 & 0.1 & 0.22 \\ \hline
0.88 & 0.1 & 0.25 \\ \hline
0.94 & 0.1 & 0.27 \\ \hline
0.97 & 0.1 & 0.275 \\ \hline
1 & 0.1 & 0.28 \\ \hline
0.41 & 1 & 0.285 \\ \hline
0.59 & 1 & 0.45 \\ \hline
0.88 & 0.3 & 0.49 \\ \hline
0.94 & 0.3 & 0.535 \\ \hline
0.97 & 0.3 & 0.545 \\ \hline
1 & 0.3 & 0.55 \\ \hline
0.71 & 1 & 0.605 \\ \hline
0.88 & 0.5 & 0.66 \\ \hline
0.94 & 0.5 & 0.72 \\ \hline
0.97 & 0.5 & 0.735 \\ \hline
1 & 0.5 & 0.74 \\ \hline
0.82 & 1 & 0.795 \\ \hline
0.88 & 0.75 & 0.8 \\ \hline
0.94 & 0.75 & 0.87 \\ \hline
0.88 & 1 & 0.89 \\ \hline
0.97 & 0.75 & 0.895 \\ \hline
1 & 0.75 & 0.9 \\ \hline
0.94 & 1 & 0.97 \\ \hline
0.97 & 1 & 0.995 \\ \hline
1 & 1 & 1 \\ \hline
0.35 & 0.2 & 0.12 \\ \hline
0.35 & 0.4 & 0.16 \\ \hline
0.41 & 0.3 & 0.16 \\ \hline
0.5 & 0.2 & 0.16 \\ \hline
0.35 & 0.6 & 0.2 \\ \hline
0.41 & 0.5 & 0.21 \\ \hline
0.35 & 0.8 & 0.22 \\ \hline
0.65 & 0.2 & 0.22 \\ \hline
\caption{Training data set}
\label{table:TrainingData}
\end{longtable}
\subsubsection{Test data}
\par As can be seen, the data obtained for testing are average, very standard values of the three parameters X1, X2 and Y. All the maximum and minimum values, together with the tolerance around them, were separated beforehand, so that removing these data to form the test set should not alter the general behavior of the system. There is a total of 20 data points for testing.
\begin{longtable}[c]{|c|c|c|}
\hline
\textbf{X1} & \textbf{X2} & \textbf{Y} \\ \hline
\endfirsthead
%
\multicolumn{3}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
0.5 & 0.4 & 0.25 \\ \hline
0.59 & 0.3 & 0.25 \\ \hline
0.41 & 0.75 & 0.26 \\ \hline
0.5 & 0.6 & 0.3 \\ \hline
0.5 & 0.8 & 0.33 \\ \hline
0.59 & 0.5 & 0.335 \\ \hline
0.71 & 0.3 & 0.335 \\ \hline
0.65 & 0.4 & 0.35 \\ \hline
0.85 & 0.2 & 0.36 \\ \hline
0.59 & 0.75 & 0.41 \\ \hline
0.65 & 0.6 & 0.43 \\ \hline
0.82 & 0.3 & 0.435 \\ \hline
0.71 & 0.5 & 0.45 \\ \hline
0.65 & 0.8 & 0.48 \\ \hline
0.71 & 0.75 & 0.555 \\ \hline
0.85 & 0.4 & 0.56 \\ \hline
0.82 & 0.5 & 0.59 \\ \hline
0.85 & 0.6 & 0.69 \\ \hline
0.82 & 0.75 & 0.715 \\ \hline
0.85 & 0.8 & 0.78 \\ \hline
\caption{Test data set}
\label{table:TestData}
\end{longtable}
%========================%
% MLP design %
%========================%
\newpage
\section{MLP design}
\par As already mentioned, the design of an MLP requires the definition of several variables. These variables range from the number of inputs and outputs to the number of hidden layers, back-propagation, momentum, etc. The definition of these variables is of vital importance because, as mentioned in the methodology, they allow us to calculate a first model with pruning, which gives us the optimal number of connections, and then a second model without pruning that optimally models the data set without losing interpolation capacity. The overall goal that must be achieved is an MLP with a maximum error lower than $\pm 3\%$. In Appendix \ref{appendix:steps} the whole design process within the software's windows can be seen.
\subsection{Parameters}
\par There are some parameters that will not change between the models with and without pruning, namely the following:
\begin{itemize}
\item Learning parameters
\begin{longtable}[c]{|c|c|c|c|}
\hline
& \textbf{First hidden layer} & \textbf{Second hidden layer} & \textbf{Output layer} \\ \hline
\endfirsthead
%
\multicolumn{4}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
\textbf{Learn ratio} & 0.1 & 0.1 & 0.9999999 \\ \hline
\textbf{Momentum} & 0.1 & 0.1 & 0.9999999 \\ \hline
\textbf{Weight-decay} & 0.1 & 0.1 & 0.9999999 \\ \hline
\caption{Learning parameters}
\end{longtable}
\item Weights initialization
\begin{longtable}[c]{|l|l|}
\hline
\textbf{Lower limit:} & -0.1 \\ \hline
\endfirsthead
%
\multicolumn{2}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
\textbf{Upper limit:} & 0.1 \\ \hline
\textbf{Initial value:} & 1234567890 \\ \hline
\caption{Weights initialization}
\end{longtable}
\item Stop conditions
\begin{longtable}[c]{|c|c|}
\hline
\textbf{\textless{}max. training error} & 0.0001 \\ \hline
\endfirsthead
%
\multicolumn{2}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
\textbf{\textless{}RMS training error} & 0.0001 \\ \hline
\textbf{\textless{}max. test error} & 0.0001 \\ \hline
\textbf{\textless{}RMS test error} & 0.0001 \\ \hline
\textbf{number of iterations is divisible by} & 100,000 \\ \hline
\textbf{iterations} & 100,000 \\ \hline
\textbf{error measurement} & 0.9 RMS error + 0.1 max. error \\ \hline
\textbf{Based on} & test error \\ \hline
\caption{Stop conditions}
\end{longtable}
\end{itemize}
\par As mentioned in the methodology, the MLP development process requires a first design phase with pruning and a second phase without pruning. However, both share the same data and the parameters described above. The main differences come from the network structure itself and the learning process used.
\subsubsection{Before pruning}
\par Before pruning, the network structure must have more connections than would strictly be needed for the data. Since by default there are two inputs and one output, the additional connections must come from the hidden layers; for this reason it was decided to use two hidden layers instead of one, because a single hidden layer would have had to be very large compared to its neighboring layers.
\begin{longtable}[c]{|c|c|c|}
\hline
\textbf{} & \textbf{Number of neurons} & \textbf{Transfer function} \\ \hline
\endfirsthead
%
\multicolumn{3}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
\textbf{Input:} & 2 & Linear \\ \hline
\textbf{First hidden layer:} & 6 & Tanh \\ \hline
\textbf{Second hidden layer:} & 5 & Tanh \\ \hline
\textbf{Output:} & 1 & Linear \\ \hline
\caption{Architecture with pruning}
\end{longtable}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/Photos/GraphWP.JPG}
\caption{Graph of the artificial neural network with pruning}
\label{fig:GraphWP}
\end{figure}
\par Finally, with respect to the main topic of this section, pruning had to be activated in the learning process along with certain parameters for it, such as the relevance threshold and the time constant, so that the algorithm can determine which connections to eliminate. Once run, the optimal number of connections was obtained; in this case only 9 connections are really necessary. This structure was then applied to the next model and optimized.
\begin{longtable}[c]{|c|c|}
\hline
\textbf{Process} & Backpropagation \\ \hline
\endfirsthead
%
\multicolumn{2}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
\textbf{Momentum} & ON \\ \hline
\textbf{Weight-decay} & ON \\ \hline
\textbf{Learning strategy} & Single step (delta) \\ \hline
\textbf{Presentation sequence} & Sequential \\ \hline
\textbf{Pruning} & ON \\ \hline
\textbf{Relevance threshold} & 0.02 \\ \hline
\textbf{Time constant} & 10 \\ \hline
\caption{Learning process with pruning}
\end{longtable}
\par Once all the variables were set in the DataEngine software, the results shown in Figures \ref{fig:BeforePruning1} and \ref{fig:BeforePruning2} were obtained. As can be seen, the maximum and minimum errors, although they come from a single data point, are larger than expected and therefore exceed the imposed limit of 3\%. Nevertheless, since the most optimal model should have only 9 connections, this adjustment will be made to check whether the model improves. The error graph in Figure \ref{fig:ErrorWP} also shows how the error developed.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/Photos/ErrorWP.JPG}
\caption{Error development for network with pruning}
\label{fig:ErrorWP}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/Fotos/Diapositiva5.PNG}
\caption{Results before optimization}
\label{fig:BeforePruning1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Images/Fotos/Diapositiva6.PNG}
\caption{Error scope before optimization}
\label{fig:BeforePruning2}
\end{figure}
\par The pruning run also indicates that the most relevant connections are the following:
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/Photos/WeightsValuesWP.JPG}
\caption{Weights values with pruning}
\label{fig:WVWP}
\end{figure}
\subsubsection{After pruning}
\par Once the number of connections is optimized, it is implemented in the model already created by changing the sizes of the hidden layers, because, as mentioned, the inputs and outputs are predefined.
\begin{longtable}[c]{|c|c|c|}
\hline
\textbf{} & \textbf{Number of neurons} & \textbf{Transfer function} \\ \hline
\endfirsthead
%
\multicolumn{3}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
\textbf{Input:} & 2 & Linear \\ \hline
\textbf{First hidden layer:} & 3 & Tanh \\ \hline
\textbf{Second hidden layer:} & 2 & Tanh \\ \hline
\textbf{Output:} & 1 & Linear \\ \hline
\caption{Architecture after pruning}
\end{longtable}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/Photos/GraphOP.JPG}
\caption{Graph of artificial neural network without pruning}
\label{fig:GraphOP}
\end{figure}
\par Finally, only the pruning option is deactivated to calculate the final model.
\begin{longtable}[c]{|c|c|}
\hline
\textbf{Process} & Backpropagation \\ \hline
\endfirsthead
%
\multicolumn{2}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\endhead
%
\textbf{Momentum} & ON \\ \hline
\textbf{Weight-decay} & ON \\ \hline
\textbf{Learning strategy} & Single step (delta) \\ \hline
\textbf{Presentation sequence} & Sequential \\ \hline
\textbf{Pruning} & OFF \\ \hline
\textbf{Relevance threshold} & - \\ \hline
\textbf{Time constant} & - \\ \hline
\caption {Learning process without pruning}
\end{longtable}
\par As can be seen in Figures \ref{fig:AfterPruning1} and \ref{fig:AfterPruning2}, the results on the training data are much more satisfactory and lie within the established limit of $\pm 3\%$, as they are -1.0\% and 1.2\%. The error graph also shows an improvement.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Images/Photos/ErrorOP.JPG}
\caption{Error progression without pruning}
\label{fig:ErrorOP}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Images/Fotos/Diapositiva8.PNG}
\caption{Results after optimization}
\label{fig:AfterPruning1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{Images/Fotos/Diapositiva10.PNG}
\caption{Error scope after optimization}
\label{fig:AfterPruning2}
\end{figure}
\par Finally, with the new model, it is possible to run the test data and verify whether the model is really valid or whether it is necessary to change the parameters. As can be seen in Figure \ref{fig:ResultsTest}, the minimum and maximum errors are 0.3\% and 2.6\%, so they are effectively within the limit and the model is valid for the next stage.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Images/Photos/ErrorOP2.JPG}
\caption{Error progression without pruning for test}
\label{fig:ErrorOP2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{Images/Fotos/Diapositiva12.PNG}
\caption{Results test}
\label{fig:ResultsTest}
\end{figure}
%========================%
% Math Models %
%========================%
\newpage
\section{Mathematical models}
\par For the calculations, we will first define our Markov model, obtain the equations that model the process and determine the connections between the elements that need to be made. To begin with, we will start from the idea that the model has parallel active redundancy, which means that two components are responsible for delivering the final result. As a consequence, instead of there being only two states, one of failure and the other of operation, there are now 4, one for each combination of the states of the two components, because if one fails, the other is still in operation. It is also assumed that both components cannot fail at the same time. Having said that, it is necessary to define some variables, as well as to clarify what is being assumed about the model.
\subsection{Calculations}
\par First, we begin by defining our inputs, outputs and ranges:
\begin{equation}
x_1 = \beta_1 \textit{; first stress parameter, which will be the environmental temperature}
\end{equation}
\begin{equation}
x_2 = \beta_2 \textit{; second stress parameter, which will be the moisture or humidity in the environment}
\end{equation}
\begin{equation}
y = f \textit{; Output of the system}
\end{equation}
\par Also it is important to mention that according to the Markov model, it is assumed that $\tilde{\lambda}$ can be calculated through the following equation.
\begin{equation}
\tilde{\lambda}(t) = \lambda_0 + \lambda f(t)
\label{eq:Lambda}
\end{equation}
$$\lambda_0 = 0.04 \frac{1}{Tu}$$
$$\lambda = 0.05\frac{1}{Tu}$$
$$ f(t) =Y= \textit{output of the model} $$
\par This states that the higher the model output $f(t)$, the further $\tilde{\lambda}$ rises above its base value $\lambda_0$; the system is then under greater stress than usual and its lifetime is correspondingly shortened.
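\par As a purely illustrative example of Equation \ref{eq:Lambda}, for the extreme outputs of the model one obtains
\begin{equation}
\tilde{\lambda}\big|_{f=0} = 0.04 \, \tfrac{1}{Tu}, \qquad
\tilde{\lambda}\big|_{f=1} = 0.04 + 0.05 = 0.09 \, \tfrac{1}{Tu}
\end{equation}
so that, for a single component without repair (Equation \ref{eq:T1}), the expected lifetime would drop from $1/0.04 = 25\,Tu$ under no stress to roughly $1/0.09 \approx 11\,Tu$ under maximum stress.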
\par Once the variables above have been established, it is time to develop the Markov model under the desired specifications of an active redundant model.
\par The number of states is going to be 4, since there are two components, A and B, each of which can be in one of two states, operation or failure.
$$\textit{State1} = P_1 (t)= (A,B)$$
$$\textit{State2} = P_2 (t)= (\overline{A} ,B)$$
$$\textit{State3} = P_3 (t)= (A,\overline{B})$$
$$\textit{State4} = P_4 (t)= (\overline{A},\overline{B})$$
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Images/MarkovDiagram.JPG}
\caption{Model diagram}