OHE doc, Activation functions+doc #102
base: main
Conversation
#include <math.h>
//sigmoid
double sigmoid(double x) {
  return 1 / (1 + exp(-x));
Use std::expl (or std::exp) instead of exp everywhere.
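For reference, a minimal sketch of the sigmoid with that change applied (assuming the <cmath> header replaces <math.h>; illustrative only, not the PR's final code):

```cpp
#include <cmath>

// sigmoid(x) = 1 / (1 + e^(-x)), using std::exp from <cmath> as suggested
double sigmoid(double x) {
    return 1.0 / (1.0 + std::exp(-x));
}
```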
}
//tan inverse
double arctan(double x) {
  return atan(x);
Use std::atan(...) instead of atan(...).
docs/methods/activation_functions.md (Outdated)
## Example

```
Suggested change: open the code block with ```cpp instead of a bare ``` fence.
| Name | Definition | Return value |
|------|------------|--------------|
| `oneHotEncoder(vector<T> data, nClasses)` | To encode the data into numerical values. | `vector<int>` |
What is the type of nClasses?
## Example

```
Suggested change: open the code block with ```cpp instead of a bare ``` fence.



Remove these empty lines.
template<class T>

double sigmoid(double x);
double tanh(double x);
double ReLU(double x);
double leakyReLU(double x, double alpha);
std::vector<double> softmax(const std::vector<double> &x);
double arctan(double x);
double binaryStep(double x);
Add doc comments above each function. Example for sigmoid:
/**
 * @brief To calculate sigmoid(x)
 * @param x: Number whose sigmoid value is to be calculated
 * @return a double value representing sigmoid(x)
 */
done
Only a few changes required and then we're good to go
docs/methods/activation_functions.md (Outdated)
tanh- The Tanh activation function is a hyperbolic tangent sigmoid function that has a range of -1 to 1. It is often used in deep learning models for its ability to model nonlinear boundaries

tan-1h-The inverse of tanh.The ArcTan function is a sigmoid function to model accelerating and decelerating outputs but with useful output ranges.
Arctan is not the inverse of tanh but of the normal trigonometric tan(x) function.
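A small illustrative snippet of the distinction (not part of the PR; printed values are approximate):

```cpp
#include <cmath>
#include <iostream>

int main() {
    double x = 1.0;
    // std::atan inverts the trigonometric tan: atan(tan(x)) == x for x in (-pi/2, pi/2)
    std::cout << std::atan(std::tan(x)) << '\n';  // prints 1
    // std::tanh is the hyperbolic tangent, a different function with range (-1, 1)
    std::cout << std::tanh(x) << '\n';            // ~0.7616, while atan(1.0) is ~0.7854
    return 0;
}
```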
| Name | Definition | Type |
|------|------------|------|
| data | The data that has to be encoded is passed as the data parameter in the oneHotEncoder function. | `vector<int>` |
Data would be vector<string> and not vector<int>.
| Name | Definition | Return value |
|------|------------|--------------|
| `oneHotEncoder(vector<T> data, nClasses)` | To encode the data into numerical values. | `vector<int>` |
Return value of this method is vector<vector<int>>
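Putting these review points together, a rough sketch of what the encoder's shape could look like (the label-to-index mapping, the int type for nClasses, and the exact parameter names are assumptions; the PR's real implementation may differ):

```cpp
#include <map>
#include <string>
#include <vector>

// One-hot encode string labels: data is vector<string>, nClasses is an int,
// and the result has one row of length nClasses per input label.
std::vector<std::vector<int>> oneHotEncoder(const std::vector<std::string> &data,
                                            int nClasses) {
    std::map<std::string, int> index;            // label -> column index
    std::vector<std::vector<int>> encoded;
    encoded.reserve(data.size());
    for (const auto &label : data) {
        auto it = index.find(label);
        if (it == index.end()) {
            int next = static_cast<int>(index.size());
            it = index.emplace(label, next).first;
        }
        std::vector<int> row(nClasses, 0);
        if (it->second < nClasses) {
            row[it->second] = 1;                 // assumes labels fit in nClasses
        }
        encoded.push_back(std::move(row));
    }
    return encoded;
}
```

For example, with input {"cat", "dog", "cat"} and nClasses = 2, this sketch would return {{1,0}, {0,1}, {1,0}}.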
#include "activation_functions.hpp"
template<class T>
// sigmoid
double sigmoid(double x)
You need to write these functions for a vector and not a single value
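As a sketch, the vectorized form the reviewer is describing might look like this (still returning a new vector; the later "in-place" comment changes that):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Apply the sigmoid element-wise over a whole vector instead of one value
std::vector<double> sigmoid(const std::vector<double> &x) {
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); i++) {
        y[i] = 1.0 / (1.0 + std::exp(-x[i]));
    }
    return y;
}
```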
I'm changing all the functions
all changes updated
}
<<<<<<< HEAD
// leaky ReLU
double leakyReLU(double x, double alpha)
Set default value of alpha to 0.1
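A minimal sketch of the scalar version with that default (illustrative only; the PR later moves to the vector form):

```cpp
// Leaky ReLU with alpha defaulting to 0.1 as requested
double leakyReLU(double x, double alpha = 0.1) {
    return x >= 0 ? x : alpha * x;
}
```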
} else {
  return alpha * x;
}
}
Add a function to convert binary to bipolar and vice-versa
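A sketch of what such helpers could look like; the names binaryToBipolar / bipolarToBinary and the element-wise 0/1 to -1/+1 mapping are assumptions, since the comment does not name them:

```cpp
#include <cstddef>
#include <vector>

// Binary {0, 1} -> bipolar {-1, +1}: b -> 2b - 1
std::vector<double> binaryToBipolar(const std::vector<double> &x) {
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); i++) {
        y[i] = 2.0 * x[i] - 1.0;
    }
    return y;
}

// Bipolar {-1, +1} -> binary {0, 1}: b -> (b + 1) / 2
std::vector<double> bipolarToBinary(const std::vector<double> &x) {
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); i++) {
        y[i] = (x[i] + 1.0) / 2.0;
    }
    return y;
}
```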
<<<<<<< HEAD
* @param x {double x} - double value on which the function is applied.
*
* @param x {vector<double>} - vector containing 'double' values of x for
* softmax activation function implementation.
*
* @return {double value} - double value after putting x in the functions gets
* returned.
=======
Suggested change: delete the conflict-marked lines above (the <<<<<<< HEAD block and its duplicated doc comments).
@Ishwarendra sir, the changes you've asked me to make are not visible in my VS Code; those unwanted lines of code are already not there.
I formatted all the source files using Git Bash since the clang error was there, but after committing and pushing those changes, both checks are still failing. How do I fix it?
=======
//leaky ReLU
double leakyReLU(double x, double alpha) {
  if (x >= 0) {
    return x;
  } else {
    return alpha * x;
  }
}
>>>>>>> 5eebc29054fab6686e728aca29e64e1c53dd7a8c
Suggested change: delete this duplicated leakyReLU block along with the ======= and >>>>>>> conflict markers.
Co-authored-by: Ishwarendra Jha <75680424+Ishwarendra@users.noreply.github.com>
@NandiniGera Run formatter and commit again.
all changes updated
Really sorry for all the silly mistakes :/
No problem with that... you guys are in the learning stage right now, so obviously you'll make mistakes in the beginning. Just try not to repeat a mistake.
* Implementation of activation functions
*/
#include "activation_functions.hpp"
template<class T>
Remove this line; we do not require templates for these functions.
std::vector<double> leakyReLU(const std::vector<double> &x)
{
  std::vector<double> y(x.size());
  double alpha = 0.1;
Add alpha as a parameter instead, with default value 0.1; make the change in the hpp file as well.
Suggested change: replace
std::vector<double> leakyReLU(const std::vector<double> &x)
{
  std::vector<double> y(x.size());
  double alpha = 0.1;
with
std::vector<double> leakyReLU(const std::vector<double> &x, double alpha = 0.1)
{
  std::vector<double> y(x.size());
done
std::vector<double> y(x.size());
for (int i = 0; i < x.size(); i++)
{
  y[i] = 1 / (1 + exp(-x[i]));
Use std::exp instead of exp.
A few more changes
#include "activation_functions.hpp"
template<class T>
// sigmoid
std::vector<double> sigmoid(const std::vector<double> &x)
Instead of initialising a new vector and returning it, change the return type to void and modify the vector x in place wherever possible.
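A sketch of the in-place variant being requested (the non-const reference signature is an assumption based on this comment):

```cpp
#include <cmath>
#include <vector>

// In-place sigmoid: overwrite each element rather than allocating a new vector
void sigmoid(std::vector<double> &x) {
    for (double &v : x) {
        v = 1.0 / (1.0 + std::exp(-v));
    }
}
```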
done
std::vector<double> y(x.size());
for (int i = 0; i < x.size(); i++)
{
  y[i] = atan(x[i]);
Use std::atan instead of atan, as told earlier.
You don't need to mention what the function returns in the doc comments if the function returns void. Rest LGTM.
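For example, a doc comment for the in-place (void) version might only need @brief and @param (a sketch, mirroring the earlier sigmoid example):

```cpp
/**
 * @brief Applies sigmoid(x) to every element of x in place
 * @param x: Vector whose elements are replaced by their sigmoid values
 */
void sigmoid(std::vector<double> &x);
```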
Okay, done
I'm unable to resolve the build issue. How do I fix it?
Lgtm
We're working on it... will let you know soon
Resolved issues #92 and #102 (doc added)
Doc added for OneHotEncoder