
OHE doc, Activation functions+doc #102

Open · wants to merge 30 commits into base: main
Conversation

@NandiniGera (Contributor) commented Feb 11, 2023

Resolved issues #92 and #102: added documentation for OneHotEncoder and the activation functions.

```cpp
#include <math.h>
// sigmoid
double sigmoid(double x) {
  return 1 / (1 + exp(-x));
```
Collaborator: use std::expl (or std::exp) instead of exp everywhere.

```cpp
}
// tan inverse
double arctan(double x) {
  return atan(x);
```
Collaborator: Use std::atan(...) instead of atan(...)
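Taken together, the two `std::` review points above could look like this minimal sketch of the scalar versions (the header choice and exact placement in the PR are assumptions):

```cpp
#include <cmath>

// sigmoid(x) = 1 / (1 + e^(-x)), with std::exp as requested in review
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// arctan activation: the inverse of the trigonometric tan(x) function
double arctan(double x) { return std::atan(x); }
```

Note that `<cmath>` replaces `<math.h>` here, which is the idiomatic C++ header and is what makes the `std::`-qualified names available.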

src/slowmokit/methods/activation_functions.hpp (outdated review thread, resolved)

Collaborator suggested change on the `## Example` code fence in the doc: open it with "```cpp" instead of a bare "```" so the snippet gets C++ syntax highlighting.


| Name | Definition | Return value |
|----------------------------------------|-----------------------------------------------|---------------|
| `oneHotEncoder(vector<T> data, nClasses)` | To encode the data into numerical values. | `vector<int>` |
Collaborator: type of nClasses?


Collaborator made the same suggested change on the second `## Example` fence: use "```cpp" instead of a bare "```".

Comment on lines 59 to 62 (blank lines in the doc):

Collaborator: Remove these empty lines.

Comment on lines 20 to 28:

```cpp
template<class T>

double sigmoid(double x);
double tanh(double x);
double ReLU(double x);
double leakyReLU(double x, double alpha);
std::vector<double> softmax(const std::vector<double> &x);
double arctan(double x);
double binaryStep(double x);
```
Collaborator: Add doc comments above each function.

Example for sigmoid:

```cpp
/**
 * @brief To calculate sigmoid(x)
 * @param x: Number whose sigmoid value is to be calculated
 * @return a double value representing sigmoid(x)
 */
```

Contributor Author: done
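The declarations above include softmax, whose body never appears in this thread. A hedged sketch of how it might look in the requested doc-comment style, using std::exp as reviewed (the max-subtraction for numerical stability is my addition, not something shown in the PR):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

/**
 * @brief To calculate softmax of a vector
 * @param x: Values whose softmax is to be calculated (assumed non-empty)
 * @return a vector of probabilities summing to 1
 */
std::vector<double> softmax(const std::vector<double> &x)
{
  // Subtracting the max keeps std::exp from overflowing on large inputs.
  double maxVal = *std::max_element(x.begin(), x.end());
  std::vector<double> y(x.size());
  double sum = 0.0;
  for (std::size_t i = 0; i < x.size(); i++)
  {
    y[i] = std::exp(x[i] - maxVal);
    sum += y[i];
  }
  for (auto &v : y)
    v /= sum;
  return y;
}
```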

docs/methods/activation_functions.md (outdated review thread, resolved)
@uttammittal02 (Collaborator) left a comment: Only a few changes required and then we're good to go.


tanh - The Tanh activation function is a hyperbolic tangent sigmoid function that has a range of -1 to 1. It is often used in deep learning models for its ability to model nonlinear boundaries.

tan⁻¹ - The inverse of tanh. The ArcTan function is a sigmoid function to model accelerating and decelerating outputs but with useful output ranges.
Collaborator: Arctan is not the inverse of tanh; it is the inverse of the ordinary trigonometric tan(x) function.


| Name | Definition | Type |
|--------------|--------------------------------------------|--------------|
| data | The data that has to be encoded is passed as the data parameter in the oneHotEncoder function. | `vector<int>` |
Collaborator: Data would be `vector<string>` and not `vector<int>`.


| Name | Definition | Return value |
|----------------------------------------|-----------------------------------------------|---------------|
| `oneHotEncoder(vector<T> data, nClasses)` | To encode the data into numerical values. | `vector<int>` |
Collaborator: The return value of this method is `vector<vector<int>>`.
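Combining the review points on this table (data as `vector<string>`, return type `vector<vector<int>>`), a hedged sketch of what the method might look like. The first-appearance label indexing is an assumption, `nClasses` is assumed to be `int`, and it must be at least the number of distinct labels:

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch only: encodes each label as a one-hot row of width nClasses.
// Column indices are assigned in order of first appearance (an assumption,
// not necessarily the PR's actual mapping).
std::vector<std::vector<int>> oneHotEncoder(const std::vector<std::string> &data,
                                            int nClasses)
{
  std::map<std::string, int> index; // label -> column
  std::vector<std::vector<int>> encoded;
  for (const auto &label : data)
  {
    if (index.find(label) == index.end())
      index[label] = (int) index.size();
    std::vector<int> row(nClasses, 0);
    row[index[label]] = 1; // set the single hot column for this label
    encoded.push_back(row);
  }
  return encoded;
}
```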

```cpp
#include "activation_functions.hpp"
template<class T>
// sigmoid
double sigmoid(double x)
```
Collaborator: You need to write these functions for a vector and not a single value.

Contributor Author:

> You need to write these functions for a vector and not a single value

I'm changing all the functions

Contributor Author:

> You need to write these functions for a vector and not a single value

all changes updated

```cpp
}
<<<<<<< HEAD
// leaky ReLU
double leakyReLU(double x, double alpha)
```
Collaborator: Set the default value of alpha to 0.1.

```cpp
  } else {
    return alpha * x;
  }
}
```
Collaborator: Add a function to convert binary to bipolar and vice versa.
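The conversion the reviewer asks for could be sketched as follows (the names binaryToBipolar and bipolarToBinary are assumptions, not the PR's final API): binary {0, 1} maps to bipolar {-1, 1} via y = 2x - 1, and back via x = (y + 1) / 2.

```cpp
// binary {0, 1} -> bipolar {-1, 1}: y = 2x - 1
double binaryToBipolar(double x) { return 2.0 * x - 1.0; }

// bipolar {-1, 1} -> binary {0, 1}: x = (y + 1) / 2
double bipolarToBinary(double y) { return (y + 1.0) / 2.0; }
```

Each function is the exact inverse of the other, so round-tripping a value returns it unchanged.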

Comment on lines 19 to 27:

```cpp
<<<<<<< HEAD
 * @param x {double x} - double value on which the function is applied.
 *
 * @param x {vector<double>} - vector containing 'double' values of x for
 * softmax activation function implementation.
 *
 * @return {double value} - double value after putting x in the functions gets
 * returned.
=======
```
Collaborator suggested change: delete these leftover conflict-marker lines (`<<<<<<< HEAD` through `=======`) along with the doc-comment block between them.

Contributor Author: @Ishwarendra sir, the changes you've asked me to make are not visible in my VS Code; those unwanted lines of code are already not there.

Contributor Author: I formatted all the source files using Git Bash since the clang-format error was there, but after committing and pushing those changes both checks are failing. How do I fix it?

src/slowmokit/methods/activation_functions.hpp (outdated review thread, resolved)
src/slowmokit/methods/activation_functions.cpp (outdated review thread, resolved)
Comment on lines 74 to 83:

```cpp
=======
// leaky ReLU
double leakyReLU(double x, double alpha) {
  if (x >= 0) {
    return x;
  } else {
    return alpha * x;
  }
}
>>>>>>> 5eebc29054fab6686e728aca29e64e1c53dd7a8c
```
Collaborator suggested change: delete this duplicated block along with the conflict markers (`=======` through `>>>>>>> 5eebc29054fab6686e728aca29e64e1c53dd7a8c`).

NandiniGera and others added 7 commits February 12, 2023 17:08
Co-authored-by: Ishwarendra Jha <75680424+Ishwarendra@users.noreply.github.com>
@Ishwarendra (Collaborator): @NandiniGera Run the formatter and commit again. "How to run formatter?" is already mentioned in README.md, point 9.

@NandiniGera (Contributor Author):

> @NandiniGera Run formatter and commit again. "How to run formatter?" is already mentioned in README.md point-9

all changes updated

@NandiniGera (Contributor Author): really sorry for all the silly mistakes :/ will try to do better next time!

@uttammittal02 (Collaborator):

> really sorry for all the silly mistakes :/ will try to do better next time!

No problem with that... you're in the learning stage right now, so obviously you'll make mistakes at the beginning; just try not to repeat a mistake.

```cpp
 * Implementation of activation functions
 */
#include "activation_functions.hpp"
template<class T>
```
Collaborator: Remove this line, we do not require templates for these functions.

Suggested change: delete `template<class T>`.

Comment on lines 90 to 93:

```cpp
std::vector<double> leakyReLU(const std::vector<double> &x)
{
  std::vector<double> y(x.size());
  double alpha = 0.1;
```
Collaborator: Add alpha as a parameter instead, with default value 0.1; make the change in the hpp file as well.

Suggested change:

```cpp
std::vector<double> leakyReLU(const std::vector<double> &x, double alpha = 0.1)
{
  std::vector<double> y(x.size());
```

Contributor Author:

> Add alpha as a parameter instead with default value 0.1, do the change in hpp file as well

done

```cpp
std::vector<double> y(x.size());
for (int i = 0; i < x.size(); i++)
{
  y[i] = 1 / (1 + exp(-x[i]));
```
Collaborator: Use std::exp instead of exp.

@uttammittal02 (Collaborator) left a comment: A few more changes.

```cpp
#include "activation_functions.hpp"
template<class T>
// sigmoid
std::vector<double> sigmoid(const std::vector<double> &x)
```
Collaborator: Instead of initialising a new vector and returning it, change the return type to void and make the changes in vector x itself wherever possible.

Contributor Author:

> Instead of initialising a new vector and returning it....change the return type to void and do the changes in vector x itself wherever possible

done

```cpp
std::vector<double> y(x.size());
for (int i = 0; i < x.size(); i++)
{
  y[i] = atan(x[i]);
```
Collaborator: Use std::atan instead of atan, as told earlier.

@ken1000minus7 (Collaborator): You don't need to mention what the function returns in the doc comments if the function returns void, rest lgtm.

@NandiniGera (Contributor Author):

> You don't need to mention what the function returns in the doc comments if the function returns void, rest lgtm

okayy done

@NandiniGera (Contributor Author): I'm unable to resolve the build issue, how do I fix it?

@uttammittal02 (Collaborator) left a comment: Lgtm.

docs/methods/activation_functions.md (outdated review thread, resolved)
@uttammittal02 (Collaborator):

> I'm unable to resolve the build issue, how do i fix it?

We're working on it... will let you know soon.

4 participants