[*.py] Rename "Arguments:" to "Args:" #870

Open · wants to merge 1 commit into base: master
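For context: `Args:` is the section heading the Google Python style guide prescribes for documenting parameters, while `Arguments:` is a nonstandard spelling that docstring tooling may not recognize. A minimal sketch of the target format, using a hypothetical function:

def scale(tensor, factor):
  """Scales a tensor by a constant factor.

  Args:
    tensor: The input tensor to scale.
    factor: A float multiplier.

  Returns:
    `tensor` with every element multiplied by `factor`.
  """
  return tensor * factor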
2 changes: 1 addition & 1 deletion models/official/detection/modeling/architecture/nn_ops.py
@@ -41,7 +41,7 @@ class BatchNormalization(tf.layers.BatchNormalization):
   def __init__(self, fused=False, max_shards_for_local=8, **kwargs):
     """Builds the batch normalization layer.
 
-    Arguments:
+    Args:
       fused: If `False`, use the system recommended implementation. Only support
         `False` in the current implementation.
       max_shards_for_local: The maximum number of TPU shards that should use
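A hypothetical usage sketch of the layer documented above (the import path is assumed from the file being changed; per the docstring, only `fused=False` is supported):

import tensorflow.compat.v1 as tf
from models.official.detection.modeling.architecture import nn_ops  # assumed path

inputs = tf.zeros([8, 32, 32, 64])  # [batch, height, width, channels]
# max_shards_for_local caps how many TPU shards may compute purely
# local (per-shard) batch statistics before distributed stats kick in.
bn = nn_ops.BatchNormalization(fused=False, max_shards_for_local=8)
outputs = bn(inputs, training=True)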
2 changes: 1 addition & 1 deletion models/official/efficientnet/condconv/condconv_layers.py
@@ -36,7 +36,7 @@ def get_condconv_initializer(initializer, num_experts, expert_shape):
   is correctly initialized with the given initializer before being flattened
   into the correctly shaped CondConv variable.
 
-  Arguments:
+  Args:
     initializer: The initializer to apply for each individual expert.
     num_experts: The number of experts to be initialized.
     expert_shape: The original shape of each individual expert.
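The docstring above describes wrapping an initializer so that each expert is initialized at its original shape before being flattened into the single CondConv variable. A minimal sketch of that idea, assuming the CondConv variable has shape [num_experts, prod(expert_shape)] (a hypothetical re-implementation for illustration, not the repo's code):

import numpy as np
import tensorflow.compat.v1 as tf

def per_expert_initializer(initializer, num_experts, expert_shape):
  num_params = int(np.prod(expert_shape))
  def _initializer(expected_shape, dtype=None, partition_info=None):
    del partition_info  # unused in this sketch
    assert list(expected_shape) == [num_experts, num_params]
    experts = []
    for _ in range(num_experts):
      kernel = initializer(expert_shape, dtype)         # init one expert normally
      experts.append(tf.reshape(kernel, [num_params]))  # then flatten it
    return tf.stack(experts)  # [num_experts, num_params]
  return _initializer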
2 changes: 1 addition & 1 deletion models/official/efficientnet/imagenet_input.py
@@ -137,7 +137,7 @@ def mixup(self, batch_size, alpha, images, labels):
     Mixup: Beyond Empirical Risk Minimization.
     ICLR'18, https://arxiv.org/abs/1710.09412
 
-    Arguments:
+    Args:
       batch_size: The input batch size for images and labels.
       alpha: Float that controls the strength of Mixup regularization.
       images: A batch of images of shape [batch_size, ...]
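For reference, Mixup blends pairs of examples and their labels with a coefficient drawn from Beta(alpha, alpha). A minimal NumPy sketch of the technique (the in-repo method may pair examples differently, e.g. against a reversed batch):

import numpy as np

def mixup_batch(images, labels, alpha, rng=np.random):
  # lam is the mixing coefficient; alpha controls regularization strength.
  lam = rng.beta(alpha, alpha)
  idx = rng.permutation(images.shape[0])  # random partner for each example
  mixed_images = lam * images + (1.0 - lam) * images[idx]
  mixed_labels = lam * labels + (1.0 - lam) * labels[idx]  # labels assumed one-hot
  return mixed_images, mixed_labels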
2 changes: 1 addition & 1 deletion models/official/mask_rcnn/distributed_executer.py
@@ -71,7 +71,7 @@ def build_model_parameters(self, unused_mode, unused_run_config):
   def build_mask_rcnn_estimator(self, params, run_config, mode):
     """Creates TPUEstimator/Estimator instance.
 
-    Arguments:
+    Args:
       params: A dictionary to pass to Estimator `model_fn`.
       run_config: RunConfig instance specifying distribution strategy
         configurations.
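A sketch of what creating a TPUEstimator/Estimator from `params` and `run_config` could look like (a hypothetical helper; the actual model_fn and batch-size plumbing live elsewhere in the repo):

import tensorflow.compat.v1 as tf

def build_estimator(model_fn, params, run_config, use_tpu):
  if use_tpu:
    # run_config is assumed to be a tf.estimator.tpu.RunConfig here.
    return tf.estimator.tpu.TPUEstimator(
        model_fn=model_fn,
        config=run_config,
        params=params,  # forwarded to model_fn, as the docstring notes
        train_batch_size=params['train_batch_size'])
  return tf.estimator.Estimator(
      model_fn=model_fn, config=run_config, params=params)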
2 changes: 1 addition & 1 deletion models/official/mask_rcnn/tpu_normalization.py
@@ -94,7 +94,7 @@ def cross_replica_batch_normalization(inputs,
   For detailed information of arguments and implementation, refer to:
   https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization
 
-  Arguments:
+  Args:
     inputs: Tensor input.
     training: Either a Python boolean, or a TensorFlow boolean scalar tensor
       (e.g. a placeholder). Whether to return the output in training mode
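The core idea behind cross-replica batch normalization is to aggregate batch statistics across TPU replicas rather than per core. A minimal sketch of that aggregation, assuming `num_replicas` participating cores (not the repo's implementation):

import tensorflow.compat.v1 as tf

def cross_replica_moments(x, num_replicas):
  axes = list(range(x.shape.ndims - 1))  # all axes except channels
  local_mean = tf.reduce_mean(x, axes)
  local_sq_mean = tf.reduce_mean(tf.square(x), axes)
  # cross_replica_sum adds the local statistics across all replicas,
  # so every core ends up with the same global mean and variance.
  global_mean = tf.tpu.cross_replica_sum(local_mean) / num_replicas
  global_sq = tf.tpu.cross_replica_sum(local_sq_mean) / num_replicas
  variance = global_sq - tf.square(global_mean)
  return global_mean, variance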