diff --git a/site/en/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb b/site/en/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
index 78d4eebadb..2345c91ff7 100644
--- a/site/en/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
+++ b/site/en/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
@@ -366,7 +366,7 @@
       "source": [
         "## Evaluation: STS (Semantic Textual Similarity) Benchmark\n",
         "\n",
-        "The [**STS Benchmark**](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) provides an intristic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. [Pearson correlation](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is then used to evaluate the quality of the machine similarity scores against human judgements."
+        "The [**STS Benchmark**](https://ixa2.si.ehu.eus/stswiki/stswiki.html#STS_benchmark) provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. [Pearson correlation](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is then used to evaluate the quality of the machine similarity scores against human judgements."
       ]
     },
     {