diff --git a/docs/core/install/templates.md b/docs/core/install/templates.md
index e54fa00689453..f54bf034ab500 100644
--- a/docs/core/install/templates.md
+++ b/docs/core/install/templates.md
@@ -208,23 +208,23 @@ For example, the .NET 7 SDK includes templates for a console app targeting .NET
    dotnet new install Microsoft.DotNet.Common.ProjectTemplates.3.1
    ```
 
-01. Try creating the app a second time.
+1. Try creating the app a second time.
 
-    ```dotnetcli
-    dotnet new console --framework netcoreapp3.1
-    ```
+   ```dotnetcli
+   dotnet new console --framework netcoreapp3.1
+   ```
 
-    And you should see a message indicating the project was created.
+   And you should see a message indicating the project was created.
 
-    > The template "Console Application" was created successfully.
-    >
-    > Processing post-creation actions...
-    > Running 'dotnet restore' on path-to-project-file.csproj...
-    > Determining projects to restore...
-    > Restore completed in 1.05 sec for path-to-project-file.csproj.
-    >
-    > Restore succeeded.
+   > The template "Console Application" was created successfully.
+   >
+   > Processing post-creation actions...
+   > Running 'dotnet restore' on path-to-project-file.csproj...
+   > Determining projects to restore...
+   > Restore completed in 1.05 sec for path-to-project-file.csproj.
+   >
+   > Restore succeeded.
 
 ## See also
 
diff --git a/docs/machine-learning/tutorials/object-detection-model-builder.md b/docs/machine-learning/tutorials/object-detection-model-builder.md
index ca6a67dfa8d8e..3ff142410b6f9 100644
--- a/docs/machine-learning/tutorials/object-detection-model-builder.md
+++ b/docs/machine-learning/tutorials/object-detection-model-builder.md
@@ -261,33 +261,33 @@ When you add a web API to your solution, you're prompted to name the project.
 
 1. If successful, the output should look similar to the following text.
 
-    ```powershell
-    boxes                                       labels scores       boundingBoxes
-    -----                                       ------ ------       -------------
-    {339.97797, 154.43184, 472.6338, 245.0796}  {1}    {0.99273646} {}
-    ```
+   ```powershell
+   boxes                                       labels scores       boundingBoxes
+   -----                                       ------ ------       -------------
+   {339.97797, 154.43184, 472.6338, 245.0796}  {1}    {0.99273646} {}
+   ```
 
-    - The `boxes` column gives the bounding box coordinates of the object that was detected. The values here belong to the left, top, right, and bottom coordinates respectively.
-    - The `labels` are the index of the predicted labels. In this case, the value 1 is a stop sign.
-    - The `scores` defines how confident the model is that the bounding box belongs to that label.
+   - The `boxes` column gives the bounding box coordinates of the object that was detected. The values here belong to the left, top, right, and bottom coordinates respectively.
+   - The `labels` are the index of the predicted labels. In this case, the value 1 is a stop sign.
+   - The `scores` defines how confident the model is that the bounding box belongs to that label.
 
-    > [!NOTE]
-    > **(Optional)** The bounding box coordinates are normalized for a width of 800 pixels and a height of 600 pixels. To scale the bounding box coordinates for your image in further post-processing, you need to:
-    >
-    > 1. Multiply the top and bottom coordinates by the original image height, and multiply the left and right coordinates by the original image width.
-    > 1. Divide the top and bottom coordinates by 600, and divide the left and right coordinates by 800.
-    >
-    > For example, given the original image dimensions,`actualImageHeight` and `actualImageWidth`, and a `ModelOutput` called `prediction`, the following code snippet shows how to scale the `BoundingBox` coordinates:
-    >
-    > ```csharp
-    > var top = originalImageHeight * prediction.Top / 600;
-    > var bottom = originalImageHeight * prediction.Bottom / 600;
-    > var left = originalImageWidth * prediction.Left / 800;
-    > var right = originalImageWidth * prediction.Right / 800;
-    > ```
-    >
-    > An image can have more than one bounding box, so the same process needs to be applied to each of the bounding boxes in the image.
+   > [!NOTE]
+   > **(Optional)** The bounding box coordinates are normalized for a width of 800 pixels and a height of 600 pixels. To scale the bounding box coordinates for your image in further post-processing, you need to:
+   >
+   > 1. Multiply the top and bottom coordinates by the original image height, and multiply the left and right coordinates by the original image width.
+   > 1. Divide the top and bottom coordinates by 600, and divide the left and right coordinates by 800.
+   >
+   > For example, given the original image dimensions,`actualImageHeight` and `actualImageWidth`, and a `ModelOutput` called `prediction`, the following code snippet shows how to scale the `BoundingBox` coordinates:
+   >
+   > ```csharp
+   > var top = originalImageHeight * prediction.Top / 600;
+   > var bottom = originalImageHeight * prediction.Bottom / 600;
+   > var left = originalImageWidth * prediction.Left / 800;
+   > var right = originalImageWidth * prediction.Right / 800;
+   > ```
+   >
+   > An image can have more than one bounding box, so the same process needs to be applied to each of the bounding boxes in the image.
 
 Congratulations! You've successfully built a machine learning model to detect stop signs in images using Model Builder. You can find the source code for this tutorial at the [dotnet/machinelearning-samples](https://github.com/dotnet/machinelearning-samples/tree/main/samples/modelbuilder/ObjectDetection_StopSigns) GitHub repository.
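As a companion to the note in the second file, here is a minimal sketch of how that per-box scaling could be applied when a prediction contains several detections. The `ScaleBoxes` helper, the flat `boxes` array layout (left, top, right, bottom per detection, matching the column order in the sample output), and the tuple return type are assumptions for illustration only; they are not part of the Model Builder generated `ModelOutput` class.

```csharp
using System.Collections.Generic;

public static class BoundingBoxScaler
{
    // Hypothetical helper: scales boxes from the 800x600 normalized frame back to
    // the original image size. Assumes `boxes` is a flat array of
    // [left, top, right, bottom] values for each detected object.
    public static List<(float Left, float Top, float Right, float Bottom)> ScaleBoxes(
        float[] boxes, float actualImageWidth, float actualImageHeight)
    {
        var scaled = new List<(float Left, float Top, float Right, float Bottom)>();
        for (int i = 0; i + 3 < boxes.Length; i += 4)
        {
            // Divide by the normalization size (800 wide, 600 high), then multiply
            // by the actual image dimensions, as described in the note.
            scaled.Add((
                Left:   boxes[i]     / 800f * actualImageWidth,
                Top:    boxes[i + 1] / 600f * actualImageHeight,
                Right:  boxes[i + 2] / 800f * actualImageWidth,
                Bottom: boxes[i + 3] / 600f * actualImageHeight));
        }
        return scaled;
    }
}
```

Because multiplication and division commute, dividing by 800/600 first and then multiplying by the actual dimensions gives the same result as the multiply-then-divide order used in the note's snippet.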