Access NVIDIA GPUs in K8s in a non-privileged container #605
This issue is stale because it has been open 90 days with no activity. This issue will be closed in 30 days unless new comments are made or the stale label is removed.

Hey @elezar - I see that you're assigned to this. Is this feasible in any way that you know of?

This issue is stale because it has been open 90 days with no activity. This issue will be closed in 30 days unless new comments are made or the stale label is removed.

Hey @elezar gentle ping :)

This issue is stale because it has been open 90 days with no activity. This issue will be closed in 30 days unless new comments are made or the stale label is removed.
Isn't this more appropriate for either the DCGM or DCGM Exporter repositories? If this refers to deploying DCGM Exporter, the DaemonSet used to deploy it is neither privileged nor does it request any GPUs. It does use node labels to schedule the DCGM Exporter pods only onto nodes that have NVIDIA GPUs.
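A minimal sketch of that scheduling approach, assuming the `nvidia.com/gpu.present` label applied by NVIDIA GPU Feature Discovery (the label key, image tag, and object names here are illustrative, not taken from the actual dcgm-exporter manifests):

```yaml
# Illustrative DaemonSet skeleton, not the real dcgm-exporter manifest.
# Pods land only on nodes that carry a GPU label; adjust the label key
# to whatever your cluster actually sets.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dcgm-exporter
spec:
  selector:
    matchLabels:
      app: dcgm-exporter
  template:
    metadata:
      labels:
        app: dcgm-exporter
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: nvidia.com/gpu.present
                    operator: In
                    values: ["true"]
      containers:
        - name: dcgm-exporter
          image: nvcr.io/nvidia/k8s/dcgm-exporter:latest   # tag is a placeholder
          securityContext:
            privileged: false        # runs unprivileged
          # note: no nvidia.com/gpu resource request here
```

Because no `nvidia.com/gpu` resource is requested, the exporter does not consume GPU capacity that application pods could otherwise claim.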
Hello - I'm trying to see if it's possible to deploy NVIDIA DCGM on K8s with the `securityContext.privileged` field set to `false` for security reasons.

I was able to get this working by setting the container's resource requests as the following:
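A minimal sketch of that approach, assuming a single `nvidia.com/gpu` request (the image and values below are illustrative, not the snippet from the original report):

```yaml
# Illustrative only: an unprivileged DCGM container that still has to
# reserve a GPU through the device plugin via nvidia.com/gpu.
apiVersion: v1
kind: Pod
metadata:
  name: dcgm
spec:
  containers:
    - name: dcgm
      image: nvcr.io/nvidia/cloud-native/dcgm:latest   # tag is a placeholder
      securityContext:
        privileged: false
      resources:
        limits:
          nvidia.com/gpu: 1   # reserves a whole GPU just for DCGM
```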
However, this is not ideal for a few reasons.

Is there any way to give the container device access without reserving GPUs via the `nvidia.com/gpu` resource request?

Thanks for any help you can provide.