
Comments (7)

sam-berning commented on September 22, 2024

I can reproduce this. I launched a g3.8x instance with bottlerocket-aws-ecs-2-nvidia-x86_64-v1.19.2-29cc92cc and followed the repro steps; docker logs <container> gives me:

GPU 0: Tesla M60 (UUID: GPU-963a4569-ebb4-d876-14a7-c3c8491f8682)
GPU 1: Tesla M60 (UUID: GPU-cf45c479-5e3d-5316-42b5-1301bc4f4f6a)
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error
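
For reference, the original repro boils down to a container that just polls nvidia-smi while the admin container is toggled on the host; presumably something along these lines, with all GPUs exposed (the two-container variant is shown further down):

docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"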

Because the instance I was using had 2 GPUs, I also tried exposing each of the GPUs to two different containers:

docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"
docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"

The result was the same for both containers, but it showed a new error message in the logs:

GPU 0: Tesla M60 (UUID: GPU-963a4569-ebb4-d876-14a7-c3c8491f8682)
Unable to determine the device handle for gpu 0000:00:1D.0: Unknown Error
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error

arnaldo2792 commented on September 22, 2024

Hi @modelbitjason, thanks for reporting this.

I just want to add some context on why the steps you followed trigger this failure.

When you enable-admin-container/disable-admin-container, a systemctl command is issued to reload containerd's configuration so that the admin container is started/stopped. Whenever this happens, systemd undoes the cgroup modifications that libnvidia-container made when it created the containers and granted them access to the GPUs. This is a known issue, and the suggested solution was to run nvidia-ctk whenever the GPUs are loaded; we already do that today in all Bottlerocket variants. There seems to be another fix in newer versions of libnvidia-container; however, when we tried to update to v1.14.X, it broke the ECS aarch64 variant. I'll ask my coworkers to give that new fix a spin and check whether the problem persists (after we figure out why the new libnvidia-container version broke the ECS aarch64 variant).
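
To make that concrete, here is a rough sketch of the trigger sequence (commands simplified; <container> stands for any GPU task container):

# A GPU container is healthy:
docker exec <container> nvidia-smi -L              # lists the GPUs
# On the host, toggle the admin container:
apiclient set host-containers.admin.enabled=true   # or enable-admin-container / disable-admin-container
# Under the hood this reloads systemd units (effectively a `systemctl daemon-reload`),
# and systemd rebuilds the containers' device cgroups without the NVIDIA device rules:
docker exec <container> nvidia-smi -L              # -> Failed to initialize NVML: Unknown Error

As I understand the upstream guidance, the nvidia-ctk step amounts to creating the /dev/char symlinks for the NVIDIA device nodes (something like nvidia-ctk system create-dev-char-symlinks --create-all); take that exact subcommand as my reading of the upstream issue, not necessarily what Bottlerocket runs.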

That said, could you please expand on what you mean by this?

If I tell ECS about the GPU, it won't let me gracefully roll a new instance on that host.

Are you having problems with task deployments? Or are you trying to "over-subscribe" the node so that you can run multiple tasks on the same host and share the one GPU?

modelbitjason commented on September 22, 2024

Thanks for the explanation @arnaldo2792 -- I saw some other tickets about cgroups but those fixes didn't help. This explains why.

re: ECS + GPUs, yes, but more specifically, I need to have the old instance hand off data to the new one and do a graceful shutdown. I only have one ECS-managed task per EC2 instance (except during rollout).

When I roll a new task version, the newly started version finds the existing container, asks for some state, then tells it to drain + eventually shut down. This way I can be sure the new version comes up healthy before the old version on that host is removed.

This service manages some long-lived containers on the host and proxies requests to them. The old version passes control of the containers to the newly started version, but needs to stick around until all the outstanding requests it is currently proxying are complete.
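
In rough shell terms the handoff looks something like the sketch below; the /state and /drain endpoints and the OLD_HOST discovery are made-up placeholders, not our actual interface:

# Hypothetical startup handoff run by the new version:
curl -sf "http://${OLD_HOST}:8080/state" -o /tmp/handoff-state.json   # copy the current state over
curl -sf -X POST "http://${OLD_HOST}:8080/drain"                      # old version stops taking new work
# The old version keeps proxying its in-flight requests and shuts down once they finish;
# only then does the old task (and its instance) get removed.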

arnaldo2792 commented on September 22, 2024

@modelbitjason, I'm sorry for the very late response.

When I roll a new task version, the newly started version finds the existing container, asks for some state, then tells it to drain + eventually shutdown.

Just so that I have the full picture of your architecture, when/why do you use the admin container? Do you use that to check whether the old container is done draining?

modelbitjason commented on September 22, 2024

Thanks @arnaldo2792 -- These days I only use the admin container to debug. Previously it was also used to manually run docker prune to free up space.

We've had other cases where the GPU goes away, so I log in to try and see what happened. One or two times it seemed permanent and I just cycled to a new instance.

The current workaround is to have the admin container start at boot via settings. I'm just worried about other things triggering this behavior somehow.
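
Concretely (assuming I have the setting path right), that's just the admin host-container toggle applied at launch, i.e. settings.host-containers.admin.enabled = true in user data, or equivalently:

apiclient set host-containers.admin.enabled=true

so the admin container comes up on boot and nothing has to be toggled later on a host that's already running GPU tasks.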

arnaldo2792 commented on September 22, 2024

Previously it was also used to manually run docker prune to free up space.

There are options in ECS to free up space, 👍 I can give you pointers to the ones we support if you need them.

The current workaround is to have the admin container start at boot via settings.

Do you enable the admin container on boot only on instances you intend to debug? Or on all your instances, regardless of whether you will debug them?

I'm just worried about other things triggering this behavior somehow.

The only path that I've seen that triggers this behavior is the systemctl daemon-reload command, and the NVIDIA folks mentioned that moving towards CDI could help with this problem. I'll check with the ECS folks whether they plan to support CDI soon.
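
For context, with CDI the GPUs are described in a spec file and requested from the runtime by name, rather than libnvidia-container adjusting cgroups behind systemd's back. Roughly, it would look like the following; none of this is wired up in the Bottlerocket ECS variant today, and the exact flags depend on the toolkit/runtime versions:

# Generate a CDI spec for the host's GPUs (nvidia-container-toolkit):
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# Request a named CDI device; requires a CDI-aware runtime (e.g. containerd 1.7+ or Docker 25+ with CDI enabled):
docker run --rm --device nvidia.com/gpu=0 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 nvidia-smi -L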

modelbitjason commented on September 22, 2024

Previously it was also used to manually run docker prune to free up space.

There are options in ECS to free up space, 👍 I can give you pointers to the ones we support if you need them.

We already have the Docker socket mapped in, so we do it from our control plane based on disk usage. We only use ECS to run the main control plane; that container then starts other containers directly via the Docker socket.

ECS is helpful for the networking and stuff, but we don't have enough control over placement and it's sometimes too slow to start tasks. So this partial usage has been pretty good, especially for tasks that don't need to have their own ENIs.
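
The pruning itself is nothing fancy; it's essentially something like this, run by the control plane against the mounted socket (the threshold and path are placeholders):

# Placeholder sketch: prune once the Docker data directory's filesystem passes ~85% usage.
USED=$(df --output=pcent /var/lib/docker | tail -n 1 | tr -dc '0-9')
if [ "${USED:-0}" -gt 85 ]; then
  docker system prune -af --filter "until=24h"
fi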

The current workaround is to have the admin container start at boot via settings.

Do you enable the admin container on boot only on instances you intend to debug? Or on all your instances, regardless of whether you will debug them?

Well, we do it for all of them on boot since we don't know when we'll need to debug. Previously, we'd only start the admin container when needed. Our system is still pretty new and we run into weird bugs like the GPU going away, so we'll need the ability to debug for the foreseeable future.

I'm just worried about other things triggering this behavior somehow.

The only path that I've seen that triggers this behavior is the systemctl daemon-reload command, and the NVIDIA folks mentioned that moving towards CDI could help with this problem. I'll check with the ECS folks whether they plan to support CDI soon.

That's reassuring to hear! We haven't noticed any problems since having the container start at boot. We use it rarely; it's a 'break glass' measure.
