WGS processing running out of space #87

Open
gilsonmm opened this issue Dec 2, 2024 · 9 comments

gilsonmm commented Dec 2, 2024

I am still attempting to run the pipeline on WGS data, and when I get to the Mutect2 process I get an error saying the process could not write the metrics file because there is no space left on the device. On my end I have plenty of disk space where I am running the pipeline, so I am thinking it could be a space issue within the Singularity image, as this issue looks similar to mine: apptainer/singularity#1165

I attached some files below:

.nextflow.log
Mutect2_work.tar.gz (I had to remove 2 interval lists in order to upload it to GitHub)

riederd (Member) commented Dec 2, 2024

Hi,

There should be nothing that tries to write into the Singularity image; the image is read-only anyway. All paths used for generating output are bind mounted (-B) into the container. Are you sure you are not hitting a disk quota? Another possibility is that you are running out of inodes on the filesystem you are writing to (check with df -i).
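
For illustration, a minimal sketch of how one could verify that writes end up on the bind-mounted host path rather than in the image (the image file name here is just a placeholder):

```bash
# The image itself is read-only; only bind-mounted host paths are writable.
# nextNEOpi.sif is a placeholder for whatever image your run uses.
singularity exec -B /vf/users/gilsonmm/neoantigen/nextNEOpi \
    nextNEOpi.sif \
    touch /vf/users/gilsonmm/neoantigen/nextNEOpi/.write_test \
    && echo "bind-mounted path is writable"
```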

You may want to check:
/vf/users/gilsonmm/neoantigen/nextNEOpi and /tmp
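
A quick check could look like this; "No space left on device" can be reported even with free blocks if the inode table is full:

```bash
# Free space (blocks) on the filesystems in question
df -h /vf/users/gilsonmm/neoantigen/nextNEOpi /tmp
# Free inodes on the same filesystems
df -i /vf/users/gilsonmm/neoantigen/nextNEOpi /tmp
```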

gilsonmm (Author) commented Dec 2, 2024

I am certain I am not hitting a disk quota, and looking at the inodes on the filesystem, I am not close to any maximums. I am not writing to /tmp; I have the temporary files directed to /vf/users/gilsonmm/neoantigen/nextNEOpi/temporary, which does contain the temporary files that get created and is not running into a disk quota issue.
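
For reference, this is roughly how I redirect the temporary files (a sketch; --tmpDir is the parameter name I am assuming here, and the profile name may differ in your setup):

```bash
# Assumed invocation; other run options omitted
nextflow run nextNEOpi.nf -profile singularity \
    --tmpDir /vf/users/gilsonmm/neoantigen/nextNEOpi/temporary
```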

I am re-running the pipeline in case the issue was a one-time thing, but do you have any other suggestions?

Raindr0p commented Dec 5, 2024

Hi gilsonmm,

Have you solved this issue?

gilsonmm (Author) commented Dec 5, 2024

Hi,

I have not. I ran the pipeline again and ran into the same issue. I still have plenty of disk space to handle the files that are being created, both temporary and final. Attached is the newest Nextflow log file:

.nextflow.log

riederd (Member) commented Dec 5, 2024

Can you try changing into /vf/users/gilsonmm/neoantigen/nextNEOpi/work/6e/fe1289efb29a6006164e375621c8dc, then run bash .command.run and monitor the disk space and the files created in that directory, in /vf/users/gilsonmm/neoantigen/nextNEOpi/temporary, and in /tmp?
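
Something along these lines should work as a monitoring loop (the 60-second interval is just an example):

```bash
cd /vf/users/gilsonmm/neoantigen/nextNEOpi/work/6e/fe1289efb29a6006164e375621c8dc
bash .command.run &

# In a second shell: poll free blocks, free inodes, and directory sizes while the task runs
watch -n 60 '
  df -h . /vf/users/gilsonmm/neoantigen/nextNEOpi/temporary /tmp
  df -i . /vf/users/gilsonmm/neoantigen/nextNEOpi/temporary /tmp
  du -sh . /vf/users/gilsonmm/neoantigen/nextNEOpi/temporary
'
```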

gilsonmm (Author) commented Dec 6, 2024

I can do that, but with the size of my sample it takes over 12 hours to complete this step, so I won't be able to monitor the directories the entire time. Also, I do not have access to my HPC's /tmp directory; nothing should be going there.

Here is an image of my disk space and file count; it shows that I have roughly 9 TB of free space and maybe 2 million files. I really do not think this is a disk space issue.

[screenshot: disk space and file count]

gilsonmm (Author) commented

I just wanted to follow up: I am still having this issue.

riederd (Member) commented Dec 11, 2024

That's bad; I have no idea how I could reproduce this.
The error comes from Mutect2 (GATK). You might try running the command in .command.sh using a local installation of GATK 4.5.0.0 and see if you also hit the issue.
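
A rough sketch of how to replay the task (assuming .command.sh invokes gatk from PATH; /path/to/gatk-4.5.0.0 is a placeholder for your local installation):

```bash
cd /vf/users/gilsonmm/neoantigen/nextNEOpi/work/6e/fe1289efb29a6006164e375621c8dc
cat .command.sh    # inspect the exact Mutect2 command the task runs

# Put the local GATK first on PATH, then replay the task script outside the container
PATH=/path/to/gatk-4.5.0.0:$PATH bash .command.sh
```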

gilsonmm (Author) commented

I ran Mutect2 with a local installation and it worked. However, if I restart the pipeline with -resume, once it gets to that step it stops again with the same error. Is there a way I can bypass Mutect2, or does this give you a better idea of what is wrong?
