This tool is BADASS - nicely done! Quick question #25
Thanks! Rewriting the files should work. Thank you for the offer, but I'm currently not accepting donations. If you like, you can make a pledge on Patreon. By the way, it looks like your terminal emulator is having some trouble drawing special characters. The UI should look a bit different.
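(As a rough sketch of what "rewriting the files" can look like in practice, assuming a hypothetical path /data/bigfile; the exact commands are not from this thread, just one common way to break extent sharing with snapshots:)

```bash
# Sketch only: rewrite a file so its data lands in new extents, leaving the
# old extents referenced solely by snapshots (which can then be deleted).
# /data/bigfile is a hypothetical path; adjust to the file btdu points at.
cp --reflink=never /data/bigfile /data/bigfile.new   # full copy, no shared extents
mv /data/bigfile.new /data/bigfile                   # replace the original

# Alternatively, defragmenting the file also rewrites its extents,
# with the same effect of un-sharing them from older snapshots.
btrfs filesystem defragment /data/bigfile
```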
I just came to write a post basically just like OP's, so I will just +1 this sentiment. I had 250GB of data on a 1TB disk and had been watching my free space constantly shrinking inexplicably, with hundreds of gigabytes unaccountable. While doing some work, I somehow went from 20% free to 20GB free in the space of half an hour, with the remainder.....somewhere??!?!?! KDE issuing me a disk space warning left me with an "I've been avoiding this for weeks now and I'm about to run out of space completely and this is going to be hell to fix when I lose data" panic moment.

Searching around, I too found continual recommendations of this tool on both Stack Overflow and Reddit, and eventually thought "lots of people are talking about this, maybe I should give it a look". I had a little trouble getting it to mount the right volume, and when I finally did, I learned that a 90GB file with CoW disabled was being copied, in full 90GB exclusive-data glory, into every one of my snapshots, even if I'd only changed it by a few KB. Used this tool to delete the offending few dozen 90GB files and got all my mystery disk space back. Crisis averted. Created a new subvolume for the no-CoW data so it wouldn't get snapshotted once an hour, and the problem is solved forever.

BTRFS is amazing, but if you use snapshots (isn't that kinda the whole point of it?!) the quotas are completely broken and useless, and demand disabling unless you like your machine freezing up at random.... and managing disk space without quotas and without this tool was a complete impossibility (or at least, wildly impractical). This thing should be part of the official BTRFS tools; it's not complete without it.

You made my day, man, thank you. I don't do Patreon, so I'll send you a Happy New Year and some 🤗 or 🍻 or something 😆 Jokes aside, my sincere thanks. It's not my data you saved, but my contributions to three other FOSS projects, so hopefully I've been able to pay it forward, and it will all circle back around to you one day.

Just to contribute something more concrete than my sincere appreciation: I document every part of building and operating my system (in case I ever need to rebuild it from scratch or forget something or whatever.... I like docs....), so here is the section that ended up in my build docs (it is intended for my own eyes so some censorship was required hahaha):
I hope this is helpful to someone, and most of all, I hope you have a great night tonight and this year. You really saved my hide, thank you!
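(The commenter's actual build-docs snippet isn't reproduced above. As a purely illustrative sketch of the approach the comment describes — a dedicated no-CoW subvolume so hourly snapshots of the main subvolume no longer capture the large file — with mount points, the @nocow name, and paths all being assumptions:)

```bash
# Hypothetical sketch; paths and names are illustrative, not the commenter's setup.

# Create a dedicated subvolume for data with CoW disabled, outside the
# subvolume that gets snapshotted every hour.
btrfs subvolume create /mnt/pool/@nocow

# Disable CoW for files created inside it (chattr +C on a directory only
# affects files created after the flag is set).
chattr +C /mnt/pool/@nocow

# Move the big no-CoW files in with a full copy, then remove the originals,
# so the snapshotted subvolume no longer pins their extents.
cp --reflink=never /data/vm-images/*.img /mnt/pool/@nocow/
rm /data/vm-images/*.img
```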
FWIW, I do have some experience in packaging for openSUSE, and if a publicly available openSUSE package would be something you'd like to have, I will 100% deliver it. Frankly, I don't really feel like it's necessary given that it's a single binary and so lightweight, but if you want it, say the word, and I'll make it a thing!
Thank you for the kind words! I agree that a package file provided by us wouldn't be too helpful in practice. What would be helpful, though, is to get btdu packaged in your distro. If that's something you'd be interested in doing, it will make the tool more accessible to that distro's users. I'm also happy to help with packaging issues.
Thanks a lot for the great tool! I'm having quite a similar issue to OP's, but copying a few terabytes is quite a time-consuming operation ;)
You can make btdu save a list of files on exit with …
First, thank you so much for building this app. I have been pulling my hair out (what's left of it) trying to figure out why my du -sh was showing only 4 TiB of usage while my df -lh was showing 8 TiB. Your tool showed me that about 3.47 TiB is unreachable.
I appreciate the great explanations you provided in the tool. Any idea how to clean up this unreachable part? NOTE: I am trying a defrag now to see if that works. Lifesaving tool -- there are lots of people asking similar questions on Stack Overflow, and I will link to this tool as an answer there.
Also, please add a donation link; I'd love to buy you a coffee. I'm adding this tool as a standard-issue app on my servers now.
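(For anyone hitting the same du/df mismatch: "unreachable" space in btdu is typically old extent data pinned because files were partially overwritten, and the defrag idea mentioned above is one way to release it. A hedged sketch, with a hypothetical mount point and path; note that defragmenting breaks reflink/snapshot sharing, so usage can temporarily grow while snapshots still hold the old extents:)

```bash
# Sketch only: /mnt/bigvol and /mnt/bigvol/data are hypothetical paths.

# Compare filesystem-level accounting with file-level accounting.
btrfs filesystem usage /mnt/bigvol
du -sh /mnt/bigvol

# Rewriting the affected files into fresh extents lets the old,
# partially-referenced extents be freed once nothing points at them.
btrfs filesystem defragment -r /mnt/bigvol/data
```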