fuzzer builds: .zip file too big is not caught immediately #12835

Open
jreiser opened this issue Dec 11, 2024 · 0 comments
jreiser commented Dec 11, 2024

re: https://issues.oss-fuzz.com/issues/382954464

TL;DR: out of disk space; the error was not detected by the creator of the .zip file. Workaround: increase the disk space allocation by a factor of 1.6.

The build log https://oss-fuzz-build-logs.storage.googleapis.com/log-749b825c-b93c-46fc-a1ab-acc39c836808.txt (almost 1 MB of log)
shows a failure in unzip:
===== line 3732
Starting Step #5
Step #5: Already have image (with digest): gcr.io/oss-fuzz-base/base-runner
Step #5: [/corpus/decompress_packed_file_fuzzer.zip]
Step #5: End-of-central-directory signature not found. Either this file is not
Step #5: a zipfile, or it constitutes one disk of a multi-part archive. In the
Step #5: latter case the central directory and zipfile comment will be found on
Step #5: the last disk(s) of this archive.
Step #5: unzip: cannot find zipfile directory in one of /corpus/decompress_packed_file_fuzzer.zip or
Step #5: /corpus/decompress_packed_file_fuzzer.zip.zip, and cannot find /corpus/decompress_packed_file_fuzzer.zip.ZIP, period.
Step #5: Failed to unpack the corpus for decompress_packed_file_fuzzer. This usually means that corpus backup for a particular fuzz target does not exist. If a fuzz target was added in the last 24 hours, please wait one more day. Otherwise, something is wrong with the fuzz target or the infrastructure, and corpus pruning task does not finish successfully

The problem originated at creation (writing) of the .zip file, but was detected only later, upon attempted use (reading). The underlying cause is that the disk quota was exceeded (EDQUOT in /usr/include/asm-generic/errno.h). Linux does not report EDQUOT on every write() system call, but only upon close() of the file descriptor.

It is a common programming error to omit the close() of the usual single output file, because exiting the process performs an implicit close() of all file descriptors. Unfortunately, in that case the Linux kernel's implicit close() ignores all errors, so EDQUOT is silently discarded. The fix is for each app to call close() explicitly on every output file, AND to check the return value of close(). It is a common programming error to assume that nothing could possibly go wrong on a close(). The answer is: EDQUOT or ENOSPC can.

- Workaround: increase the disk space allocation by a factor of 1.6.
- Application fix: EVERY app should close() each output file explicitly, and check for errors. An early check can be done with fsync(), which also increases data integrity.
- C library: exit() could help by calling close_range(). For efficiency, covering only the lowest dozen or so file descriptors would be enough for nearly all apps.
- Linux kernel: diagnose EDQUOT from the implicit close() at exit(). If too much of the process state has been erased to deliver a signal to the process, then log the first 5 instances per day.
