TLDR: out of disk space, not detected by the creator of the .zip file. Workaround: multiply the disk space by a factor of 1.6.
The build log https://oss-fuzz-build-logs.storage.googleapis.com/log-749b825c-b93c-46fc-a1ab-acc39c836808.txt (almost 1 MB of log)
shows a failure in unzip:
===== line 3732
Starting Step #5
Step #5: Already have image (with digest): gcr.io/oss-fuzz-base/base-runner
Step #5: [/corpus/decompress_packed_file_fuzzer.zip]
Step #5: End-of-central-directory signature not found. Either this file is not
Step #5: a zipfile, or it constitutes one disk of a multi-part archive. In the
Step #5: latter case the central directory and zipfile comment will be found on
Step #5: the last disk(s) of this archive.
Step #5: unzip: cannot find zipfile directory in one of /corpus/decompress_packed_file_fuzzer.zip or
Step #5: /corpus/decompress_packed_file_fuzzer.zip.zip, and cannot find /corpus/decompress_packed_file_fuzzer.zip.ZIP, period.
Step #5: Failed to unpack the corpus for decompress_packed_file_fuzzer. This usually means that corpus backup for a particular fuzz target does not exist. If a fuzz target was added in the last 24 hours, please wait one more day. Otherwise, something is wrong with the fuzz target or the infrastructure, and corpus pruning task does not finish successfully
The problem originated when the .zip file was created (written), but was detected only later, upon attempted use (reading). The underlying cause is that the disk quota was exceeded (EDQUOT in /usr/include/asm-generic/errno.h). Linux does not necessarily report EDQUOT on the write() system call itself; the error may surface only upon close() of the file descriptor. It is a common programming error to omit the close() of the usual single output file, because exiting the process performs an implicit close() of all file descriptors. Unfortunately, in that case the Linux kernel's implicit close() ignores all errors, so EDQUOT is silently lost.

The fix is for each app to call close() explicitly on every output file, AND check the return value of close(). It is a common programming error to believe "What could possibly go wrong on a close()?" The answer: EDQUOT or ENOSPC.
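A minimal sketch of the corrected pattern (illustrative only; the path and payload here are hypothetical, not taken from the fuzzer): a failed close() is treated exactly like a failed write(), so a deferred EDQUOT or ENOSPC is reported instead of being discarded.

```c
/* Sketch: write an output file and check close(), so that errors the
 * filesystem defers until close (EDQUOT, ENOSPC) are not lost. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/corpus/output.zip";   /* hypothetical path */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    static const char data[] = "example payload";
    if (write(fd, data, sizeof data - 1) != (ssize_t)(sizeof data - 1)) {
        perror("write");                /* short write or immediate error */
        close(fd);
        return EXIT_FAILURE;
    }

    /* The crucial part: on a quota- or space-exhausted filesystem the
     * error may be reported only here. */
    if (close(fd) != 0) {
        if (errno == EDQUOT || errno == ENOSPC)
            fprintf(stderr, "%s: out of disk space or quota: %s\n",
                    path, strerror(errno));
        else
            perror("close");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```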
The workaround is to increase the disk space allocation by a factor of 1.6.
The fix is for EVERY app to close() each output file explicitly, and check for errors. An early check can be done with fsync(), which also increases data integrity.
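For the fsync()-based early check, a small helper along these lines could be used (a sketch only; finish_output is a hypothetical name, not an API of any particular project):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Flush and close an output descriptor, returning -1 on any error.
 * fsync() surfaces deferred EDQUOT/ENOSPC/EIO before close(), and
 * also makes the data durable on disk. */
int finish_output(int fd, const char *path)
{
    if (fsync(fd) != 0) {
        fprintf(stderr, "fsync %s: %s\n", path, strerror(errno));
        close(fd);
        return -1;
    }
    if (close(fd) != 0) {               /* still check close() itself */
        fprintf(stderr, "close %s: %s\n", path, strerror(errno));
        return -1;
    }
    return 0;
}
```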
The C library could help by calling close_range() from exit(). For efficiency, closing only the lowest dozen or so file descriptors would be enough for nearly all apps.
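As an illustration of that idea only (this is not what any libc does today, and close_range() itself returns a single status rather than per-descriptor errors), a user-space approximation could be an atexit() handler that closes the lowest descriptors individually and reports the errors that the kernel's implicit close() would otherwise discard:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical sketch: close the "lowest dozen or so" descriptors at
 * exit and report any deferred error (e.g. EDQUOT, ENOSPC). */
static void close_low_fds(void)
{
    for (int fd = 3; fd < 16; fd++) {   /* skip stdin/stdout/stderr */
        if (close(fd) != 0 && errno != EBADF)
            fprintf(stderr, "exit: close(fd %d): %s\n",
                    fd, strerror(errno));
    }
}

int main(void)
{
    atexit(close_low_fds);
    /* ... normal program logic ... */
    return 0;
}
```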
The Linux kernel could help by diagnosing EDQUOT from implicit close() on exit(). If too much of the process state has been erased to deliver a signal to the process, then the kernel should log the first 5 instances per day.
re: https://issues.oss-fuzz.com/issues/382954464