Hello,
I am trying to use the HDR functionality of v2e. It seems that saving frame files as PNG loses the benefit of HDR. In "Stage 2/3: turning npy frame files to png" all input frames are converted to PNG, which quantizes the frame values to integers. This creates a problem: with an event threshold of 0.3, no events are output until the quantized input value steps up by an integer, at which point the change in the natural log crosses the threshold several times and, for example, three events are emitted at once. My simple workaround was to use the .npy frame files instead. Attached is a plot of accumulated events using .npy frames vs. .png frames for a single pixel. The ASSET signal is calculated by taking the log of the input frame signal, subtracting the log of the first frame, and then dividing by the event threshold to get the expected cumulative event count. With the current implementation using PNG frames we see clumping of events. When .npy frame files are used in stage 2/3 of v2e, the output matches what is expected. Please let me know if this is not a bug but rather user error. Thanks!
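To make the quantization effect concrete, here is a minimal, self-contained sketch (not v2e code; the intensity ramp, frame count, and threshold of 0.3 are illustrative assumptions) that computes the expected cumulative event count for a single pixel from float frame values and from the same values rounded to integers, as an 8-bit PNG conversion would do:

```python
import numpy as np

threshold = 0.3  # assumed event threshold, matching the example above

# Synthetic HDR intensity ramp for one pixel (values chosen for illustration).
frames_float = np.linspace(1.0, 5.0, 200)

def cumulative_events(signal, first, theta):
    """Expected cumulative event count: (log I_t - log I_0) / threshold."""
    return (np.log(signal) - np.log(first)) / theta

smooth = cumulative_events(frames_float, frames_float[0], threshold)

# Simulate what happens when the same frames are written as integer-valued
# PNGs: the pixel value is rounded, so the log signal moves in coarse steps.
frames_png = np.round(frames_float)
clumped = cumulative_events(frames_png, frames_png[0], threshold)

print(np.diff(np.floor(smooth)))   # events fire one at a time
print(np.diff(np.floor(clumped)))  # several events fire at once on each integer step
```

With the float ramp the event count increases smoothly, while with the rounded values the count jumps by two or more events whenever the integer pixel value increments, which is exactly the clumping visible in the attached plot.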
Sorry for the very late reply. I guess you are OK with this. v2e supports HDR input for synthetic input frames that can be generated as float 2D arrays. I have used this to make HDR synthetic input for small moving particles, for example. But you are correct that the 8-bit pixel values written to PNG here lose the HDR range, so it sucks for HDR input. I have not looked at native image representations that are HDR; do you know what they are? Can you even get HDR videos to play with from YouTube or other places?
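For reference, a minimal sketch of what "float 2D array" synthetic HDR input means in practice: frames generated directly in NumPy and saved as .npy keep their full float dynamic range, which an 8-bit PNG would quantize away. The frame size, intensity values, and file naming below are illustrative assumptions, not v2e conventions:

```python
import numpy as np

height, width, n_frames = 64, 64, 100

for i in range(n_frames):
    # Dim background at intensity 1.0 (arbitrary linear units).
    frame = np.full((height, width), 1.0, dtype=np.float64)
    # A small bright "particle" moving across the frame, ~1000x brighter
    # than the background -- far beyond what 8-bit values can represent.
    x = int(i * width / n_frames)
    frame[height // 2, x] = 1000.0
    np.save(f"frame_{i:04d}.npy", frame)  # float values preserved exactly
```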