# Configurability: Frame Averaging During Acquisition #106
Hi there Jeremy. I have a few points to make regarding your thoughts on this. Yes, people do tend to default to the 'image as fast as possible and deal with it later' method. I'm even one of those people. In the case where generating too much data is burdensome, I totally get what you mean. Imaging at 15Hz is probably fine for calcium imaging. Certainly even 10Hz is good enough depending on what you're doing. With that in mind, there are a few points to be made regarding the system (and other systems as well, as the principles should be the same on other scopes).
Please let me know if this helps at all. I really do appreciate the care with which you put together these posts. -Kevin
Hey Kevin! Thank you so much for this thorough reply about this!

> Resonant scanning speed cannot be changed. This is a result of the fact that the resonant scanner runs at a fixed frequency.

That reminds me of what Michael and Steve told me! They said that it runs at a fixed 8kHz frequency if I remember correctly... Thank you for the clarification.

> You can, however, simply use frame averaging. In this case, the system still scans at 30Hz, but averages frames before dumping them. This will reduce your noise and also your file size, and seems to be what you want to do.

I like this idea! It's along the lines of what we'd like to do. The data isn't (yet) too large for us to hold onto/use, but there are certain things that will run faster if they have to churn through less data! I talked with Austin yesterday about this, and he said he plans to continue recording at 30Hz and will likely continue not to perform frame averaging before putting the data into suite2p.

> Reducing your X spatial resolution while in resonant mode (looks like you do 512x512?) can also reduce noise by changing the dwell time per pixel... Maybe you don't actually need 512 X pixels, just as you don't need 30 frames per second. It'll help with both issues here.

Honestly, that never occurred to me! It might be worth trying out this modification to see how the images come out. I think trying it first by modifying data we already have would be interesting. It can't be too hard to rescale things to that resolution, right? May as well find out!

> In situations where you have a very dim sample and want to increase the brightness, you can certainly sum pixels post-hoc. This doesn't get around the issue of generating more data than desired, but perhaps early in the pipeline every n frames can be summed and then turned into a downsampled t-series, just like with frame averaging.

This is something that Deryn does for samples with very low SNR/very sparse recordings (something like only 20 neurons seen in the FOV!). Something we've discussed here in the lab is how to document when we decide to do frame averaging like that. Is there an average brightness or something that we should use as a cutoff? We don't want to just say, "Well, in these experiments we did averaging because it seemed like it helps." Ideally we could establish numerical cutoffs of some kind through empirical testing.
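Since part of the plan above is to try averaging on already-collected data first, here is a minimal sketch of post-hoc n-frame averaging, assuming the t-series is held in memory as a NumPy array of shape `(frames, height, width)` (the function name `average_every_n` is just for illustration):

```python
import numpy as np

def average_every_n(stack, n):
    """Average every n consecutive frames of a (frames, H, W) stack.

    Trailing frames that don't fill a complete group of n are dropped.
    The mean is computed in float and cast back to the input dtype.
    """
    usable = (stack.shape[0] // n) * n              # largest multiple of n
    grouped = stack[:usable].reshape(-1, n, *stack.shape[1:])
    return grouped.mean(axis=1).astype(stack.dtype)

# Example: a fake 30 Hz t-series downsampled to an effective 15 Hz
t_series = np.random.randint(0, 2**16, size=(100, 512, 512), dtype=np.uint16)
downsampled = average_every_n(t_series, 2)
print(downsampled.shape)  # (50, 512, 512)
```

Summing (for the dim-sample case) instead of averaging would just swap `.mean(axis=1)` for `.sum(axis=1)`, with care taken not to overflow uint16.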
Awesome, I'm glad to help. I also like the forum; it's hopefully a good place for people to ask questions and talk about options. I admit I've never used GitHub that way before.

Oh, also a gentle nudge to sign that document. I can also send it directly to Kay, but professors are often hard to get to respond to things like that.

Cheers!
I actually have the Discussions tab enabled here in case stuff like this ever gets brought up! We can move it there if you'd like. I think you're the first person other than me in the whole world who has commented here! I sent the signed form along to your Bruker email, I think; I can resend it if it got lost! It went out just after 1PM Pacific.
I talked to members of Bruker's team at SfN, and Jimmy Fong told me that all we need to do is use a

**EDIT 1/11/23:** I don't think it actually would harm the way video data is collected with the current setup, since the frame triggers are output every time, given what Kevin Mann, PhD taught us in this thread above. I also just tested in Prairie View with the oscilloscope, and the framerate is exactly the same even when doing averaging. It's all software averaging, so it has no effect on when the images are actually taken. Cool to validate.
From #137's Kameron Clayton, PhD, here are his thoughts on framerates to use:
From Dr. Ryoma Hattori in the Komiyama Lab at UCSD last year:
Renamed this issue, since you can't actually change the framerate of the scope, but you can do software averaging during acquisition, which people might want to do.
I asked about max speed and what it really means/when it'll actually matter for recordings, and here's what Kevin taught me over a couple of small communications. It would be relevant for multiplane imaging if we had a piezo for moving between planes; I'm not sure whether this influences things when using an ETL, which the new scope could use:
At the moment, we have the microscope collect data as quickly as possible. In the Ultima Investigator's case, this happens to be quite close to 30FPS.
The end result is that our data, while certainly usable and yielding reasonable traces, is quite noisy.
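As a sanity check on that ~30FPS figure: given the fixed 8kHz resonant scanner frequency mentioned earlier in this thread, a 512-line frame works out to roughly 31 frames per second. A back-of-the-envelope calculation, assuming bidirectional scanning and ignoring any flyback/overhead lines:

```python
resonant_freq_hz = 8_000   # fixed resonant scanner line frequency
lines_per_frame = 512      # Y resolution of a 512x512 frame

# Bidirectional scanning acquires one image line on each sweep
# direction, so each resonant cycle yields two lines.
lines_per_second = resonant_freq_hz * 2
frame_rate = lines_per_second / lines_per_frame
print(frame_rate)  # 31.25
```

This is why the maximum rate is "quite close to" but not exactly 30FPS, and why it cannot be raised: the resonant frequency is a mechanical property of the scanner.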
This issue is being created based on two things:

## 30 FPS Might Be Unnecessary

The reasons for performing scans at the fastest possible rate, from my memory, include:
An additional convenience of using a 30FPS rate is that the camera recording the subject's face is also recording fast enough to capture much of the subject's facial motion. Note that the camera takes an image each time the microscope does, as it's triggered by the microscope's output TTL start-of-frame triggers.
## Potential Drawbacks of Continuing this Behavior
There are multiple potential drawbacks to continuing this practice of fastest possible scan acquisitions.
### Faster scanning speeds mean noisier data
The way multi-photon scopes capture data in resonant scanning is through the rapid movement of mirrors that direct the laser light path to precise points in space with microsecond-order timing. Each pixel in an image is acquired according to the dwell time specified in Prairie View's software: the specified dwell time is how long the laser remains at a given point in the field of view.

As the laser stimulates each of these points, photons are emitted from the genetically encoded calcium indicators. These photons are received by the Photomultiplier Tube's (PMT) Gallium Arsenide Phosphide (GaAsP) surface, which is highly sensitive to light. This triggers a cascade down the PMT in which the electrons excited by the incoming photons are amplified exponentially as they are drawn down the tube by the high voltage supplied to the tube's surfaces. As these electrons are pulled through the tube, a current is generated, which is then sampled by the Data Acquisition card (DAQ). The software then turns the measurements from the card into a pixel intensity within the uint16 (unsigned 16-bit) range.
A consequence of moving as fast as possible through these points in the field of view (FOV) is that the system samples few photons from each point in space. As noted here, several kinds of noise are introduced in these types of imaging sessions. If you sample each position only briefly, you collect a smaller amount of real signal (biological fluorescence) relative to the noise.
Although temporally smoothing things via a running average can help reduce noise relative to signal, and it does indeed produce clearer-looking images, multiple people have told me it can be better practice to simply slow down the frame rate of the imaging session and acquire better SNR at experimental runtime.
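The statistics behind both options (averaging n frames post-hoc, or collecting n times the photons per frame up front) follow the same square-root law, which can be demonstrated with simulated shot noise. A hypothetical sketch, assuming photon counts per pixel are Poisson-distributed around a true rate:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 20.0      # mean photons per pixel per frame (made-up value)
n_pixels = 100_000

def snr(n_frames):
    # Average n_frames independent Poisson-noisy measurements per pixel,
    # then estimate signal-to-noise as mean / standard deviation.
    frames = rng.poisson(true_rate, size=(n_frames, n_pixels))
    averaged = frames.mean(axis=0)
    return averaged.mean() / averaged.std()

# The SNR of a Poisson signal with mean m is sqrt(m); averaging n frames
# multiplies it by sqrt(n). Going 30 Hz -> 15 Hz (n = 2) gains ~1.41x.
print(snr(1))   # ~ sqrt(20) ~ 4.47
print(snr(2))   # ~ sqrt(40) ~ 6.32
```

This is also why averaging and slower scanning are not magic: halving the frame rate buys only a ~41% SNR improvement, not a doubling.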
One scientist I briefly messaged online, Dr. Masayuki Sakamoto from Kyoto University, had this to say:
Another scientist I messaged, Dr. Nuné Martiros (who, interestingly, seemed to know Kay and Romy!), had this to say:
There are other filtering steps that some programs/people have suggested to perform on the data as well but that can be part of a different issue somewhere else.
### Faster scanning means bigger data

Simply put, the more samples you take, the more datapoints you have! The more datapoints you have, the more data you have to stuff into a file! At the moment, a single-channel raw binary from Bruker at 30Hz comes out to approximately 75GB. If we ran the scope at 15Hz, we would cut that in half!
Currently, an approximately 25-minute recording at 30Hz yields about 45k tiff images totaling about 22GB. We could end up with higher-quality imaging that totals only 11GB for a given subject's recording day! The long-term savings could be quite high if we modify this parameter and collect just what we need for imaging calcium indicators.
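Those numbers are consistent with raw uint16 frames. A quick check, assuming 512x512 single-channel frames at 2 bytes per pixel and the ~45k frames quoted above:

```python
frames = 45_000            # ~25 min at 30 Hz
height = width = 512       # per-frame resolution
bytes_per_pixel = 2        # uint16

total_bytes = frames * height * width * bytes_per_pixel
print(round(total_bytes / 1024**3, 1))  # 22.0 (GiB)
```

Since file size scales linearly with frame count, halving the frame rate (or averaging every 2 frames before writing) halves this directly, hence the 11GB figure.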
If we were to use Python and Dask to convert to H5/Zarr with lossless compression, those file sizes could be reduced to just 7-8GB per recording! Note that 2-photon data doesn't compress very well because it's inherently noisy; there is shot noise that's always present.
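The point about shot noise limiting lossless compression is easy to demonstrate with the standard library (zlib here is a stand-in for whatever compressor H5/Zarr would actually use):

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# A "clean" frame (smooth, repetitive gradient) vs. a frame dominated
# by simulated shot noise (Poisson counts stored as uint16).
clean = np.tile(np.arange(512, dtype=np.uint16), (512, 1))
noisy = rng.poisson(20, size=(512, 512)).astype(np.uint16)

ratios = {}
for name, frame in [("clean", clean), ("noisy", noisy)]:
    raw = frame.tobytes()
    ratios[name] = len(raw) / len(zlib.compress(raw, level=9))
    print(f"{name}: {ratios[name]:.1f}x compression")
```

The clean frame compresses dramatically better than the shot-noise frame, because noise is (by definition) incompressible randomness; this is why 7-8GB, not something far smaller, is a realistic floor for lossless compression of this data.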
## Configurability of Scope Frame Rate

This is something that should be configurable inside the configuration file. Something as simple as:

```json
"frame_rate": 15
```

in the configuration .json file could be how it's stored. Should we choose to do it, the implementation would take that frame rate from the file and then use:
- A `prairieview_utils` function that does this conversion
- A `prairieview_utils` function that updates the software's settings via `setstate` commands (need to ensure that this is possible, I believe it is)
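The conversion step could look something like the sketch below. The function name, config key, and fixed 30Hz base rate are illustrative assumptions only; the real `prairieview_utils` API and the `setstate` plumbing would need to be confirmed against Prairie View's scripting documentation:

```python
import json

SCOPE_FRAME_RATE_HZ = 30  # assumed fixed resonant frame rate of the scope

def frames_to_average(config_path):
    """Read the desired frame rate from the .json config and convert it
    to the number of frames to average per output frame.

    Hypothetical helper for illustration; not the actual
    prairieview_utils API.
    """
    with open(config_path) as f:
        config = json.load(f)
    target_hz = config["frame_rate"]
    if SCOPE_FRAME_RATE_HZ % target_hz != 0:
        raise ValueError(
            f"{target_hz} Hz must evenly divide {SCOPE_FRAME_RATE_HZ} Hz"
        )
    return SCOPE_FRAME_RATE_HZ // target_hz

# e.g. a config containing {"frame_rate": 15} -> average every 2 frames
```

A second function would then pass that averaging count to Prairie View via its `setstate`-style commands, which (per the edits above) should not change when frames are actually triggered, only how they are combined in software.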