What is affected by this bug?
X series NI cards with 4 counters are required because the clock outputs for scanning and counting are generated as finite pulse trains, and generating a finite pulse train occupies two counters.
When does this occur?
For any NIDAQ counting or scanning operation.
Where on the platform does it happen?
How do we replicate the issue?
Expected behavior (i.e. solution)
Change the clock outputs to be continuous, and give the analog output and counter input tasks that depend on the clock timing finite timing instead. This would allow all M series cards to be used. I was going to rewrite the nidaq hardware file accordingly, but I soon learnt that the number of counters was too tied to the confocal logic to make the effort worth the pay-off.
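The proposed role swap amounts to configuring the implicit timing of the two counter tasks the other way around. A minimal sketch, assuming the same ctypes-based NI-DAQmx style as the example below; the `dll` handle and `CHK` wrapper are stand-ins (mocked here so the snippet runs without NI hardware), and the constant values are the standard ones from NIDAQmx.h:

```python
import ctypes
from unittest import mock

# Standard NI-DAQmx sample-mode constants (values as defined in NIDAQmx.h)
DAQmx_Val_ContSamps = 10123
DAQmx_Val_FiniteSamps = 10178

# Stand-in for the NI-DAQmx shared library; real code would load nicaiu.dll.
dll = mock.Mock()
dll.DAQmxCfgImplicitTiming.return_value = 0

def CHK(err):
    """Minimal error check, as in the Stuttgart code."""
    assert err == 0

co_task = ctypes.c_ulong()  # clock (counter output) task
ci_task = ctypes.c_ulong()  # photon counting (counter input) task
N = 512                     # pixels in one scan line

# The clock runs CONTINUOUSLY, so no second helper counter is consumed
# generating a finite pulse train:
CHK(dll.DAQmxCfgImplicitTiming(co_task, DAQmx_Val_ContSamps,
                               ctypes.c_ulonglong(N)))
# The task slaved to the clock is FINITE instead (N+1 gates, as in the
# example code, so the first partial gate can be discarded):
CHK(dll.DAQmxCfgImplicitTiming(ci_task, DAQmx_Val_FiniteSamps,
                               ctypes.c_ulonglong(N + 1)))
```

The analog output would likewise get finite sample-clock timing (DAQmxCfgSampClkTiming with DAQmx_Val_FiniteSamps) driven by the continuous clock's internal output.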
Other Comments
I understand this is probably not a major priority, but if the omniscan project is a major rethink of the confocal scanning behaviour then this would be a good time to implement what I think is an obviously good idea.
Here is an example from the Stuttgart code that uses 2 counters for all scanning/counting - it works well.
# Reconstructed formatting of the original listing. Assumes the enclosing
# nidaq module provides: CHK (DAQmx error check), dll (the NI-DAQmx library
# loaded via ctypes), the DAQmx_Val_* constants, the pointer types
# c_uint32_p / c_float64_p, and the DOTask digital-output helper.
import ctypes
import time

import numpy


class CounterBoard:
    """nidaq Counter board."""
    _CountAverageLength = 10
    _MaxCounts = 1e7
    _DefaultCountLength = 1000
    _RWTimeout = 1.0

    def __init__(self, CounterIn, CounterOut, TickSource, SettlingTime=2e-3, CountTime=8e-3):
        self._CODevice = CounterOut
        self._CIDevice = CounterIn
        self._PulseTrain = self._CODevice + 'InternalOutput'  # counter bins are triggered by CTR1
        self._TickSource = TickSource  # the signal: ticks coming from the APDs
        # nidaq Tasks
        self.COTask = ctypes.c_ulong()
        self.CITask = ctypes.c_ulong()
        CHK(dll.DAQmxCreateTask('', ctypes.byref(self.COTask)))
        CHK(dll.DAQmxCreateTask('', ctypes.byref(self.CITask)))
        f = 1. / (CountTime + SettlingTime)
        DutyCycle = CountTime * f
        # ctr1 generates a continuous square wave with given duty cycle. This serves
        # simultaneously as sampling clock for AO (update DAC at falling edge), and as
        # gate for counter (count between rising and falling edge)
        CHK(dll.DAQmxCreateCOPulseChanFreq(self.COTask,
                                           self._CODevice, '',
                                           DAQmx_Val_Hz, DAQmx_Val_Low, ctypes.c_double(0),
                                           ctypes.c_double(f),
                                           ctypes.c_double(DutyCycle)))
        # ctr0 is used to count photons. Used to count ticks in N+1 gates
        CHK(dll.DAQmxCreateCIPulseWidthChan(self.CITask,
                                            self._CIDevice, '',
                                            ctypes.c_double(0),
                                            ctypes.c_double(self._MaxCounts * DutyCycle / f),
                                            DAQmx_Val_Ticks, DAQmx_Val_Rising, ''))
        CHK(dll.DAQmxSetCIPulseWidthTerm(self.CITask, self._CIDevice, self._PulseTrain))
        CHK(dll.DAQmxSetCICtrTimebaseSrc(self.CITask, self._CIDevice, self._TickSource))
        self._SettlingTime = None
        self._CountTime = None
        self._DutyCycle = None
        self._f = None
        self._CountSamples = self._DefaultCountLength
        self.setTiming(SettlingTime, CountTime)
        self._CINread = ctypes.c_int32()
        self.setCountLength(self._DefaultCountLength)

    def setCountLength(self, N, BufferLength=None, SampleLength=None):
        """Set the number of counter samples / length of pulse train.

        If N is finite, a finite pulse train of length N is generated and N count
        samples are acquired. If N is infinity, an infinite pulse train is generated.
        BufferLength and SampleLength specify the length of the buffer and the length
        of a sample that is read in one read operation. In this case, always the
        most recent samples are read.
        """
        if N < numpy.inf:
            CHK(dll.DAQmxCfgImplicitTiming(self.COTask, DAQmx_Val_ContSamps, ctypes.c_ulonglong(N)))
            CHK(dll.DAQmxCfgImplicitTiming(self.CITask, DAQmx_Val_FiniteSamps, ctypes.c_ulonglong(N)))
            # read samples from beginning of acquisition, do not overwrite
            CHK(dll.DAQmxSetReadRelativeTo(self.CITask, DAQmx_Val_CurrReadPos))
            CHK(dll.DAQmxSetReadOffset(self.CITask, 0))
            CHK(dll.DAQmxSetReadOverWrite(self.CITask, DAQmx_Val_DoNotOverwriteUnreadSamps))
            self._CountSamples = N
            self._TaskTimeout = 4 * N / self._f
        else:
            CHK(dll.DAQmxCfgImplicitTiming(self.COTask, DAQmx_Val_ContSamps, ctypes.c_ulonglong(BufferLength)))
            CHK(dll.DAQmxCfgImplicitTiming(self.CITask, DAQmx_Val_ContSamps, ctypes.c_ulonglong(BufferLength)))
            # read most recent samples, overwrite buffer
            CHK(dll.DAQmxSetReadRelativeTo(self.CITask, DAQmx_Val_MostRecentSamp))
            CHK(dll.DAQmxSetReadOffset(self.CITask, -SampleLength))
            CHK(dll.DAQmxSetReadOverWrite(self.CITask, DAQmx_Val_OverwriteUnreadSamps))
            self._CountSamples = SampleLength
        self._CountLength = N
        self._CIData = numpy.empty((self._CountSamples,), dtype=numpy.uint32)

    def CountLength(self):
        return self._CountLength

    def setTiming(self, SettlingTime, CountTime):
        if SettlingTime != self._SettlingTime or CountTime != self._CountTime:
            f = 1. / (CountTime + SettlingTime)
            DutyCycle = CountTime * f
            CHK(dll.DAQmxSetCOPulseFreq(self.COTask, self._CODevice, ctypes.c_double(f)))
            CHK(dll.DAQmxSetCOPulseDutyCyc(self.COTask, self._CODevice, ctypes.c_double(DutyCycle)))
            self._SettlingTime = SettlingTime
            self._CountTime = CountTime
            self._f = f
            self._DutyCycle = DutyCycle
            if self._CountSamples is not None:
                self._TaskTimeout = 4 * self._CountSamples / self._f

    def getTiming(self):
        return self._SettlingTime, self._CountTime

    def StartCO(self):
        CHK(dll.DAQmxStartTask(self.COTask))

    def StartCI(self):
        CHK(dll.DAQmxStartTask(self.CITask))

    def StopCO(self):
        CHK(dll.DAQmxStopTask(self.COTask))

    def StopCI(self):
        CHK(dll.DAQmxStopTask(self.CITask))

    def ReadCI(self):
        CHK(dll.DAQmxReadCounterU32(self.CITask,
                                    ctypes.c_int32(self._CountSamples),
                                    ctypes.c_double(self._RWTimeout),
                                    self._CIData.ctypes.data_as(c_uint32_p),
                                    ctypes.c_uint32(self._CountSamples),
                                    ctypes.byref(self._CINread), None))
        return self._CIData

    def WaitCI(self):
        CHK(dll.DAQmxWaitUntilTaskDone(self.CITask, ctypes.c_double(self._TaskTimeout)))

    def startCounter(self, SettlingTime, CountTime):
        if self.CountLength() != numpy.inf:
            self.setCountLength(numpy.inf, max(1000, self._CountAverageLength), self._CountAverageLength)
        self.setTiming(SettlingTime, CountTime)
        self.StartCI()
        self.StartCO()
        time.sleep(self._CountSamples / self._f)

    def Count(self):
        """Return a single count."""
        return self.ReadCI().mean() * self._f / self._DutyCycle

    def stopCounter(self):
        self.StopCI()
        self.StopCO()

    # def __del__(self):
    #     CHK( dll.DAQmxClearTask(self.CITask) )
    #     CHK( dll.DAQmxClearTask(self.COTask) )


class MultiBoard(CounterBoard):
    """nidaq Multifunction board."""
    _DefaultAOLength = 1000

    def __init__(self, CounterIn, CounterOut, TickSource, AOChannels, v_range=(0., 10.)):
        CounterBoard.__init__(self, CounterIn, CounterOut, TickSource)
        self._AODevice = AOChannels
        self.AOTask = ctypes.c_ulong()
        CHK(dll.DAQmxCreateTask('', ctypes.byref(self.AOTask)))
        CHK(dll.DAQmxCreateAOVoltageChan(self.AOTask,
                                         self._AODevice, '',
                                         ctypes.c_double(v_range[0]),
                                         ctypes.c_double(v_range[1]),
                                         DAQmx_Val_Volts, ''))
        self._AONwritten = ctypes.c_int32()
        self.setAOLength(self._DefaultAOLength)

    def setAOLength(self, N):
        if N == 1:
            CHK(dll.DAQmxSetSampTimingType(self.AOTask, DAQmx_Val_OnDemand))
        else:
            CHK(dll.DAQmxSetSampTimingType(self.AOTask, DAQmx_Val_SampClk))
            if N < numpy.inf:
                CHK(dll.DAQmxCfgSampClkTiming(self.AOTask,
                                              self._PulseTrain,
                                              ctypes.c_double(self._f),
                                              DAQmx_Val_Falling, DAQmx_Val_FiniteSamps,
                                              ctypes.c_ulonglong(N)))
        self._AOLength = N

    def AOLength(self):
        return self._AOLength

    def StartAO(self):
        CHK(dll.DAQmxStartTask(self.AOTask))

    def StopAO(self):
        CHK(dll.DAQmxStopTask(self.AOTask))

    def WriteAO(self, data, start=False):
        CHK(dll.DAQmxWriteAnalogF64(self.AOTask,
                                    ctypes.c_int32(self._AOLength),
                                    start,
                                    ctypes.c_double(self._RWTimeout),
                                    DAQmx_Val_GroupByChannel,
                                    data.ctypes.data_as(c_float64_p),
                                    ctypes.byref(self._AONwritten), None))
        return self._AONwritten.value


class AOBoard:
    """nidaq analog output board."""

    def __init__(self, AOChannels):
        self._AODevice = AOChannels
        self.Task = ctypes.c_ulong()
        CHK(dll.DAQmxCreateTask('', ctypes.byref(self.Task)))
        CHK(dll.DAQmxCreateAOVoltageChan(self.Task,
                                         self._AODevice, '',
                                         ctypes.c_double(0.),
                                         ctypes.c_double(10.),
                                         DAQmx_Val_Volts, ''))
        CHK(dll.DAQmxSetSampTimingType(self.Task, DAQmx_Val_OnDemand))
        self._Nwritten = ctypes.c_int32()

    def Write(self, data):
        CHK(dll.DAQmxWriteAnalogF64(self.Task,
                                    ctypes.c_long(1),
                                    1,
                                    ctypes.c_double(1.0),
                                    DAQmx_Val_GroupByChannel,
                                    data.ctypes.data_as(c_float64_p),
                                    ctypes.byref(self._Nwritten),
                                    None))

    def Start(self):
        CHK(dll.DAQmxStartTask(self.Task))

    def Wait(self, timeout):
        CHK(dll.DAQmxWaitUntilTaskDone(self.Task, ctypes.c_double(timeout)))

    def Stop(self):
        CHK(dll.DAQmxStopTask(self.Task))

    def __del__(self):
        CHK(dll.DAQmxClearTask(self.Task))


class Scanner(MultiBoard):

    def __init__(self, CounterIn, CounterOut, TickSource, AOChannels,
                 x_range, y_range, z_range, v_range=(0., 10.),
                 invert_x=False, invert_y=False, invert_z=False, swap_xy=False,
                 TriggerChannels=None):
        MultiBoard.__init__(self, CounterIn=CounterIn,
                            CounterOut=CounterOut,
                            TickSource=TickSource,
                            AOChannels=AOChannels,
                            v_range=v_range)
        if TriggerChannels is not None:
            self._trigger_task = DOTask(TriggerChannels)
        self.xRange = x_range
        self.yRange = y_range
        self.zRange = z_range
        self.vRange = v_range
        self.x = 0.0
        self.y = 0.0
        self.z = 0.0
        self.invert_x = invert_x
        self.invert_y = invert_y
        self.invert_z = invert_z
        self.swap_xy = swap_xy

    def getXRange(self):
        return self.xRange

    def getYRange(self):
        return self.yRange

    def getZRange(self):
        return self.zRange

    def setx(self, x):
        """Move stage to x, y, z."""
        if self.AOLength() != 1:
            self.setAOLength(1)
        self.WriteAO(self.PosToVolt((x, self.y, self.z)), start=True)
        self.x = x

    def sety(self, y):
        """Move stage to x, y, z."""
        if self.AOLength() != 1:
            self.setAOLength(1)
        self.WriteAO(self.PosToVolt((self.x, y, self.z)), start=True)
        self.y = y

    def setz(self, z):
        """Move stage to x, y, z."""
        if self.AOLength() != 1:
            self.setAOLength(1)
        self.WriteAO(self.PosToVolt((self.x, self.y, z)), start=True)
        self.z = z

    def scanLine(self, Line, SecondsPerPoint, return_speed=None):
        """Perform a line scan.

        If return_speed is not None, return to the beginning of the line with a
        speed 'return_speed' times faster than the speed currently set.
        """
        self.setTiming(SecondsPerPoint * 0.1, SecondsPerPoint * 0.9)
        N = Line.shape[1]
        # set buffers of nidaq Tasks, data read buffer and timeout if needed
        if self.AOLength() != N:
            self.setAOLength(N)
        if self.CountLength() != N + 1:
            self.setCountLength(N + 1)
        # send line start trigger
        if hasattr(self, '_trigger_task'):
            self._trigger_task.Write(numpy.array((1, 0), dtype=numpy.uint8))
            time.sleep(0.001)
            self._trigger_task.Write(numpy.array((0, 0), dtype=numpy.uint8))
        # acquire line
        self.WriteAO(self.PosToVolt(Line))
        self.StartAO()
        self.StartCI()
        self.StartCO()
        self.WaitCI()
        # send line stop trigger
        if hasattr(self, '_trigger_task'):
            self._trigger_task.Write(numpy.array((0, 1), dtype=numpy.uint8))
            time.sleep(0.001)
            self._trigger_task.Write(numpy.array((0, 0), dtype=numpy.uint8))
        data = self.ReadCI()
        self.StopAO()
        self.StopCI()
        self.StopCO()
        if return_speed is not None:
            self.setTiming(SecondsPerPoint * 0.5 / return_speed, SecondsPerPoint * 0.5 / return_speed)
            self.WriteAO(self.PosToVolt(Line[:, ::-1]))
            self.StartAO()
            self.StartCI()
            self.StartCO()
            self.WaitCI()
            self.StopAO()
            self.StopCI()
            self.StopCO()
            self.setTiming(SecondsPerPoint * 0.1, SecondsPerPoint * 0.9)
        return data[1:] * self._f / self._DutyCycle

    def setPosition(self, x, y, z):
        """Move stage to x, y, z."""
        if self.AOLength() != 1:
            self.setAOLength(1)
        self.WriteAO(self.PosToVolt((x, y, z)), start=True)
        self.x, self.y, self.z = x, y, z

    def PosToVolt(self, r):
        x = self.xRange
        y = self.yRange
        z = self.zRange
        v = self.vRange
        v0 = v[0]
        dv = v[1] - v[0]
        if self.invert_x:
            vx = v0 + (x[1] - r[0]) / (x[1] - x[0]) * dv
        else:
            vx = v0 + (r[0] - x[0]) / (x[1] - x[0]) * dv
        if self.invert_y:
            vy = v0 + (y[1] - r[1]) / (y[1] - y[0]) * dv
        else:
            vy = v0 + (r[1] - y[0]) / (y[1] - y[0]) * dv
        if self.invert_z:
            vz = v0 + (z[1] - r[2]) / (z[1] - z[0]) * dv
        else:
            vz = v0 + (r[2] - z[0]) / (z[1] - z[0]) * dv
        if self.swap_xy:
            vt = vx
            vx = vy
            vy = vt
        return numpy.vstack((vx, vy, vz))
Thank you very much for your input, @michaelb1886.
The way scanning is currently realized using NI cards is indeed not the optimal solution. This has already been discussed and is on our agenda for omniscan. In principle it should not matter HOW exactly the hardware module performs the scan, as long as it complies with the interface to the controlling logic module. The problem is that the current scanner logic explicitly handles hardware-specific implementation details, which goes against the qudi idea of abstracted hardware.
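To illustrate the abstraction point with a hypothetical sketch (this is not qudi's actual interface): the logic module would only ever talk to something like the following, and whether the hardware spends two counters or four on a scan stays entirely inside the hardware module.

```python
from abc import ABC, abstractmethod

import numpy

class ScannerInterface(ABC):
    """Hypothetical minimal scanner interface; qudi's real one differs."""

    @abstractmethod
    def set_position(self, x, y, z):
        """Move the scanner to an absolute position."""

    @abstractmethod
    def scan_line(self, line, seconds_per_point):
        """Scan along `line` (3xN array) and return N count-rate samples."""

class DummyScanner(ScannerInterface):
    """Trivial implementation, e.g. for testing the logic without hardware."""

    def set_position(self, x, y, z):
        self.pos = (x, y, z)

    def scan_line(self, line, seconds_per_point):
        # a real module would drive the AO/counter tasks here
        return numpy.zeros(line.shape[1])

s = DummyScanner()
s.set_position(0.0, 0.0, 0.0)
counts = s.scan_line(numpy.zeros((3, 16)), 1e-3)
```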