
Type Parameters for ObservationProcess #59

Open
THargreaves opened this issue Dec 5, 2024 · 10 comments

@THargreaves
Collaborator

It looks like at some point we've diverged on the interpretation of the type parameter for ObservationProcess.

As a recap, we have just the one type parameter

abstract type ObservationProcess{T} end

This is used in two places with contradictory meanings. First, it is used to determine the output container type for forward simulation:

T_dyn = eltype(LD)            # element type of the latent dynamics
T_obs = eltype(OP)            # element type of the observation process
xs = Vector{T_dyn}(undef, T)  # preallocated latent states
ys = Vector{T_obs}(undef, T)  # preallocated observations

It is also used to define the main type parameter of the SSM, which I believe corresponds to the type of the log-likelihoods generated by the SSM:

T = promote_type(eltype(OPT), eltype(LDT))  # promote the two element types
return new{T,typeof(dyn),typeof(obs)}(dyn, obs)

I feel like both of these use-cases have a place. Is it worth having two type parameters?

One for the element type (which could be some weird abstract thing, e.g. a collection of Lévy jumps, rather than just a Vector{T}), and one for the type returned by the logdensity function?

If we are doing this, the same should probably hold for the LatentDynamics. One for the state type, one for the type returned by logdensity.
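
For concreteness, a minimal sketch of what this might look like (hypothetical definitions, not the package's current ones):

abstract type LatentDynamics{XT,LT<:Real} end      # state type, logdensity type
abstract type ObservationProcess{YT,LT<:Real} end  # observation type, logdensity type

Forward simulation would then allocate xs = Vector{XT}(undef, T) and ys = Vector{YT}(undef, T), while the SSM's main parameter would come from promoting the two LT parameters.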

Alternatively, it might be overkill to store the type of the state/observation. Are there any serious drawbacks that come from sampling x1, y1 and then constructing a vector from their empirical types?

Would especially appreciate thoughts from @charlesknipp as I believe it was your desire to store the log-likelihood type.

@charlesknipp
Collaborator

charlesknipp commented Dec 26, 2024

I think you raise a very valid criticism of my idea. First of all, I agree that it doesn't make a whole lot of sense in terms of type parameterization. They both have a place, but I feel like we more commonly use the log-likelihood type for inference algorithms, so I focused on that element. Honestly, there is a world in which they coexist, but I think adding an additional parameter would just be extra clutter.

In Distributions.jl, the return type of logpdf is derived from the type of a state $x$ under some typed distribution. In essence, we can think of the pdf as a map $x \mapsto [0,1]$ whose result is a floating-point representation; if things are type consistent in Distributions.jl, the return type of logpdf/pdf should be a floating point with the same number of bits as the state type XT.

Most of this is to say that, instead of using a type parameter to determine the logpdf type, we can use type promotion given XT, since this shouldn't be something the user has to specify. I will clear this up with a nicer explanation later, but for now, those are my very disorganized thoughts.
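
For a concrete sketch of that promotion idea (logdensity_type is a hypothetical helper, not an existing function):

# derive the logpdf return type from the state type by promotion,
# rather than asking the user to specify it
logdensity_type(::Type{XT}) where {XT<:Real} = float(XT)
logdensity_type(::Type{XT}) where {XT<:AbstractArray} = float(eltype(XT))

logdensity_type(Float32)          # Float32
logdensity_type(Vector{Float64})  # Float64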

@THargreaves
Collaborator Author

I'm not sure I'm quite following what you're saying, but I agree with the general idea that it clearly gets cluttered to add loads of type parameters to handle both the log-likelihood type and the state types.

My latest thought on this is that the type parameters for the state/observation types are perhaps the overkill bit. Yes, it's a bit messier in the code, but the following is (I think; would appreciate confirmation) completely valid and type stable:

x0 = initialise(rng, model)            # sample the initial state
xs = Vector{typeof(x0)}(undef, T + 1)  # allocate using its empirical type
xs[1] = x0

or something like that.

There would therefore only be one type parameter, corresponding to the log-likelihood type or, more generally, the floating-point type that all fields in the model use (e.g. Q::Matrix{T}).
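
As a sketch, a user-defined model would then look something like this (hypothetical struct; the supertype is illustrative):

struct LinearGaussianDynamics{T<:Real} <: LatentDynamics{T}
    mu_init::Vector{T}  # initial mean
    A::Matrix{T}        # transition matrix
    Q::Matrix{T}        # process noise covariance; every field shares T
end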

If you're happy with this in terms of autodiff, I think that's likely sufficient and avoids over-complication.

@charlesknipp
Collaborator

The above is not necessarily type stable. Consider the scenario in which the initialisation is Normal(0, 1) and the state transition is Normal(A*x, 1), where we autodiff with respect to A. ForwardDiff.jl would fail here since the initialisation doesn't account for the type of A.
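
To spell out the failure mode, a stripped-down sketch (simulate is illustrative, not the package API; the noise is omitted since it doesn't affect the types):

using ForwardDiff

function simulate(A, T)
    x0 = 0.0                           # initialisation ignores the type of A
    xs = Vector{typeof(x0)}(undef, T)  # allocates a Vector{Float64}
    xs[1] = x0
    for t in 2:T
        # under ForwardDiff, A * xs[t - 1] is a Dual, and storing it in a
        # Vector{Float64} throws: no method matching Float64(::Dual)
        xs[t] = A * xs[t - 1] + 1.0
    end
    return xs[end]
end

ForwardDiff.derivative(A -> simulate(A, 5), 0.9)  # errors as described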

@THargreaves
Collaborator Author

THargreaves commented Dec 26, 2024

Ah, I think I see what you mean. I guess I was suggesting this is the user's responsibility.

They would have to define

struct MyDynamics{T}
    mu_init::Vector{T}  # initial mean
    A::Matrix{T}        # transition matrix; shares T with mu_init
end

so that the type of A is known and is guaranteed to match the initialisation.

Is this what you mean?

@charlesknipp
Collaborator

Yeah, although it would have to be the other way around: the type of A is known, and the initialisation is guaranteed to match it. In that case, we're kosher.
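
For instance, reusing the hypothetical MyDynamics above (the initialise signature here is assumed):

using Random: AbstractRNG

# draw Float64 noise and convert it to T, so the states inherit the
# type of A (e.g. Duals when differentiating with ForwardDiff)
function initialise(rng::AbstractRNG, dyn::MyDynamics{T}) where {T}
    noise = convert.(T, randn(rng, length(dyn.mu_init)))
    return dyn.mu_init .+ noise  # Vector{T}, matching A
end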

@THargreaves
Collaborator Author

Exactly, though on reflection maybe that is a tough ask in general. I think I've shared with you before that Distributions isn't always type consistent. E.g. Uniform{T} always returns Float64 when sampling regardless of T.

@charlesknipp
Collaborator

> Uniform{T} always returns Float64 when sampling regardless of T.

Well that sucks. I honestly might raise an issue.

@charlesknipp
Collaborator

I checked just to confirm and oh my goodness 🤮

julia> test = Uniform(0f0,1f0)
Uniform{Float32}(a=0.0f0, b=1.0f0)

julia> rand(test)
0.07834174250700543

julia> test = Uniform(BigFloat(0),BigFloat(1))
Uniform{BigFloat}(a=0.0, b=1.0)

julia> rand(test)
0.68051738282173801497521026249160058796405792236328125

julia> rand(test, 2)
2-element Vector{Float64}:
 0.8438465943304864
 0.821132972814556

@THargreaves
Collaborator Author

Ikr. There's actually a long history of issues on this topic. I think the latest is here:
JuliaStats/Distributions.jl#1905

I think there has been some resistance to it for historical reasons, though some progress is being made. Normal used to have this behaviour but is now type-preserving.

But most other distributions can't be relied on right now.

@THargreaves
Collaborator Author

I've been thinking of a possible third approach which, whilst even more complicated, provides all the expressivity we need without the awkward T = promote_type(eltype(OPT), eltype(LDT)) step (which doesn't generalise to different state types).

The proposal would be to have two type parameters for each of dyn and obs. The first would be the "arithmetic type": something like Float32, Float64, Dual, etc. The second would be the state/observation type, and it would be up to the user to make sure this matches the arithmetic type.

We then obtain the SSM type by promoting the two arithmetic types (or forcing them to be the same; I think this makes more sense).

With this approach, we can still preallocate memory for states/observations.
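
A rough sketch of the shape this might take (all names hypothetical):

# AT is the shared "arithmetic type"; XT/YT are the state/observation types
abstract type LatentDynamics{AT<:Real,XT} end
abstract type ObservationProcess{AT<:Real,YT} end

# forcing the two arithmetic types to agree at construction time
struct StateSpaceModel{AT,LD<:LatentDynamics{AT},OP<:ObservationProcess{AT}}
    dyn::LD
    obs::OP
end

Preallocation would then draw on XT and YT, e.g. xs = Vector{XT}(undef, T).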

Maybe overkill, but let me know what you think.
