
Fitting functions for spectral mismatch factor models #2242

Open
RDaxini opened this issue Oct 8, 2024 · 3 comments

Comments

@RDaxini
Contributor

RDaxini commented Oct 8, 2024

Is your feature request related to a problem? Please describe.
The spectrum.spectral_factor_* functions provide default model coefficients, but pvlib currently has no functionality for users to derive model coefficients from their own data.

Describe the solution you'd like
I'd like to implement surface fitting functions for:

spectrum.spectral_factor_pvspec()
spectrum.spectral_factor_jrc()
spectrum.spectral_factor_firstsolar()

and a polynomial fitting function for:

spectrum.spectral_factor_sapm (see the sketch below)
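To make the SAPM case concrete, here is a minimal sketch of what such a polynomial fit could look like, assuming the SAPM spectral factor is the usual fourth-order polynomial in absolute air mass (coefficients A0–A4). The function name and return format are hypothetical, just to illustrate the idea:

```python
import numpy as np

def fit_spectral_factor_sapm(airmass_absolute, mismatch):
    """Hypothetical helper: least-squares fit of the five SAPM
    spectral-factor polynomial coefficients to measured mismatch data."""
    # polyfit with deg=4 returns coefficients in increasing order (A0..A4)
    coeffs = np.polynomial.polynomial.polyfit(airmass_absolute, mismatch, deg=4)
    return dict(zip(['A0', 'A1', 'A2', 'A3', 'A4'], coeffs))
```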

Additional context
Some questions:

  1. Following on from the discussion about references and fitting functions in Add method to fit Huld model #1979 (related: What is the standard for inclusion of features in pvlib? #1898), would the fitting functions here need to reproduce the fitting methods adopted in the original publications, or would a generic fitting tool be okay?
    By a generic tool, I mean a common method such as ordinary least squares, implemented with established Python packages like scipy.optimize or statsmodels, used to fit the published model parameterisation (a rough sketch of this approach follows these questions).

  2. If the former (reproducing the published method), but the precise method is not mentioned in the reference, would communication from the author confirming the adopted method be sufficient?

  3. If a generic tool is okay, then what would a suitable reference be? A maths/stats paper corroborating the method's validity? Reputable examples (PV or non-PV?) of its application?
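As a sketch of the generic-tool option from question 1, one could wrap an existing pvlib model function and let scipy.optimize.curve_fit estimate the coefficients by nonlinear least squares. The example below uses spectral_factor_firstsolar purely for illustration; the helper name, the starting guess, and the assumption that measured mismatch, precipitable water, and absolute air mass arrays are available are all mine, and the same pattern would apply to the pvspec and jrc parameterisations:

```python
import numpy as np
from scipy.optimize import curve_fit
from pvlib import spectrum

def fit_spectral_factor_firstsolar(pw, airmass_absolute, mismatch, p0=None):
    """Hypothetical helper: fit the six First Solar model coefficients
    to measured mismatch data with (nonlinear) least squares."""
    def model(xdata, *coefficients):
        pw_, ama_ = xdata
        return spectrum.spectral_factor_firstsolar(pw_, ama_,
                                                   coefficients=coefficients)

    if p0 is None:
        p0 = [1, 0, 0, 0, 0, 0]  # arbitrary starting guess
    xdata = (np.asarray(pw, dtype=float),
             np.asarray(airmass_absolute, dtype=float))
    coeffs, _ = curve_fit(model, xdata, np.asarray(mismatch, dtype=float), p0=p0)
    return coeffs
```

curve_fit also accepts a bounds argument if coefficient constraints turn out to be needed, which ties into the reference question above.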

@kandersolar
Member

I don't think there is consensus on this topic. I'll give my 2 cents. In general, I think determining model parameter values is just as deserving of reference and validation as the models themselves are, and in pvlib we should strive for high rigor for both models and parameter estimation methods.

would the fitting functions here need to reproduce the fitting methods adopted in the original publications, or would a generic fitting tool be okay?

I'd say both approaches can result in worthy additions. We would just need to make sure the function is documented and named accordingly. Either way, a suitable reference is needed.

If the former (reproducing the published method), but the precise method is not mentioned in the reference, would communication from the author confirming the adopted method be sufficient?

Lacking a reference means no specification and no validation. Communication with the author could potentially address the former, but the latter would likely remain unresolved. I think the answer here is that it probably depends on the method, with some amount of case-by-case judgement call being required.

If a generic tool is okay, then what would a suitable reference be? A maths/stats paper corroborating the method's validity? Reputable examples (PV or non-PV?) of its application?

I don't think a reference validating OLS (or whatever) is very helpful. I'd want to see something that somehow relates the underlying math (OLS, etc) with the application (e.g. the PVSPEC model), for example by showing that it produces reasonable values for a range of PV technologies, climates, etc, or why some transformation (e.g. performing OLS in log space) was chosen, or what motivated those specific optimization bounds, or...
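For what it's worth, the log-space OLS mentioned above just amounts to linearising a multiplicative model before fitting; the snippet below is a generic illustration of that transformation and is not tied to the exact parameterisation of any particular pvlib model:

```python
import numpy as np

def fit_power_law(u, v, y):
    # Fit y = a * u**b * v**c by OLS on log(y) = log(a) + b*log(u) + c*log(v)
    u, v, y = (np.asarray(arr, dtype=float) for arr in (u, v, y))
    X = np.column_stack([np.ones(u.size), np.log(u), np.log(v)])
    beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return np.exp(beta[0]), beta[1], beta[2]  # a, b, c
```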

@adriesse
Member

adriesse commented Dec 3, 2024

Perhaps a gallery example?

@cwhanse
Member

cwhanse commented Dec 3, 2024

In my opinion, validating a parameter estimation method should be done in a manner that demonstrates that the method has two properties:

  1. the model fitting does not introduce a prediction bias, and
  2. the fitting is robust in the presence of measurement error.

One way to demonstrate the first property is as follows:

  1. select model parameters
  2. calculate model output
  3. fit the model to the calculated output
  4. re-predict the model output with the fitted parameters.

Comparison of the selected and fitted parameters, and of the calculated and predicted output, should reveal if the method itself is a source of bias.

The second property can be shown by constructing a model that generates representative measurement error, repeating steps #1 and #2 above, generating many realizations of calculated model output with error applied, fitting the model to each realization, and examining statistics for the fitted model.
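A rough sketch of that procedure, using a generic fourth-order polynomial in air mass as the model and a simple additive Gaussian error model (both stand-ins chosen for illustration, not anything prescribed above):

```python
import numpy as np

rng = np.random.default_rng(0)
polyfit = np.polynomial.polynomial.polyfit
polyval = np.polynomial.polynomial.polyval

# 1. select model parameters and inputs (values are illustrative only)
true_coeffs = np.array([0.9, 0.05, -0.01, 0.001, -2e-5])
ama = np.linspace(1, 5, 200)

# 2. calculate model output from the selected parameters
mismatch_true = polyval(ama, true_coeffs)

# 3./4. fit to the error-free output and re-predict; differences between the
# selected and fitted parameters (or outputs) indicate fitting bias
fitted = polyfit(ama, mismatch_true, deg=4)
bias = polyval(ama, fitted) - mismatch_true

# robustness: many realizations with measurement error applied, fit each one,
# then examine statistics of the recovered parameters
realizations = np.array([
    polyfit(ama, mismatch_true + rng.normal(scale=0.01, size=ama.size), deg=4)
    for _ in range(500)
])
coeff_mean, coeff_std = realizations.mean(axis=0), realizations.std(axis=0)
```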
