diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 0a30666..ed8eb39 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-29T00:17:28","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-29T02:16:16","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/dev/api/index.html b/dev/api/index.html index 728cbb7..e637bc9 100644 --- a/dev/api/index.html +++ b/dev/api/index.html @@ -12,7 +12,7 @@ julia> y = rand(Float32, 2, 5); julia> size(first(nomad((u, y), ps, st))) -(8, 5)source
+(8, 5)
source
NeuralOperators.DeepONetType
DeepONet(branch, trunk, additional)

Constructs a DeepONet from branch and trunk architectures. Make sure that both nets output arrays with the same first dimension.

Arguments

  • branch: Lux network to be used as branch net.
  • trunk: Lux network to be used as trunk net.

Keyword Arguments

  • additional: Lux network applied to the output of the DeepONet to include additional operations for embeddings; defaults to nothing.

References

[1] Lu Lu, Pengzhan Jin, George Em Karniadakis, "DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators", arXiv: https://arxiv.org/abs/1910.03193

Example

julia> branch_net = Chain(Dense(64 => 32), Dense(32 => 32), Dense(32 => 16));
 
 julia> trunk_net = Chain(Dense(1 => 8), Dense(8 => 8), Dense(8 => 16));
 
@@ -25,7 +25,7 @@
 julia> y = rand(Float32, 1, 10, 5);
 
 julia> size(first(deeponet((u, y), ps, st)))
-(10, 5)
+(10, 5)
source
NeuralOperators.FourierNeuralOperatorType
FourierNeuralOperator(
     σ=gelu; chs::Dims{C}=(2, 64, 64, 64, 64, 64, 128, 1), modes::Dims{M}=(16,),
    permuted::Val{perm}=Val(false), kwargs...) where {C, M, perm}

The Fourier neural operator is an operator-learning model that uses Fourier kernels to perform spectral convolutions. It is a promising approach for surrogate methods and can be regarded as a physics operator.

The model comprises a Dense layer that lifts the (d + 1)-dimensional vector field to an n-dimensional vector field, an integral kernel operator consisting of four Fourier kernels, and two Dense layers that project the data back to the scalar field of interest.
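As a hedged sketch, the pipeline described above could be re-composed by hand from the library's building blocks; the channel sizes below are illustrative, not the constructor's defaults.

```julia
using Lux, NeuralOperators

# Illustrative re-composition of the FNO architecture described above:
# lift -> four spectral kernels -> two-layer projection.
fno_sketch = Chain(
    Dense(2 => 64),                                       # lift (d + 1)-dim field to 64 channels
    [SpectralKernel(64 => 64, (16,)) for _ in 1:4]...,    # four Fourier kernel layers
    Dense(64 => 128, gelu),                               # first projection layer
    Dense(128 => 1),                                      # project back to the scalar field
)
```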

Arguments

  • σ: Activation function for all layers in the model.

Keyword Arguments

  • chs: A Tuple or Vector of the 8 channel sizes.
  • modes: The modes to be preserved. A tuple of length d, where d is the dimension of data.
  • permuted: Whether the dim is permuted. If permuted = Val(false), the layer accepts data in the order of (ch, x_1, ... , x_d , batch). Otherwise the order is (x_1, ... , x_d, ch, batch).

Example

julia> fno = FourierNeuralOperator(gelu; chs=(2, 64, 64, 128, 1), modes=(16,));
 
@@ -34,19 +34,19 @@
 julia> u = rand(Float32, 2, 1024, 5);
 
 julia> size(first(fno(u, ps, st)))
-(1, 1024, 5)
+(1, 1024, 5)
source

Building blocks

NeuralOperators.OperatorConvType
OperatorConv(ch::Pair{<:Integer, <:Integer}, modes::Dims,
     ::Type{<:AbstractTransform}; init_weight=glorot_uniform,
     permuted=Val(false))

Arguments

  • ch: A Pair of input and output channel size ch_in => ch_out, e.g. 64 => 64.
  • modes: The modes to be preserved. A tuple of length d, where d is the dimension of data.
  • ::Type{TR}: The transform used to perform the transformation.

Keyword Arguments

  • init_weight: Function used to initialize the parameters.
  • permuted: Whether the dim is permuted. If permuted = Val(false), the layer accepts data in the order of (ch, x_1, ... , x_d, batch). Otherwise the order is (x_1, ... , x_d, ch, batch).

Example

julia> OperatorConv(2 => 5, (16,), FourierTransform{ComplexF32});
 
 julia> OperatorConv(2 => 5, (16,), FourierTransform{ComplexF32}; permuted=Val(true));
-
+
source
NeuralOperators.SpectralConvFunction
SpectralConv(args...; kwargs...)

Construct an OperatorConv with FourierTransform{ComplexF32} as the transform. See OperatorConv for the individual arguments.

Example

julia> SpectralConv(2 => 5, (16,));
 
 julia> SpectralConv(2 => 5, (16,); permuted=Val(true));
-
+
source
NeuralOperators.OperatorKernelType
OperatorKernel(ch::Pair{<:Integer, <:Integer}, modes::Dims, transform::Type{TR},
     act::A=identity; permuted=Val(false), kwargs...) where {TR <: AbstractTransform, A}

Arguments

  • ch: A Pair of input and output channel size ch_in => ch_out, e.g. 64 => 64.
  • modes: The modes to be preserved. A tuple of length d, where d is the dimension of data.
  • ::Type{TR}: The transform used to perform the transformation.

Keyword Arguments

  • act: Activation function.
  • permuted: Whether the dim is permuted. If permuted = Val(false), the layer accepts data in the order of (ch, x_1, ... , x_d, batch). Otherwise the order is (x_1, ... , x_d, ch, batch).

All the keyword arguments are passed to the OperatorConv constructor.

Example

julia> OperatorKernel(2 => 5, (16,), FourierTransform{ComplexF64});
 
 julia> OperatorKernel(2 => 5, (16,), FourierTransform{ComplexF64}; permuted=Val(true));
-
+
source
NeuralOperators.SpectralKernelFunction
SpectralKernel(args...; kwargs...)

Construct an OperatorKernel with FourierTransform{ComplexF32} as the transform. See OperatorKernel for the individual arguments.

Example

julia> SpectralKernel(2 => 5, (16,));
 
 julia> SpectralKernel(2 => 5, (16,); permuted=Val(true));
-
+source

Transform API

NeuralOperators.AbstractTransformType
AbstractTransform

Interface

  • Base.ndims(<:AbstractTransform): Number of dimensions of the modes
  • transform(<:AbstractTransform, x::AbstractArray): Apply the transform to x
  • truncate_modes(<:AbstractTransform, x_transformed::AbstractArray): Truncate the modes that contribute to noise
  • inverse(<:AbstractTransform, x_transformed::AbstractArray): Apply the inverse transform to x_transformed
source
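As a hedged illustration of the four-function interface above, the sketch below outlines a hypothetical cosine-based transform. The type name `CosineTransform` and the use of FFTW's `dct`/`idct` are assumptions for illustration, not part of the NeuralOperators API.

```julia
# Sketch of a custom transform implementing the AbstractTransform
# interface, assuming data laid out as (ch, x_1, ..., x_N, batch).
using FFTW  # provides dct/idct

struct CosineTransform{N}  # would subtype NeuralOperators.AbstractTransform
    modes::NTuple{N, Int}  # low-frequency modes to keep per spatial dim
end

Base.ndims(::CosineTransform{N}) where {N} = N

# Forward transform along the spatial dimensions 2:(N + 1).
transform(::CosineTransform{N}, x::AbstractArray) where {N} = dct(x, 2:(N + 1))

# Keep only the leading `modes` coefficients in each spatial dimension.
function truncate_modes(t::CosineTransform{N}, x_t::AbstractArray) where {N}
    return view(x_t, :, map(Base.OneTo, t.modes)..., axes(x_t, N + 2))
end

# Inverse transform back to the spatial domain.
inverse(::CosineTransform{N}, x_t::AbstractArray) where {N} = idct(x_t, 2:(N + 1))
```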
diff --git a/dev/index.html b/dev/index.html index 79fcb07..7eedf5d 100644 --- a/dev/index.html +++ b/dev/index.html @@ -317,4 +317,4 @@ [8e850b90] libblastrampoline_jll v5.11.0+0 [8e850ede] nghttp2_jll v1.52.0+1 [3f19e933] p7zip_jll v17.4.0+2 -Info Packages marked with have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated -m`

+Info Packages marked with have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

diff --git a/dev/tutorials/deeponet/7fa59876.png b/dev/tutorials/deeponet/7fa59876.png deleted file mode 100644 index bb6694a..0000000 Binary files a/dev/tutorials/deeponet/7fa59876.png and /dev/null differ diff --git a/dev/tutorials/deeponet/caeb4943.png b/dev/tutorials/deeponet/caeb4943.png new file mode 100644 index 0000000..d30eb2e Binary files /dev/null and b/dev/tutorials/deeponet/caeb4943.png differ diff --git a/dev/tutorials/deeponet/index.html b/dev/tutorials/deeponet/index.html index b10e78a..3dda314 100644 --- a/dev/tutorials/deeponet/index.html +++ b/dev/tutorials/deeponet/index.html @@ -46,4 +46,4 @@ losses = train!(deeponet, ps, st, data; epochs=1000) -lines(losses)Example block output +lines(losses)Example block output diff --git a/dev/tutorials/fno/54a08af1.png b/dev/tutorials/fno/54a08af1.png new file mode 100644 index 0000000..0aae8ab Binary files /dev/null and b/dev/tutorials/fno/54a08af1.png differ diff --git a/dev/tutorials/fno/a58d8e20.png b/dev/tutorials/fno/a58d8e20.png deleted file mode 100644 index 8dd19e4..0000000 Binary files a/dev/tutorials/fno/a58d8e20.png and /dev/null differ diff --git a/dev/tutorials/fno/index.html b/dev/tutorials/fno/index.html index 4984db7..944f687 100644 --- a/dev/tutorials/fno/index.html +++ b/dev/tutorials/fno/index.html @@ -38,4 +38,4 @@ losses = train!(fno, ps, st, data; epochs=100) -lines(losses)Example block output +lines(losses)Example block output diff --git a/dev/tutorials/nomad/71b11004.png b/dev/tutorials/nomad/71b11004.png deleted file mode 100644 index 8dd1ab2..0000000 Binary files a/dev/tutorials/nomad/71b11004.png and /dev/null differ diff --git a/dev/tutorials/nomad/8a628a07.png b/dev/tutorials/nomad/8a628a07.png new file mode 100644 index 0000000..5890a18 Binary files /dev/null and b/dev/tutorials/nomad/8a628a07.png differ diff --git a/dev/tutorials/nomad/index.html b/dev/tutorials/nomad/index.html index 6248804..674af81 100644 --- a/dev/tutorials/nomad/index.html +++ 
b/dev/tutorials/nomad/index.html @@ -44,4 +44,4 @@ losses = train!(nomad, ps, st, data; epochs=1000) -lines(losses)Example block output +lines(losses)Example block output