Color conversion with ICC profiles #1567
base: main
Conversation
I wonder why the test …
@brianpopow It'll be an accuracy issue most likely. (I hope it's not a JIT issue). It should be possible to inspect the result and see.
Codecov Report

@@           Coverage Diff           @@
##             main   #1567   +/-   ##
=======================================
  Coverage        ?     87%
=======================================
  Files           ?    1023
  Lines           ?   55212
  Branches        ?    7052
=======================================
  Hits            ?   48227
  Misses          ?    5768
  Partials        ?    1217
The issue only happens with a Release build. I think I found the reason, but it seems very weird. This can be seen with the test output:
@brianpopow Woah! That's bonkers!
I have reported this issue: dotnet/runtime#55623. They confirmed the issue, but they say it's unlikely to be fixed because .NET Core 2.1 is out of support in August.
@JimBobSquarePants It would be really nice if we could bring this PR forward. This would be a good addition to ImageSharp. I thought I'd ask whether you know what the PR needs (besides tests) to be finished. What would be the right way to apply an embedded color profile? My first attempt was:
Here is an example image with an Adobe RGB color profile: This does not seem to work; the colors seem to be wrong. Here are more example images.
@brianpopow Honestly..... I don't know. I was hoping the OP would come back to finish things off. I've just kept things updated over the years and haven't gotten involved in the implementation as yet. Judging from the old comments in the previous PR, I believe the code is based somewhat on the following: https://github.com/InternationalColorConsortium/DemoIccMAX/tree/master/IccProfLib As for accuracy, that result looks like it's just spitting out the sRGB values again. I do agree that it would be an awesome addition to the library and would save a ton of headaches. I hoped we'd get it in V3 but that's a mighty big ask.
I think we definitely need a reference implementation to compare the results against. I tried BABL, which GNOME is using, but I could not get it to work on Windows. I will take a look at DemoIccMAX.
@waacton I did some cleanup and added ICC profiles to LFS since they're quite chunky. It's great to see tests passing! I think we should try to expand on these tests to cover as many conversion types as possible (e.g. CMYK to RGB is important) and then, once confident it works, I can start wiring up the transforms to the individual formats.
This is something that @saucecontrol has brought up before. I don't actually know what we might encounter. AFAIK image formats are very unlikely to contain non-standard profiles, but that doesn't mean we should not potentially consider them. At the moment, as you've spotted, we're limited to 4 channels since we use Vector4.
@JimBobSquarePants the expanded tests uncovered some tricky things around perceptual rendering intent for v2 profiles (which seems to be the most common type of profile) that I've tried to address. I plan to work on it more later this week but I've focused on getting tests passing over optimised code for now. Notably I've introduced an extra step in the ICC conversion, where all conversions go via XYZ - even if both profiles have LAB PCS - in case a PCS adjustment needs to be made (performed in XYZ space). Shouldn't be too hard to bypass some of the work when unnecessary if the additional overhead is problematic, just more branching surely. I also feel like I've butchered the use of vectors a bit, would appreciate a review there and a point in the right direction 🙏
Just added some notes as I'm a little confused. I've followed up with some changes to improve readability when working with Vector4.
@@ -33,64 +34,103 @@ internal static TTo ConvertUsingIccProfile<TFrom, TTo>(this ColorProfileConverte
MemoryAllocator = converter.Options.MemoryAllocator,

// TODO: Double check this but I think these are normalized values.
SourceWhitePoint = CieXyz.FromScaledVector4(new(converter.Options.SourceIccProfile.Header.PcsIlluminant, 1F)),
TargetWhitePoint = CieXyz.FromScaledVector4(new(converter.Options.TargetIccProfile.Header.PcsIlluminant, 1F)),
SourceWhitePoint = new CieXyz(converter.Options.SourceIccProfile.Header.PcsIlluminant),
Are the illuminant values not given using the ICC scaling? I would have assumed they were, given we need to pass them as such.
This has sent me down a rabbit hole and I don't feel any closer to understanding why, but the illuminant values are stored in the header in the range [0, 1], no scaling needed.
I can't find solid information on why the scaling is even needed for XYZ LUTs other than "that's what the DemoIccMAX code does". The closest thing I can find in the v4 spec itself is this footnote in Annex F.3, page 102:
NOTE A three-component Matrix-based model can alternatively be represented in a lutAToBType tag with M curves, a matrix with zero offsets, and identity B curves. While the M curves are set to the corresponding TRC curves, matrix values from the three-component Matrix-based model need to be scaled by (32 768/65 535) before being stored in the lutAToBType matrix in order to produce equivalent PCS values. (32 768/65 535) represents the encoding factor for the PCS PCSXYZ encoding.
(The spec is so cumbersome, the information I'm looking for could easily be buried elsewhere...)
At this point I'm assuming either
- XYZ LUT data is in [0, ~0.5] by convention (or by something in the spec I can't find)
- XYZ LUT data range is profile-specific, and I've not encountered one that isn't [0, ~0.5] (or DemoIccMAX doesn't account for the possibility)
🤕
One other note, as far as I understand the PCS illuminant must be D50 (in case that enables any further optimisation)
7.2.16 PCS illuminant field (Bytes 68 to 79)
The PCS illuminant field shall contain the nCIEXYZ values of the illuminant of the PCS, encoded as an
XYZNumber. The value, when rounded to four decimals, shall be X = 0,9642, Y = 1,0 and Z = 0,8249.
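Since the spec pins the PCS illuminant to D50, a converter could sanity-check the header value up front (and rely on it for any optimisation). A minimal sketch, assuming the PR's CieXyz type; the helper name and tolerance are mine:

```csharp
// True when the header's PCS illuminant matches D50 to four decimal places,
// per ICC v4 section 7.2.16.
static bool IsD50(CieXyz pcsIlluminant) =>
    MathF.Abs(pcsIlluminant.X - 0.9642f) < 5e-5f &&
    MathF.Abs(pcsIlluminant.Y - 1.0f) < 5e-5f &&
    MathF.Abs(pcsIlluminant.Z - 0.8249f) < 5e-5f;
```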
Thinking about it a bit more, it's going to be related to LUTs storing uInt16 [0, 65535] but XYZ values being encoded as s15Fixed16 [-32768, ~32768], and needing to account for that.
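If that's the case, the relationship would reduce to something like this sketch (my reading of the encodings; the names and helpers are illustrative, not ImageSharp code):

```csharp
// s15Fixed16: 32-bit signed fixed point with 16 fractional bits,
// covering [-32768, ~32768).
static float S15Fixed16ToFloat(int raw) => raw / 65536f;

// In a 16-bit XYZ LUT, uInt16 0x8000 (32768) encodes XYZ = 1.0, so actual XYZ
// values map into the normalized [0, 1] LUT range via the 32768/65535 factor
// from the Annex F.3 note above.
const float PcsXyzEncodingFactor = 32768f / 65535f;

static float XyzToLutRange(float xyz) => xyz * PcsXyzEncodingFactor;   // 1.0 -> ~0.5
static float LutRangeToXyz(float encoded) => encoded / PcsXyzEncodingFactor;
```

Which would also explain why the XYZ LUT data tops out around 0.5.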
IccVersion targetVersion = targetHeader.Version;

// all conversions are funnelled through XYZ in case PCS adjustments need to be made
CieXyz xyz;
Reading below, we only need this if adjustPcsForPerceptual is true. I'd rather avoid the overhead of additional conversions when not necessary. We'll be using this code in our decoder, which must be fast.
I'll take a shot at avoiding the overhead when unnecessary. I expect it will result in functions that look very similar, like ConvertIcc() and ConvertIccWithPerceptualAdjustment() - I can't see a natural if (adjustmentNeeded) { PerformExtraStep() } at the moment.
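Purely illustratively, the call site might then dispatch like this (the two function names come from above; everything else is hypothetical):

```csharp
// Decided once per conversion setup, not per pixel.
Vector4 pcs = adjustPcsForPerceptual
    ? ConvertIccWithPerceptualAdjustment(source) // funnels through XYZ, adjusts the PCS
    : ConvertIcc(source);                        // hands the PCS over directly
```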
{
// Convert from Lab v4 to Lab v2.
pcs = LabToLabV2(pcs);
Vector3 iccXyz = xyz.ToScaledVector4().AsVector128().AsVector3();
It looks like we're mixing up normalized and standard values here and it's a little confusing.
// We use the original ref values here...
Vector3 scale = Vector3.One - Vector3.Divide(refBlack.ToVector3(), refWhite.ToVector3());
// But scale them here?
Vector3 offset = refBlack.ToScaledVector4().AsVector128().AsVector3();
I would extract the methods out with an explanation of the theory behind them also. For example, I don't understand why the math for the source and target PCS adjustments is different. We're going to need to vectorize these also (which may mean providing your reference colors as Vector4).
Yep, happy to refactor to methods with explanations. I think I need to do some reading on best practices regarding Vectors etc.
Tried to make this clearer and have also precalculated the scale and offset vectors.
Also realised converting XYZ values to XYZ-scaled values (the other type of scaling 😃) was unnecessary - a small saving I can take back over to Unicolour.
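For clarity, my reading of the source-side adjustment pieced together from this thread (AdjustSourcePcs is my name; in the actual code the scale and offset are now precalculated outside the per-pixel path, but they're shown inline here):

```csharp
// Sketch of the v2 perceptual source-PCS adjustment. refBlack/refWhite are
// the ICC perceptual reference black and white points.
static Vector4 AdjustSourcePcs(Vector4 scaledXyz, CieXyz refBlack, CieXyz refWhite)
{
    // The scale comes from the actual (unscaled) reference values...
    Vector3 scale = Vector3.One - Vector3.Divide(refBlack.ToVector3(), refWhite.ToVector3());

    // ...while the offset is the reference black in the scaled PCS encoding.
    Vector3 offset = refBlack.ToScaledVector4().AsVector128().AsVector3();

    Vector3 adjusted = (scaledXyz.AsVector128().AsVector3() * scale) + offset;
    return new Vector4(adjusted, scaledXyz.W);
}
```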
CieLab lab = pcsConverter.Convert<CieXyz, CieLab>(in xyz);
pcs = lab.ToScaledVector4();
case IccColorSpaceType.CieLab:
if (sourceConverter.Is16BitLutEntry)
Why is a 16-bit LUT calculator treated differently, and why is that not version specific?
LAB encodings only changed for 16-bit representations:
- 16-bit max values for 100 & 127 were FF00 in v2 and became FFFF in v4
- 8-bit max values for 100 & 127 were FF in v2 and stayed FF in v4

But for the LUTs, the 16-bit type continues to use the legacy encoding:

For colour values that are in the PCSLAB colour space on the PCS side of the tag, this tag uses the legacy 16-bit PCSLAB encoding defined in Tables 42 and 43, not the 16-bit PCSLAB encoding defined in 6.3.4.2. This encoding is retained for backwards compatibility with profile version 2.
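So, as I understand it, the re-encoding between the two 16-bit representations reduces to a constant factor; a sketch (LabToLabV2 exists in the PR, but this body is my inference from the FF00/FFFF max values):

```csharp
// 0xFF00 / 0xFFFF: the ratio between the v2 and v4 16-bit Lab maximums.
const float LabV2Factor = 65280f / 65535f;

// Re-encode a scaled v4 Lab PCS value into the legacy v2 encoding.
static Vector4 LabToLabV2(Vector4 scaledLab) => scaledLab * LabV2Factor;

// And the inverse, v2 -> v4.
static Vector4 LabV2ToLab(Vector4 scaledLabV2) => scaledLabV2 / LabV2Factor;
```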
@JimBobSquarePants a couple more changes since your review. What I plan to look at next:
@@ -39,13 +39,16 @@ public ColorTrcCalculator(
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public Vector4 Calculate(Vector4 value)
{
// uses the descaled XYZ as DemoMaxICC IccCmm.cpp : CIccXformMatrixTRC::Apply()
Vector4 xyz = new(CieXyz.FromScaledVector4(value).ToVector3(), 1);
Amusingly, the matrix is applied to the actual XYZ, not the scaled XYZ. Is there a better way to descale this? (Apologies for my rookie vector wrangling)
Hmmmm… this means we are scaling then descaling each pixel. Do all other calculators use scaled values? If so, can we avoid the initial scaling or update this to match?
This is the only calculator I've encountered so far that requires the non-scaled values. Won't be sure until we find profiles with the other types of curves, but it's likely to be just this one, for the matrix multiplication to apply correctly.

We might be able to do something similar to the Is16BitLutEntry check and avoid scaling XYZ if we know we're using ColorTrcCalculator, or add an IVector4Calculator.NeedsScaled property for calculators themselves to define. It would add more complexity to the target PCS calculation but save on a couple of potential scaling roundtrips.
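A hypothetical shape for that property (IVector4Calculator is the existing interface; the property itself is only a suggestion):

```csharp
internal interface IVector4Calculator
{
    Vector4 Calculate(Vector4 value);

    // Proposed: false for e.g. ColorTrcCalculator, which works on actual XYZ
    // values, letting the converter skip the scale/descale round-trip.
    bool NeedsScaled { get; }
}
```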
Also, I'm aware that in general I'm flopping between e.g. Vector and CieXyz quite a bit, is that problematic?
We should try to avoid switching between the types where possible as that causes per-pixel overhead. Perhaps we should be trying to provide this information up-front? It's tricky...
I'm a little confused by this comment:
// when data to PCS, output from calculator is descaled XYZ
// but expected return value is scaled XYZ
// see DemoMaxICC IccCmm.cpp : CIccXformMatrixTRC::Apply()
How does the calculator know how to descale the output from the scaled input?
Also, doesn't the Gray TRC calculator require the same?
I'm not sure I follow your question but I'm still getting my head around it myself. Does it help if I say...
- Matrix TRC can only be used with XYZ PCS
- Simple curve lookups use the scaled XYZ
- In this case, because the output of the curve needs to be multiplied with a matrix, we need to use the actual descaled XYZ it represents
- Downstream operations tend to assume the PCS will be in scaled form
I've experimented with a different approach that might make things clearer, moving the responsibility of knowing about this from ColorTrcCalculator to ColorProfileConverter. The downside is you can't "just use" the calculator, but that's true of ICC transforms generally anyway once you start looking closely.

Imagine ColorTrcCalculator doesn't have these changes, and instead ColorProfileConverter has these extra steps:
// output of Matrix TRC calculator is descaled XYZ, needs to be re-scaled to be used as PCS
Vector4 sourcePcs = sourceParams.Converter.Calculate(source.ToScaledVector4());
sourcePcs = sourceParams.IsMatrixTrc ? new CieXyz(sourcePcs.AsVector3()).ToScaledVector4() : sourcePcs;
Vector4 targetPcs = ... // do conversion from source PCS to target PCS, potentially with perceptual adjustments
// input to Matrix TRC calculator is descaled XYZ, need to descale PCS before use
targetPcs = targetParams.IsMatrixTrc ? new Vector4(CieXyz.FromScaledVector4(targetPcs).ToVector3(), 1) : targetPcs;
return TTo.FromScaledVector4(targetParams.Converter.Calculate(targetPcs));
I can't tell if it feels better or worse. Depends what mood I'm in when I look!
As for Gray TRC, I can't see any evidence that it does the same thing; it just multiplies the scaled monochrome by scaled D50.
💡 If that matrix multiplication in LutEntryCalculator does indeed need fixing, then presumably the matrix multiplication in ColorTrcCalculator does too.
With the latest commit, I've been able to run each random-value-input test case. Summary of where this is at:
/* This is a hack to trick Unicolour to work in the same way as ImageSharp.
 * ImageSharp bypasses PCS adjustment for v2 perceptual intent if source and target both need it
 * as they both share the same understanding of what the PCS is (see ColorProfileConverterExtensionsIcc.GetTargetPcsWithPerceptualV2Adjustment)
 * Unicolour does not support a direct profile-to-profile conversion so will always perform PCS adjustment for v2 perceptual intent.
 * However, PCS adjustment clips negative XYZ values, causing those particular values in Unicolour and ImageSharp to diverge.
 * It's unclear to me if there's a fundamental correct answer here.
 *
 * There are 2 obvious ways to keep Unicolour and ImageSharp values aligned:
 * 1. Make ImageSharp always perform PCS adjustment, clipping negative XYZ values during the process - but creates a lot more calculations
 * 2. Make Unicolour stop performing PCS adjustment, allowing negative XYZ values during conversion
 *
 * Option 2 is implemented by modifying the profiles so they claim to be v4 profiles
 * since v4 perceptual profiles do not apply PCS adjustment.
 */
bool isSourcePerceptualV2 = sourceConfig.Icc.Intent == Intent.Perceptual && sourceConfig.Icc.Profile!.Header.ProfileVersion.Major == 2;
bool isTargetPerceptualV2 = targetConfig.Icc.Intent == Intent.Perceptual && targetConfig.Icc.Profile!.Header.ProfileVersion.Major == 2;
if (isSourcePerceptualV2 && isTargetPerceptualV2)
{
    sourceConfig = GetUnicolourConfigAsV4Header(sourceConfig);
    targetConfig = GetUnicolourConfigAsV4Header(targetConfig);
}
This is the last fly in the ointment for me at the moment and hopefully the comment makes sense. My goal was to be able to confirm that any source data values follow the same calculations as Unicolour, but in the case where both profiles need PCS adjustment I need to trick Unicolour into treating the profiles in a different way.
If this is too unpleasant, a simpler alternative is to detect if the source PCS ends up as XYZ with negative values, in which case generate a new vector of random numbers and restart the test, and just trust that negative XYZ PCS is converted correctly.
[Clut3x1, new Vector4(0.2f, 0.6f, 0.8f, 0), new Vector4(0.284f, 0, 0, 0)],
[Clut3x1, new Vector4(0.75f, 0.75f, 0.75f, 0), new Vector4(0.31f, 0, 0, 0)],
[Clut2x2, new Vector4(0.2f, 0.6f, 0, 0), new Vector4(0.46f, 0.54f, 0, 0)],
[Clut2x2, new Vector4(0.2f, 0.6f, 0, 0), new Vector4(0.34f, 0.66f, 0, 0)],
I reproduced the failing tests in the Unicolour codebase - which uses a custom CLUT implementation, not a port of DemoIccMAX - and updated the expected values accordingly. ImageSharp's CLUT produces the same results.
That's fantastic news!
public static object[][] Lut8ConversionTestData =
{
    new object[] { Lut8, new Vector4(0.2f, 0.3f, 0, 0), new Vector4(0.339762866f, 0, 0, 0) },
    new object[] { Lut8Matrix, new Vector4(0.21f, 0.31f, 0.41f, 0), new Vector4(0.431305826f, 0, 0, 0) },
    new object[] { Lut8, new Vector4(0.2f, 0.3f, 0, 0), new Vector4(0.4578933760499877f, 0, 0, 0) },
    new object[] { Lut8Matrix, new Vector4(0.21f, 0.31f, 0.41f, 0), new Vector4(0.40099657491875312f, 0, 0, 0) },
};

public static object[][] Lut16ConversionTestData =
{
    new object[] { Lut16, new Vector4(0.2f, 0.3f, 0, 0), new Vector4(0.245625019f, 0, 0, 0) },
    new object[] { Lut16Matrix, new Vector4(0.21f, 0.31f, 0.41f, 0), new Vector4(0.336980581f, 0, 0, 0) },
    new object[] { Lut16, new Vector4(0.2f, 0.3f, 0, 0), new Vector4(0.3543750030529918f, 0, 0, 0) },
    new object[] { Lut16Matrix, new Vector4(0.21f, 0.31f, 0.41f, 0), new Vector4(0.29739562389689594f, 0, 0, 0) },
};
These test values are updated in the same way as the CLUT tests, calculated independently in Unicolour. LutEntryCalculator tests are now re-enabled.
@@ -37,7 +37,7 @@ public Vector4 Calculate(Vector4 value)
{
    if (this.doTransform)
    {
        value = Vector4.Transform(value, this.matrix);
        value = Vector4.Transform(value, Matrix4x4.Transpose(this.matrix));
It seems Vector4.Transform was performing Vector × Matrix, not Matrix × Vector? My matrix maths is rusty, but in lieu of some kind of Matrix4x4.Transform(matrix, vector) function, it sounds like transposing the matrix has the same effect as changing the order of multiplication.

This issue wasn't caught yet because none of the ICC conversion tests use a profile of CMYK data ⇒ XYZ PCS. And even if they did, it would only get caught when using a non-identity matrix. As always, I'll be surprised if we can find a profile that meets these conditions, though we can include it in testing if we do.
Honestly, I'm not sure. @saucecontrol could I borrow your brain here please?
Vector4.Transform is a dot (inner) product, which is commutative between a vector and matrix¹.

I can work up some profiles to test some of these edge cases if you'd like. I swear one of these days I'm going to get around to finishing my implementation of LUT profile transforms, and I'll need them then anyway.

Footnotes
1. Well, not in general, but in this case it's a square matrix and a vector with length equal to the length of each matrix dimension, so it is here. ↩
Here we want a matrix multiplication M×V, which isn't commutative, but doing V⋅Mᵀ effectively calculates M×V in this particular case.

Is that an OK assumption to make given we're restricted to Matrix4x4 and Vector4? Is there a more suitable System.Numerics way to do this?
You don't want M×V, you want M⋅V, and I promise it's commutative.

The reason transposing gives you the correct result is that Vector4.Transform assumes the matrix is row-major, i.e. the V is a 1x4 matrix. Most ICC docs describe the matrices as column-major, which would mean your V is a 4x1 matrix.
You can also see from the implementation of Vector4.Transform why it's done row-major -- that enables SIMD acceleration per row because of the memory layout. For perf reasons, you'll want to normalize to row-major on the edges of the API.
"You don't want M×V, you want M⋅V"

I think some of my assumptions have muddied the waters a bit, so I'm going to try to summarise, and if I'm still making a mess I'll have to leave this to someone else.

- Until now I've basically ignored that the vector is a 1x4 matrix, and I'm sure I've confused things by using V to mean "the values of the vector" as opposed to "a 1x4 matrix"
- In terms of the maths I want to perform, I want to multiply a 4x4 matrix with a 4x1 matrix (what I was referring to as not commutative), and there's no in-built function for that?
- In terms of the implementation, it sounds like we want to perform a dot product of a 4x4 matrix with a 1x4 matrix (which is commutative) for SIMD reasons
- Since I'm stuck with a row-major 1x4 matrix, I need to make the 4x4 matrix row-major for the dot product calculation
- I'm not experienced enough with System.Numerics to know if there's a more conventional way to "achieve the effect of matrix multiplication" than transposing the 4x4 matrix - @JimBobSquarePants perhaps any 4x4 matrices just need to be initialised row-major?
In terms of the maths I want to perform, I want to multiply a 4x4 matrix with a 4x1 matrix (what I was referring to as not commutative), and there's no in-built function for that?
This is also commutative, as it is also a dot product. The only difference is whether you compute the dot product of the vector with each row of the matrix or with each column of the matrix.
There is no fast way to compute it column-wise, because it means gathering the non-adjacent values in memory. In fact, transposing the matrix with SIMD and then performing a row-wise dot product with SIMD is the fast way to do this, although it is much less fast than normalizing your matrix to row-major when reading the profile and skipping the transpose step during processing.
perhaps any 4x4 matrices just need to be initialised row-major?
Exactly
Just to be sure the maths explanation is clear:

M⋅V where M is 4x4 can be summarized as V_matrix1⋅V_color + V_matrix2⋅V_color + V_matrix3⋅V_color + V_matrix4⋅V_color, where V_matrix_n represents either the rows (V_color is 1x4) or columns (V_color is 4x1) of the matrix.

That result is the same as V⋅M, which would be V_color⋅V_matrix1 + V_color⋅V_matrix2 + V_color⋅V_matrix3 + V_color⋅V_matrix4, because V⋅V is commutative.
Sure, I'm referring to matrix multiplication in general.
I'm not sure how feasible initialising row-major is when we also need the inverse of the matrix for reverse transforms, will see.
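For reference, my understanding of the suggested end state (a sketch with illustrative names; columnMajorIccMatrix stands in for whatever the profile parser produces): transpose the column-major ICC matrix once at profile-read time, and derive the reverse transform with Matrix4x4.Invert so the per-pixel path stays transpose-free.

```csharp
// One-time setup when parsing the profile:
Matrix4x4 forward = Matrix4x4.Transpose(columnMajorIccMatrix); // normalize to row-major

// The reverse transform needs the inverse; Invert reports failure for
// singular matrices rather than throwing.
if (!Matrix4x4.Invert(forward, out Matrix4x4 reverse))
{
    throw new InvalidOperationException("ICC matrix is not invertible.");
}

// Per-pixel hot path: no transpose, no inversion.
Vector4 ToPcs(Vector4 value) => Vector4.Transform(value, forward);
Vector4 FromPcs(Vector4 value) => Vector4.Transform(value, reverse);
```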
Description
Note: This is a replacement for the original PR #273 from @JBildstein that was automatically closed by our Git LFS history rewrite. Individual commits have unfortunately been lost in the process. Help is very much needed to complete the work.
As the title says, this adds methods for converting colors with an ICC profile.
Architecturally, the idea is that the profile is checked once for available and appropriate conversion methods, and then a delegate is stored that only takes the color values to convert and returns the calculated values. The possible performance penalty for using a delegate is far smaller than searching through the profile for every conversion. I'm open to other suggestions though.
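A sketch of that flow (illustrative names only; BuildPcsToDataTransform is hypothetical, not the actual API):

```csharp
// One-time setup: search the profile for appropriate tags/curves and bake the
// chosen conversion path into a delegate.
Func<Vector4, Vector4> transform = BuildPcsToDataTransform(profile);

// Per-conversion hot path: no profile searching, just the stored delegate.
Vector4 converted = transform(pcsValue);
```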
There are classes to convert from the profile connection space (PCS; can be XYZ or Lab) to the data space (RGB, CMYK, etc.) and vice versa. There are also classes to convert from PCS to PCS and Data to Data, but they are only used for special profiles and are not important for us now; I still added them for completeness' sake.
A challenge here is writing tests for this because of the complexity of the calculations and the large number of different possible conversion paths. This is a rough list of the paths that exist:
The three main approaches in that list are
The most used approaches are Color Trc for RGB profiles and LutAToB/LutBToA for CMYK profiles.
Todo list:
Help and suggestions are very welcome.