This bug was present from Oodle 2.5.0 to 2.5.4 ; if you use those versions you should update to 2.5.5
When the bug occurs, the OodleLZ_Compress call returns success, thinking it made valid compressed data, but it has actually made a damaged bit stream. When you call Decompress it might return failure, or it might return success but produce decompressed output that does not match the original bits.
Any compressed data that you have made which decodes successfully (and matches the original uncompressed data) is fine. The presence of the bug can only be detected by attempting to decode compressed data and checking that it matches the original uncompressed data.
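As a sketch of that decode-and-compare check (this is not Oodle API ; my_oodle_decompress is a placeholder for however you call OodleLZ_Decompress in your own tools, with whatever options you normally pass) :

  #include <stdlib.h>
  #include <string.h>

  /* Placeholder : wrap your existing OodleLZ_Decompress call here.
     Returns the decompressed size, or -1 on failure. */
  long my_oodle_decompress(const void * comp, long compLen,
                           void * raw, long rawLen);

  /* Returns 1 if compBuf decodes back to exactly the original bytes. */
  int verify_compressed(const void * compBuf, long compLen,
                        const void * origBuf, long origLen)
  {
      void * scratch = malloc((size_t)origLen);
      if ( ! scratch ) return 0;

      long got = my_oodle_decompress(compBuf, compLen, scratch, origLen);
      int ok = (got == origLen) &&
               (memcmp(scratch, origBuf, (size_t)origLen) == 0);

      free(scratch);
      return ok;
  }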
The decoder is not affected by this bug, so if you have shipped user installations that only do decoding, they don't need to be updated. If you have compressed files which were made incorrectly because of this bug, you can patch only those individual compressed files.
Technical details :
This bug was caused by one of the internal bit stream write pointers writing past the end of its allotted range, potentially over-writing a previously written bit stream. Some of the previously written bits became garbage, so they decoded into something other than what had been encoded.
This only occurred with 64-bit encoders. Any data written by 32-bit encoders is not affected by this bug.
This bug could in theory occur on any Kraken & Mermaid compressed data. In practice it's very rare and I've only seen it in one particular case - "whole huff chunks" on data that is only getting a little bit of compression, with uncompressed data that has a trinary byte structure (such as 24-bit RGB). It's also much more likely in pre-2.3.0 compatibility mode (eg. with OodleLZ_BackwardsCompatible_MajorVersion=2 or lower).
BTW it's probably a good idea in general to decode and verify the data after every compress.
I don't do it automatically in Oodle because it would add to encode time, but on second thought that might be a mistake.
Pretty much all the Oodle codecs are so asymmetric that doing a full decode every time wouldn't add much to the encode time. For example :
Kraken Normal level encodes at 50 MB/s
Kraken decodes at 1000 MB/s
To encode 1 MB is 0.02 s
To decode 1 MB is 0.001 s
To decode after every encode changes the encode time to 0.021 s = 47.6 MB/s
It's not a very significant penalty to encode time, and it's worth it to verify that your data definitely
decodes correctly. I think it's a good idea to go ahead and add this to your tools.
I may add a "verify" option to the Compress API in the future to automate this.
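In the meantime, a verify-after-compress wrapper in your tools might look roughly like this (a sketch only ; my_oodle_compress and my_oodle_decompress stand in for your existing OodleLZ_Compress / OodleLZ_Decompress calls and whatever options you pass them) :

  #include <stdlib.h>
  #include <string.h>

  /* Placeholders for your existing Oodle calls ; both return a byte
     count on success, or -1 on failure. */
  long my_oodle_compress(const void * raw, long rawLen,
                         void * comp, long compCap);
  long my_oodle_decompress(const void * comp, long compLen,
                           void * raw, long rawLen);

  /* Compress rawBuf, then immediately decode it and compare against the
     original.  Returns the compressed size, or -1 if the round trip does
     not reproduce the input bit-exactly. */
  long compress_and_verify(const void * rawBuf, long rawLen,
                           void * compBuf, long compCap)
  {
      long compLen = my_oodle_compress(rawBuf, rawLen, compBuf, compCap);
      if ( compLen <= 0 ) return -1;

      void * check = malloc((size_t)rawLen);
      if ( ! check ) return -1;

      long got = my_oodle_decompress(compBuf, compLen, check, rawLen);
      int ok = (got == rawLen) &&
               (memcmp(check, rawBuf, (size_t)rawLen) == 0);
      free(check);

      /* roughly a 2% hit for Kraken Normal, per the numbers above */
      return ok ? compLen : -1;
  }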