Comments (9)
@SimoneLazzaris will try to repro as soon as I can, which may not be until next week. For sanity, can you run using trace (--trace or -t) or even debug (--debug) and capture the full stdout/stderr here?
from sbom-utility.
@SimoneLazzaris Just wanted to let you know I was able to try this out on my local machine and saw indications of an infinite loop (I killed it rather than waiting for a seg. violation), given that the files are relatively small in size. Having validated both files and run other commands against their contents, my fear is that the error lies in one of the imported libraries, which may take some time to pinpoint (and even more time to perhaps fix upstream, if possible).
@SimoneLazzaris Comparing these 2 "Trivy" SBOM files using an online, general text diff tool, it finds 2694 removals and 8296 additions. I am sure that the underlying diff comparator is losing its mind and running out of memory looking for JSON object matches between the two files (which uses deep hashes).
The warning for "diff" command use is that the files must be relatively similar... despite being binary image scans from Trivy of apparent minor point revisions of the image file (semantic image file versions), these files are completely dissimilar in one primary aspect:
- order of components within JSON arrays appears to be completely mismatched.
The generalized "diff" libs used are not specific to any schema (SPDX, CycloneDX or any other JSON) and have no means to "normalize" the contents prior to comparison. In fact, normalization of JSON is only possible with custom knowledge of what makes each array entry (esp. for anonymous types) unique (i.e., a unique key or set of fields that create a unique key) to hash by reliably (relative to the JSON object).
In addition, within each component there are several Trivy custom properties that are identical across many components, which makes comparison (similarity weighting) nearly impossible. For example:
```json
{
  "bom-ref": "1041129c-b3a8-4896-9ba4-cf92e58ed5d2",
  "type": "application",
  "name": "usr/local/bin/nsc",
  "properties": [
    {
      "name": "aquasecurity:trivy:Class",
      "value": "lang-pkgs"
    },
    {
      "name": "aquasecurity:trivy:Type",
      "value": "gobinary"
    }
  ]
},
{
  "bom-ref": "4ce1b5d8-fb7a-4506-9c92-ff2ca0de8e69",
  "type": "application",
  "name": "usr/local/bin/nats",
  "properties": [
    {
      "name": "aquasecurity:trivy:Class",
      "value": "lang-pkgs"
    },
    {
      "name": "aquasecurity:trivy:Type",
      "value": "gobinary"
    }
  ]
},
```
where the similarity "score" would be high between these 2 components (with no knowledge that the only unique key in the case of component objects is `bom-ref`).
This complexity is the reason why "merge" functions are not simple in any tool (even GitHub merges of similar files) and require human (not yet AI) analysis to resolve "merge conflicts".
If a great deal of custom hashing code were written (which means a unique hash function per object in the JSON schema), then normalization becomes more realistic. However, objects with any depth of nested objects increase the time necessary for deep comparisons (as well as adding lots of hashing memory overhead).
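As a rough illustration of the per-object hashing described above (all names here are illustrative, not sbom-utility APIs), a canonical sorted-key serialization hashed with SHA-256 makes two structurally identical objects produce the same digest regardless of key order:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"sort"
)

// canonicalize renders a decoded JSON value with object keys in sorted
// order, so the resulting hash is stable across key-order differences.
func canonicalize(v interface{}) string {
	switch t := v.(type) {
	case map[string]interface{}:
		keys := make([]string, 0, len(t))
		for k := range t {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		s := "{"
		for i, k := range keys {
			if i > 0 {
				s += ","
			}
			s += fmt.Sprintf("%q:%s", k, canonicalize(t[k]))
		}
		return s + "}"
	case []interface{}:
		s := "["
		for i, e := range t {
			if i > 0 {
				s += ","
			}
			s += canonicalize(e)
		}
		return s + "]"
	default:
		b, _ := json.Marshal(t) // strings, numbers, bools, null
		return string(b)
	}
}

// deepHash is a hypothetical deep-hash helper for a raw JSON value.
func deepHash(raw []byte) [32]byte {
	var v interface{}
	json.Unmarshal(raw, &v)
	return sha256.Sum256([]byte(canonicalize(v)))
}

func main() {
	a := deepHash([]byte(`{"name":"nsc","type":"application"}`))
	b := deepHash([]byte(`{"type":"application","name":"nsc"}`))
	fmt.Println(a == b) // identical digests despite different key order
}
```

Note this only solves key-order variance within objects; array order (the actual problem above) still needs a per-type unique key to sort by first.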
All hope is not lost... as I said, I planned on adding a "sort" function with knowledge of the CycloneDX data schema structure (at least for top-level objects like `components`, `services`, and `vulnerabilities`, for example). However, I cannot promise when I could begin such a feature, but would love help ;)
@mrutkows thanks for your effort. I know that mine was not a textbook example. I just find it bad for the software to panic and wanted to report that.
The problem lies in the `go-diff` library, which has many operations that create new "diff" representations using text/string insertion into an existing slice (i.e., to create +/- prefixed strings with the difference text). The 2 I debugged were in the file `github.com/sergi/[email protected]/diffmatchpatch/patch.go` and include the function `patchMake2(text1 string, diffs []Diff)`, whose logic statements, shown below, step outside slice bounds:
```go
case DiffDelete:
	patch.Length1 += len(aDiff.Text)
	patch.diffs = append(patch.diffs, aDiff)
	postpatchText = postpatchText[:charCount2] + postpatchText[charCount2+len(aDiff.Text):]
```
as well as another function, `PatchAddContext(patch Patch, text string)`, and the logic:

```go
pattern = text[patch.Start2 : patch.Start2+patch.Length1]
```
Short of a rewrite (as the lengths being passed in the patches are clearly not being calculated properly) and perhaps the introduction of a "safe" slice reallocation routine, this will not be fixed anytime soon.
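A "safe" slicing routine of the kind suggested might look like the following sketch (the helper name is hypothetical): it clamps the requested bounds to the string's actual length instead of letting Go panic with "slice bounds out of range". Note that this masks, rather than fixes, the bad character counting upstream.

```go
package main

import "fmt"

// safeSubstring clamps start/end to valid bounds before slicing, so an
// out-of-range request returns a truncated (possibly empty) string
// instead of panicking.
func safeSubstring(s string, start, end int) string {
	if start < 0 {
		start = 0
	}
	if end > len(s) {
		end = len(s)
	}
	if start > end {
		return ""
	}
	return s[start:end]
}

func main() {
	s := "postpatch text"
	// Equivalent of postpatchText[charCount2 : charCount2+len(aDiff.Text)]
	// with bounds like the reported panic ([2004:...] on a short string).
	fmt.Printf("%q\n", safeSubstring(s, 2004, 2010))
}
```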
> @mrutkows thanks for your effort. I know that mine was not a textbook example. I just find it bad for the software to panic and wanted to report that.

I can "catch" the panics; but that really does not fix the underlying library I relied upon to produce a proper diff :(
I truly believe that normalizing the data (both files) will result in coherent/useful diff results (even using the faulty library), but it is a very complicated task in and of itself.
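"Catching" a library panic in Go means converting it into an ordinary error with `recover()`, roughly like this minimal sketch (function names are illustrative only, not the sbom-utility implementation):

```go
package main

import "fmt"

// diffWithRecover runs the given function and converts any panic it
// raises into a returned error instead of crashing the process.
func diffWithRecover(run func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("panic occurred: %v", r)
		}
	}()
	run()
	return nil
}

func main() {
	err := diffWithRecover(func() {
		s := "short"
		_ = s[2004:2010] // slice bounds out of range, like the library panic
	})
	if err != nil {
		fmt.Println("[ERROR]", err)
	}
}
```

This lets the CLI exit cleanly with an error code, but as noted, the diff itself is still not produced.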
@SimoneLazzaris I managed to add "guard rails" to avoid the panics within the upstream file that was the source of more than one panic, where they were accessing string slices past their current size/memory allocations...
The result is a large "patch" file that really has many large blocks of meaningless deltas:
diff.txt
The source code file I patched locally to avoid the panics is from the upstream library (i.e., "patch.go"); specifically, I added "if" tests before indexing into string slices in 2 places:
patch.go.txt
but again, even if I pushed this upstream, the library has not been touched in roughly 6 years... and it absolutely masks the other "bad" logic that is leading to the bad character counting used to index into the slices that cause the panics.
@SimoneLazzaris check out the cleaner output now that such a panic is caught:
```
Welcome to the sbom-utility! Version `latest` (sbom-utility) (darwin/arm64)
===========================================================================
[INFO] Loading (embedded) default schema config file: `config.json`...
[INFO] Loading (embedded) default license policy file: `license.json`...
[INFO] Reading file (--input-file): `nats-box-49.sbom.json` ...
[INFO] Reading file (--input-revision): `nats-box-50.sbom.json` ...
[INFO] Comparing files: `nats-box-49.sbom.json` (base) to `nats-box-50.sbom.json` (revised) ...
[ERROR] panic occurred: runtime error: slice bounds out of range [2004:1743]
[ERROR] diff failed: differences between files perhaps too large.
```

In addition, the exit code is now `1` (app. error).