
compliance-trestle's People

Contributors

aj-stein-nist, alejo2995, anebula, be-code, bradh, brunomarq, butler54, compliance-trestle-1, deenine, degenaro, enikonovad, folksgl, fsuits, guyzyl, hukkinj1, imgbot[bot], jayhawk87, jeffdmgit, jpower432, jrubinstein-dev, leninmehedy, ma1h01, mab879, mrgadgil, pritamdutt, srmamit, stevemar, vikas-agarwal76


compliance-trestle's Issues

Refactor / Enhancement: Consolidating model IO into OscalBaseClass and/or pydantic

Issue description / feature objectives

As a trestle developer I would like to have one place which presents a consistent set of IO features and abstractions. Today pydantic provides IO utilities for JSON which are very good, with wrappers which help support the use of optimized JSON libraries. This functionality does not exist for XML and/or YAML; however, it could be created.

In order to streamline this, the recommendation is to consolidate functionality into OscalBaseClass, where JSON and other formats can be treated in a similar manner (a sketch follows the list below).

This will allow us to explore whether:

  1. YAML support can be built generically and potentially upstreamed to pydantic.
  2. XML support could potentially be added.
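A minimal sketch of what the consolidated IO surface could look like, assuming pydantic v1 underneath and PyYAML available (the method names oscal_read / oscal_write and the class shape are illustrative, not a settled API):

import pathlib

import yaml
from pydantic import BaseModel


class OscalBaseModel(BaseModel):
    """Hypothetical consolidated IO surface for all OSCAL models."""

    @classmethod
    def oscal_read(cls, path: pathlib.Path) -> 'OscalBaseModel':
        # Dispatch on suffix so JSON and YAML are treated in the same manner.
        if path.suffix == '.json':
            return cls.parse_file(path)
        if path.suffix in ('.yaml', '.yml'):
            return cls.parse_obj(yaml.safe_load(path.read_text()))
        raise ValueError(f'Unsupported format: {path.suffix}')

    def oscal_write(self, path: pathlib.Path) -> None:
        if path.suffix == '.json':
            path.write_text(self.json(by_alias=True, exclude_none=True))
        elif path.suffix in ('.yaml', '.yml'):
            path.write_text(yaml.safe_dump(self.dict(by_alias=True, exclude_none=True)))
        else:
            raise ValueError(f'Unsupported format: {path.suffix}')

XML support could then be added behind the same two methods without touching call sites.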

Completion Criteria

trestle.oscal.utils functionality is streamlined into OscalBaseClass

Formalize support for various formats

Issue description / feature objectives

To date the project has been a little wishy-washy on format support within the context of trestle. This is a proposal for documenting what our planned support will be for XML vs JSON vs YAML.

Context:

Reading the files in the OSCAL repo, it looks like YAML is a bit of a second-class citizen. Given this, the proposal is the following:

  1. XML is supported in the following ways:

    • For import and as an optional output for assemble
    • To be read as an external reference (e.g. via href)
    • Not supported for split / merge
  2. For split / merge both json and yaml are supported

  • JSON is created by default
  • YAML output is an optional override
  • Trestle may / MUST error when 'hybrid' use is detected

On the last point there are technically no issues, so I'm not really sure what we should do - I would just like it to look clean.

  • Users wanting to transition from JSON to YAML, or vice versa, should do so by re-importing the file trees.

Completion criteria

  • Agreed / document decision in spec and/or README.md
  • Issues created for future work items.

Demonstrate generation of dynamic models which manipulate input OSCAL objects.

Issue description / feature objectives

trestle split and trestle merge will create OSCAL objects that are non-compliant, e.g. missing mandatory fields.

Demonstrate the ability to define partial objects for use within split / merge while STILL being able to validate the documents successfully.
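One possible approach, sketched under the assumption of pydantic v1's create_model and field introspection (make_partial is a hypothetical helper): derive a 'partial' variant of a model in which every field is optional, so a fragment written out by split still validates against a real class.

from typing import Optional

from pydantic import BaseModel, create_model


def make_partial(model: type) -> type:
    # Copy each field's declared type, but make it optional with default None
    # so a document fragment missing mandatory fields still validates.
    fields = {
        name: (Optional[field.outer_type_], None)
        for name, field in model.__fields__.items()
    }
    return create_model(f'Partial{model.__name__}', **fields)

For example, make_partial(trestle.oscal.catalog.Metadata) would accept a metadata fragment that omits mandatory fields such as title, while still rejecting fields of the wrong type.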

Completion Criteria

  • Demonstration class / methods complete
  • Test class / methods complete.

Implement `trestle validate`

Issue description / feature objectives

Implement trestle validate according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • Validate contents of OSCAL model as per specifications.

Implement `trestle remove`

Issue description / feature objectives

Implement trestle remove according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • Be able to remove a sub-element of an OSCAL model from a file or model

scripts/fix_any.py does not work together with make gen-oscal

Describe the bug
scripts/fix_any.py is embedded in scripts/gen_oscal.sh, which should be run by make gen-oscal. When doing so, the path assumptions within scripts/fix_any.py fail with the following error:

Traceback (most recent call last):
  File "scripts/fix_any.py", line 123, in <module>
    with open(out_name, 'w') as out_file:
FileNotFoundError: [Errno 2] No such file or directory: 'out_trestle/oscal/ssp.py'

To Reproduce
Steps to reproduce the behavior:

  1. make gen-oscal

Expected behavior
make gen-oscal updates oscal models AND overwrites current models so it can be run cleanly.

Code signing for binary release.

Setup infrastructure to automate code signing for a binary release.

Issue description / feature objectives

We need to sign code with a digital certificate (possibly with the IBM CA root) to ensure the authenticity of code open sourced by IBM.

Completion Criteria

codesign -dv --verbose=4

returns expected Authority and Hash

Model generation as part of the CICD build process.

Issue description / feature objectives

One of the original goals of trestle is to automate the build process for keeping up to date with OSCAL as the standard evolves.

One concern I have is that our 'drift' today is inconsistent. We have not made a firm decision on 'which' version of OSCAL we use (apart from the latest) - and differences are mainly driven by when we decide that we need to mess with the OSCAL models.

I can see a few approaches that could be taken.

We change our generation script to look for a release explicitly: https://github.com/usnistgov/OSCAL/releases.
The laziest way is to do this manually in 'set and forget' mode; however, that may expose us to drift.

A second approach could be to put a check in the CICD pipeline that we are always using the 'latest' version of oscal. This could be defined one of two ways.

  1. The latest tag / release
  2. The latest in trunk

Given the way the OSCAL team manages their repo, it's possible to converge both options so the latest in trunk will be the latest tag (as they have working and release directories in the repo).

Once we have a decision here, the next question is how to deploy it. My gut feel is that the best way to check would be to run the 'update' code in the CICD pipeline and fail the check if it produces a change. That way the user would need to do an update locally before a PR could be merged.
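As a sketch of the release-pinning variant of that check (the constant name and failure handling are illustrative; the endpoint is GitHub's standard 'latest release' query):

import requests

# Hypothetical pin, kept under version control alongside the generated models.
PINNED_OSCAL_RELEASE = 'v1.0.0-milestone3'


def check_oscal_models_current() -> None:
    # Fail the CICD check if NIST has published a newer OSCAL release than our pin.
    resp = requests.get('https://api.github.com/repos/usnistgov/OSCAL/releases/latest')
    resp.raise_for_status()
    latest = resp.json()['tag_name']
    if latest != PINNED_OSCAL_RELEASE:
        raise SystemExit(f'OSCAL models are stale: pinned {PINNED_OSCAL_RELEASE}, latest {latest}')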

Completion Criteria

  • Strategy decided and documented in contributing.md
  • CICD / other changes are implemented.

Deep run time type conversion for pydantic models across different pydantic 'modules' representing OSCAL objects.

Issue description / feature objectives

The current approach for trestle is to create one module of models per OSCAL schema. OSCAL schemas are overlapping in nature, which means we have actual or near-duplicate object definitions in various modules.

Discussion with NIST team (usnistgov/OSCAL#731) is unresolved and we are unlikely to see shared models in the short term w/o changing our model structures.

This gives us a requirement: I want a simple interface to do a deep copy of pydantic models across 'module spaces'.

E.g. humans know that trestle.oscal.catalog.Metadata is equivalent to trestle.oscal.profile.Metadata and should be deep-copyable; however, pydantic's type enforcement causes errors.

Ideally what we would want to have is something similar in functionality to this:

import trestle.oscal.catalog as c
import trestle.oscal.profile as p

my_catalog: c.Catalog = catalog_from_disk_or_elsewhere

# Following line does not work:
# profile: p.Profile = p.Profile(metadata=my_catalog.metadata)

# Something like this would be nice(ish):
profile: p.Profile = p.Profile(metadata=my_catalog.metadata.cast_to(p.Metadata))
# or
profile: p.Profile = p.Profile(metadata=my_catalog.metadata.cast_to(p))
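A minimal sketch of how such a cast_to could work, assuming pydantic v1: serialise to a plain dict and let the target class re-validate it. Because pydantic validation recurses into nested objects this handles the deep case, and inconsistent fields raise a ValidationError by default (matching note 2 below).

from pydantic import BaseModel


def cast_to(source: BaseModel, target_class: type) -> BaseModel:
    # Round-trip through a dict (honouring field aliases) so the target
    # class re-validates the whole tree, nested objects included.
    return target_class.parse_obj(source.dict(by_alias=True))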

Completion Criteria

Deep copy routine completed for a generic use case, including associated unit tests.

Expected Behavior

See above plus the notes below

  1. It should be recursive - field introspection in pydantic should allow the exploration of the underlying objects.
  2. By default, inconsistent fields should throw exceptions - allowing users to specially program / handle them.
  3. It may be worth exploring optional behaviour which is 'permissive' / opportunistic.
  4. At this point there should be no assumptions on transformation in the workflow. Generic copy capabilities come first.

Actual Behavior

  • None - users have to hand-code deep copies at this point in time.

parser.wrap_for_output fails to insert object into field object correctly due to incorrect name reference

Describe the bug
Calling wrap_for_output fails for classes which have hyphens in their names.

   wrapper = parser.wrap_for_output(tdn)
  File "/Users/chris/opt/anaconda3/lib/python3.7/site-packages/trestle/core/parser.py", line 144, in wrap_for_output
    wrapped_model = wrapper_model(**{class_to_oscal(class_name, 'field'): model})
  File "pydantic/main.py", line 346, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for TargetDefinition


Implement `trestle import`

Issue description / feature objectives

Implement trestle import according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • Contents of the OSCAL model in the imported file are decomposed and placed in the expected folder as per the specification.

Make a decision: Use of mypy validation within the project

Mypy may make our development cleaner / safer; however, it does require a concerted effort across the project.

  1. Demonstrate whether or not mypy is viable, including validation (including for IDEs).

  2. If so, upstream and ensure mypy checking is part of the devops process.

Add trestle duplicate to CLI spec doc

Issue description / feature objectives

In discussion, trestle duplicate appears to drive foundational behaviour for other commands. Build out docs for trestle duplicate.

Completion Criteria

  • Complete documentation
  • Create issue for implementation.

Evaluate tooling options for converting XCCDF results into JSON and/or converting the XCCDF schema to JSON

XCCDF is another NIST standard for representing compliance information as part of SCAP (https://csrc.nist.gov/Projects/Security-Content-Automation-Protocol/Specifications/xccdf).

The definition of XCCDF is XML-only.

Evaluate options for representing XCCDF in high fidelity as JSON, including:

  1. Creating a derivative json schema
  2. Translation tools

Ideally these tools should be callable from trestle (i.e. Python-based).

Even more ideally: this should be represented as pydantic models.

Provide initial feedback then we will determine a path forward.

Support and testing for windows based running of trestle.

Issue description / feature objectives

This is an optimistic issue; however, I think we should address this now before the refactors get too big (as in, it may still be containable now).

Compliance officers may be using Windows. We should check, at least for the editing functionality, whether installs and builds work properly on Windows. Based on the experience of @fsuits this should be true now.

Completion Criteria

Implement `trestle init`

Issue description / feature objectives

Implement trestle init according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • Directory structure created for trestle project according to specification.

Implement `trestle create`

Issue description / feature objectives

Implement trestle create according to specifications.

Completion Criteria

Complete implementation and testing for all trestle create subcommands.

Expected Behavior

  • Sample content created for each trestle create subcommand.

Implement `trestle split`

Issue description / feature objectives

Implement trestle split according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • Be able to split a subcomponent of an OSCAL model, including simple JSON objects, arrays, and objects of type additionalProperties.

Create a binary distribution of trestle for air-gapped environments / environments which have 'allowed software' lists.

Compliance is most important in sensitive environments. One property of those environments is that applications (and source code) typically must be vetted before use (e.g. https://www.cyber.gov.au/acsc/view-all-content/essential-eight/essential-eight-explained)

  • Create an approach to creating binary distributions of trestle such that one object (not trestle and all dependencies) needs to be approved.

  • Previous reviews had identified the approach taken by the aws cli (see quoted content below) as a good approach.

  • Explore whether code can also be signed.

Objectives:

We need to package the trestle CLI so that it is easier for end users to download and use, rather than going through the usual complex process of setting up Python requirements as outlined here: https://packaging.python.org/tutorials/installing-packages/#installing-requirements

As said here: "Python's flexibility is why the first step in every Python project must be to think about the project's audience and the corresponding environment where the project will run. It might seem strange to think about packaging before writing code, but this process does wonders for avoiding future headaches." (https://packaging.python.org/overview/)

Findings

AWS CLI is the best role model for us, and we can follow their approach to delivering a Python-based CLI to end users:

AWS CLI GitHub: https://github.com/aws/aws-cli
AWS CLI has the install script and self-contained installer script here: https://github.com/aws/aws-cli/tree/develop/scripts
Mac: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html
Linux: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html
Windows: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

Create codecov.yml which provides realistic coverage metrics

The automatically generated classes in trestle.oscal should not need to be individually tested to 100% coverage. UT coverage for any behaviour w.r.t. pydantic should occur over our OscalBaseModel, which dictates pydantic behavior.

  • Create a codecov.yml file which realistically computes coverage.

  • Potentially move trestle.oscal.base_model such that trestle.oscal is only generated code.

  • Put in place minimum coverage metrics as well with codecov.

Bug: Conflicting dependencies between markdown-it-py and attrs

Describe the bug
When following the instructions in the README to install Trestle from source, the following error is thrown after trying to install dependencies:
ERROR: markdown-it-py 0.4.9 has requirement attrs~=19.3, but you'll have attrs 20.2.0 which is incompatible.

To Reproduce
Steps to reproduce the behavior:

  1. git clone https://github.com/IBM/compliance-trestle.git
  2. cd compliance-trestle
  3. python3 -m venv venv
  4. . ./venv/bin/activate
  5. pip install -q -e ".[dev]" --upgrade --upgrade-strategy eager

Expected behavior
All dependencies for dev should be installed without errors.

Review parser functionality as parser.parse_model and parser.to_full_model_name have overlapping functionality

Issue description / feature objectives

In reviewing the functionality in #60 I noticed that the functionality in that PR is a little confusing. Specifically, is there a situation where we ever need to get the full module name as provided by to_full_model_name rather than returning the class itself?

We may be able to simplify processes and eliminate parser.parse_model by returning the class rather than the class path.

Also, to_full_model_name makes some pretty strong assumptions that do not necessarily hold, especially when we have split models.

Create optimisation function / flag for trestle directory trees.

Issue description / feature objectives

The default behaviour of trestle split and trestle merge is that they do not create a single file within its own directory.

e.g.
trestle split -e metadata
on

catalog.json

would result in

catalog.json
metadata.json

NOT

catalog.json
metadata/metadata.json

However, it's possible a user can end up in a situation where they DO have single directories like this based on using different contexts to perform different operations.

E.g.
running trestle merge -e version
from

metadata.json
version.json

would merge in the version.json file. However, looking at the root directory for the catalog would give you a tree such as

catalog.json
metadata/metadata.json

which is unnecessary. trestle optimise would take a root file and optimise the directory tree below it.

e.g. trestle optimise -f catalog.json on the above directory would result in

catalog.json
metadata.json

Of course this would need to be recursive; a rough sketch is below.
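A rough sketch of that recursion, assuming the convention shown above (a directory collapses when it contains only a single file named after itself; the function name is illustrative):

import pathlib
import shutil


def optimise_tree(root: pathlib.Path) -> None:
    # Collapse dirs like metadata/metadata.json into metadata.json, bottom-up.
    for child in root.iterdir():
        if child.is_dir():
            optimise_tree(child)
            contents = list(child.iterdir())
            if len(contents) == 1 and contents[0].is_file() and contents[0].stem == child.name:
                shutil.move(str(contents[0]), str(root / contents[0].name))
                child.rmdir()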

Completion

  • Update and agree on spec
  • Implement appropriate commands

Pydantic appears to not be correctly enforcing types for some structures.

Issue description / feature objectives

In investigating some typing behaviour I noticed that there are some issues with the pydantic models. When we have a structure such as

{
  "named-uuid-element_1": { ...component... },
  "named-uuid-element_2": { ...component... }
}

this results in Any-typed fields in the generated models (see the two examples below):

class ComponentDefinition(BaseModel):
    metadata: Metadata
    import_component_definitions: Optional[List[ImportComponentDefinition]] = Field(
        None, alias='import-component-definitions'
    )
    components: Optional[Dict[str, Any]] = None
    capabilities: Optional[Dict[str, Any]] = None
    back_matter: Optional[BackMatter] = Field(None, alias='back-matter')
class InventoryItem(BaseModel):
    asset_id: str = Field(
        ...,
        alias='asset-id',
        description='Organizational asset identifier that is unique in the context of the system. This may be a reference to the identifier used in an asset tracking system or a vulnerability scanning tool.',
        title='Asset Identifier',
    )
    description: Description
    properties: Optional[List[Prop]] = None
    annotations: Optional[List[Annotation]] = None
    links: Optional[List[Link]] = None
    responsible_parties: Optional[Dict[str, Any]] = Field(
        None, alias='responsible-parties'
    )
    implemented_components: Optional[Dict[str, Any]] = Field(
        None, alias='implemented-components'
    )
    remarks: Optional[Remarks] = None

Expected behavior would be something closer to this:

class ComponentDefinition(BaseModel):
    metadata: Metadata
    import_component_definitions: Optional[List[ImportComponentDefinition]] = Field(
        None, alias='import-component-definitions'
    )
    components: Optional[Dict[str, Component]] = None
    capabilities: Optional[Dict[str, Capability]] = None
    back_matter: Optional[BackMatter] = Field(None, alias='back-matter')

Completion Criteria

Models are correctly generated.

XCCDF and OPA results transformation to OSCAL findings results

Issue description / feature objectives

With most Cloud APIs treating JSON as a first-class citizen, we need a consistent mechanism for converting 'test results' into OSCAL.

The current assumption is that assessment-results in OSCAL is the appropriate location. The question is how we capture 'rules' and other objects that may or may not be 1:1 with an objective and represent an individual result of a technical implementation.

Use case a: XCCDF, where 1 'rule' == 1 objective.

Use case b: NIST, where N rules == 1 objective.

For this we need to test our ability to map a limited subset of 'results' formats into OSCAL findings.

  1. SCAP XCCDF test results from the compliance operator (or similar); see attachment compliance_operator_AssessmentResult 7-15.json.zip
    1a) OpenSCAP XCCDF results running on a Linux OS (CentOS / RHEL)

  2. OPA inspired / generated results (https://github.ibm.com/cocoa/evidence-summary)

  3. IBM S&CC results (based on spreadsheets).

Completion Criteria

Implement `trestle assemble`

Issue description / feature objectives

Implement trestle assemble according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • Assemble all subcomponents of an OSCAL model and place the resulting file under the dist folder as per specifications.

Raise an issue with datamodel-code-generator about strange behavior

Issue description / feature objectives

#12 created a temporary fix for pydantic model creation. The fix is brittle; the underlying generation in datamodel-code-generator needs to be fixed.

Completion Criteria

  • Issue raised with datamodel-code-generator
  • Ideally create a fix; however, reporting will be sufficient.
  • If the underlying issue is fixed, remove the hacky script from #12.

Implement `trestle merge`

Issue description / feature objectives

Implement trestle merge according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • The inverse behavior of trestle split.

  • Example of expected behavior:

SCENARIO #1

cd nist80053
├── catalog.json
├── catalog
│   ├── groups.json
│   ├── metadata.json
│   ├── groups
│   │   ├── 0000__group.json
│   │   ├── 0001__group.json
│   │   ├── 0002__group.json
│   │   ├── 0003__group.json
│   │   ├── 0004__group.json
│   │   ├── 0005__group.json
│   │   ├── 0006__group.json
│   │   ├── 0007__group.json
│   │   ├── 0008__group.json
│   │   ├── 0009__group.json
│   │   ├── 0010__group.json
│   │   ├── 0011__group.json
│   │   ├── 0012__group.json
│   │   ├── 0013__group.json
│   │   ├── 0014__group.json
│   │   ├── 0015__group.json
│   │   ├── 0016__group.json
│   │   ├── 0017__group.json
│   │   ├── 0018__group.json
│   │   ├── 0019__group.json
│   ├── metadata
│            ├── parties.json
│            ├── parties
│                ├── 0000__party.json

----

cd catalog/groups
/catalogs/nist80053/catalog/groups$ tsl merge -e *
> Invalid. Merge path needs to have at least 2 parts.

----

cd catalog
/catalogs/nist80053/catalog$ tsl merge -e groups.*
├── catalog.json
├── catalog
│   ├── groups.json
│   ├── metadata.json

or

/catalogs/nist80053/catalog$ tsl merge -e groups
> Invalid. Merge path needs to have at least 2 parts.

or

/catalogs/nist80053/catalog$ tsl merge -e metadata.*
├── catalog.json
├── catalog
│   ├── groups.json
│   ├── metadata.json
│   ├── groups
│   │   ├── 0000__group.json
│   │   ├── 0001__group.json
│   │   ├── 0002__group.json
│   │   ├── 0003__group.json
│   │   ├── 0004__group.json
│   │   ├── 0005__group.json
│   │   ├── 0006__group.json
│   │   ├── 0007__group.json
│   │   ├── 0008__group.json
│   │   ├── 0009__group.json
│   │   ├── 0010__group.json
│   │   ├── 0011__group.json
│   │   ├── 0012__group.json
│   │   ├── 0013__group.json
│   │   ├── 0014__group.json
│   │   ├── 0015__group.json
│   │   ├── 0016__group.json
│   │   ├── 0017__group.json
│   │   ├── 0018__group.json
│   │   ├── 0019__group.json

or

cd ..
/catalogs/nist80053$ tsl merge -e catalog.metadata
├── catalog.json
├── catalog
│   ├── groups.json
│   ├── groups
│   │   ├── 0000__group.json
│   │   ├── 0001__group.json
│   │   ├── 0002__group.json
│   │   ├── 0003__group.json
│   │   ├── 0004__group.json
│   │   ├── 0005__group.json
│   │   ├── 0006__group.json
│   │   ├── 0007__group.json
│   │   ├── 0008__group.json
│   │   ├── 0009__group.json
│   │   ├── 0010__group.json
│   │   ├── 0011__group.json
│   │   ├── 0012__group.json
│   │   ├── 0013__group.json
│   │   ├── 0014__group.json
│   │   ├── 0015__group.json
│   │   ├── 0016__group.json
│   │   ├── 0017__group.json
│   │   ├── 0018__group.json
│   │   ├── 0019__group.json

SCENARIO #2

----
├── catalog.json
├── catalog
│   ├── groups.json
│   ├── metadata.json
│   ├── groups
│   │   ├── 0000__group.json
│   │   ├── 0001__group.json
│   │   ├── 0002__group.json
│   │   ├── 0003__group.json
│   │   ├── 0004__group.json
│   │   ├── 0005__group.json
│   │   ├── 0006__group.json
│   │   ├── 0007__group.json
│   │   ├── 0008__group.json
│   │   ├── 0009__group.json
│   │   ├── 0010__group.json
│   │   ├── 0011__group.json
│   │   ├── 0012__group.json
│   │   ├── 0013__group.json
│   │   ├── 0014__group.json
│   │   ├── 0015__group.json
│   │   ├── 0016__group.json
│   │   ├── 0017__group.json
│   │   ├── 0018__group.json
│   │   ├── 0019__group.json
│   ├── metadata
│            ├── parties.json
│            ├── parties
│                ├── 0000__party.json
----

/catalogs/nist80053$ tsl merge -e catalog.*
├── catalog.json
-----
/catalogs/nist80053$ tsl merge -e catalog
> Invalid. Merge path needs to have at least 2 parts.
-----
/catalogs/nist80053$ tsl merge -e *
> Invalid. Merge path needs to have at least 2 parts.

Intra-document validation for OSCAL artifacts.

Issue description / feature objectives

The current pydantic models do not sufficiently describe the constraints required by an OSCAL schema.

NIST defines two sets of IDs:

  1. UUIDs: UUIDs must be globally unique within their object type. An acceptable solution is to test whether
    a) UUIDs conform to the required schema
    b) UUIDs are unique (and error identifying the conflict if it exists)

Note: Many UUIDs are optional - I think we should have an option to populate more UUIDs as required.

  2. NIST also defines ID fields. ID fields are scoped within the referenced document, e.g. two catalogs can have colliding ID fields. Suggest two tiers of validation:
    a) Compulsory: for a given object type (e.g. control) no id fields collide
    b) Best-practice: for a given schema document where IDs are defined, no IDs collide

Note: (1) and (2) explicitly refer to defined UUIDs / IDs and not references to UUIDs.

The above functionality should be generic such that it can operate on any OSCAL object, presuming uuid and id are special fields that exist. A sketch of a generic UUID uniqueness check is below.
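A sketch of the generic UUID check, assuming pydantic v1 dict serialisation and treating uuid as a special key wherever it appears (the helper name is illustrative):

from collections import Counter

from pydantic import BaseModel


def duplicate_uuids(model: BaseModel) -> list:
    # Return every uuid value that appears more than once anywhere in the model.
    found = []

    def walk(node):
        # Recurse through nested dicts and lists, collecting 'uuid' values.
        if isinstance(node, dict):
            for key, value in node.items():
                if key == 'uuid':
                    found.append(value)
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(model.dict(by_alias=True))
    return [value for value, count in Counter(found).items() if count > 1]

The same walk could collect id fields per object type for the two-tier ID validation described above.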

Completion Criteria

  • Completed functionality
  • Unit test coverage across all new code
  • Tested for catalog, profile, target, and SSP at a minimum
  • This functionality should work for both assembled and 'distributed' artifacts
  • Validation should be able to operate over all files in a repository

Minimize duplication in referencing model names as strings

Issue description / feature objectives

There is currently significant duplication in the code when referencing model names. We should probably have a dictionary in a constants file that maps the referenced model names to the pydantic model objects, and use that dictionary for all the trestle commands (a sketch follows the list below), such as:

  • during the creation of the directory structure in trestle init
  • during instantiation of the pydantic model object
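A sketch of what such a constants module could contain; the dictionary name is illustrative, and the model classes are the generated ones referenced elsewhere in this document:

# Hypothetical constants module: one mapping from model name to pydantic class.
import trestle.oscal.catalog
import trestle.oscal.profile

MODEL_TYPES = {
    'catalog': trestle.oscal.catalog.Catalog,
    'profile': trestle.oscal.profile.Profile,
    # ...one entry per supported top-level OSCAL model
}

trestle init could then iterate over MODEL_TYPES to create the directory structure, and other commands could look up the class directly instead of rebuilding module paths from strings.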

Completion Criteria

  • Have a single place for referenced model names that is used everywhere else in the code

Implement `trestle add`

Issue description / feature objectives

Implement trestle add according to specifications.

Completion Criteria

Complete implementation and testing.

Expected Behavior

  • Add a subcomponent to an existing OSCAL model as per specifications.
