ONAP AAI UI

Projects that follow the best practices below can voluntarily self-certify and show that they've achieved an Open Source Security Foundation (OpenSSF) best practices badge.

There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different companies. To earn a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be met OR unmet (we want them considered at least). If you want to enter justification text as a generic comment, instead of being a rationale that the situation is acceptable, start the text block with '//' followed by a space. Feedback is welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.

We gladly provide the information in several locales; however, if there is any conflict or inconsistency between the translations, the English version is the authoritative version.
If this is your project, please show your badge status on your project page! The badge status looks like this: Badge level for project 1737 is passing. Here is how to embed it:
You can show your badge status by embedding this in your markdown file:
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/1737/badge)](https://www.bestpractices.dev/projects/1737)
or by embedding this in your HTML:
<a href="https://www.bestpractices.dev/projects/1737"><img src="https://www.bestpractices.dev/projects/1737/badge"></a>


These are the Silver level criteria. You can also view the Passing or Gold level criteria.

        

 Basics 16/17

 Change Control 1/1

  • Previous versions


    The project MUST maintain the most often used older versions of the product or provide an upgrade path to newer versions. If the upgrade path is difficult, the project MUST document how to perform the upgrade (e.g., the interfaces that have changed and detailed suggested steps to help upgrade). [maintenance_or_update]

    All major releases are tagged in Gerrit, and the artifacts are stored with the release information on ONAP Nexus, so we can access all old versions of the artifacts. If and when an upgrade requires certain steps to be followed, they are added to the release documents as needed.


 Reporting 3/3

  • Bug-reporting process


    The project MUST use an issue tracker for tracking individual issues. [report_tracker]
  • Vulnerability report process


    The project MUST give credit to the reporter(s) of all vulnerability reports resolved in the last 12 months, except for the reporter(s) who request anonymity. If there have been no vulnerabilities resolved in the last 12 months, select "not applicable" (N/A). (URL required) [vulnerability_report_credit]

    Vulnerabilities can be reported using the link https://wiki.onap.org/pages/viewpage.action?pageId=6591711. Currently we do not have any vulnerabilities reported, but the wiki page explains how to report a vulnerability and how to report anonymously if you do not want credit for it.



    The project MUST have a documented process for responding to vulnerability reports. (URL required) [vulnerability_response_process]
    This is strongly related to vulnerability_report_process, which requires that there be a documented way to report vulnerabilities. It is also related to vulnerability_report_response, which requires a response to vulnerability reports within a certain time frame.

    Vulnerability handling is documented in https://wiki.onap.org/pages/viewpage.action?pageId=6591711


 Quality 17/19

  • Coding standards


    The project MUST identify the specific coding style guides for the primary languages it uses, and require that contributions generally comply with it. (URL required) [coding_standards]
    In most cases this is done by referring to some existing style guide(s), possibly listing differences. These style guides can include ways to improve readability and ways to reduce the likelihood of defects (including vulnerabilities). Many programming languages have one or more widely-used style guides. Examples of style guides include Google's style guides and SEI CERT Coding Standards.

    The project MUST automatically enforce its selected coding style(s) if there is at least one FLOSS tool that can do so in the selected language(s). [coding_standards_enforced]
    This MAY be implemented using static analysis tool(s) and/or by forcing the code through code reformatters. In many cases the tool configuration is included in the project's repository (since different projects may choose different configurations). Projects MAY allow style exceptions (and typically will); where exceptions occur, they MUST be rare and documented in the code at their locations, so that these exceptions can be reviewed and so that tools can automatically handle them in the future. Examples of such tools include ESLint (JavaScript), Rubocop (Ruby), and devtools check (R).

    We currently do not have a style enforcer configured for the Java applications. Checkstyle (http://checkstyle.sourceforge.net/) is being looked into; it is an enforcer that works with Maven.


  • Working build system


    Build systems for native binaries MUST honor the relevant compiler and linker (environment) variables passed in to them (e.g., CC, CFLAGS, CXX, CXXFLAGS, and LDFLAGS) and pass them to compiler and linker invocations. A build system MAY extend them with additional flags; it MUST NOT simply replace provided values with its own. If no native binaries are being generated, select "not applicable" (N/A). [build_standard_variables]
    It should be easy to enable special build features like Address Sanitizer (ASAN), or to comply with distribution hardening best practices (e.g., by easily turning on compiler flags to do so).

    The application does not create native binaries. (Some of the libraries it depends on do, but those are external.)



    The build and installation system SHOULD preserve debugging information if they are requested in the relevant flags (e.g., "install -s" is not used). If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_preserve_debug]
    E.g., setting CFLAGS (C) or CXXFLAGS (C++) should create the relevant debugging information if those languages are used, and they should not be stripped during installation. Debugging information is needed for support and analysis, and also useful for measuring the presence of hardening features in the compiled binaries.

    The application does not create native binaries. (Some of the libraries it depends on do, but those are external.)



    The build system for the software produced by the project MUST NOT recursively build subdirectories if there are cross-dependencies in the subdirectories. If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_non_recursive]
    The project build system's internal dependency information needs to be accurate, otherwise, changes to the project may not build correctly. Incorrect builds can lead to defects (including vulnerabilities). A common mistake in large build systems is to use a "recursive build" or "recursive make", that is, a hierarchy of subdirectories containing source files, where each subdirectory is independently built. Unless each subdirectory is fully independent, this is a mistake, because the dependency information is incorrect.

    The application does not create native binaries. (Some of the libraries it depends on do, but those are external.)



    The project MUST be able to repeat the process of generating information from source files and get exactly the same bit-for-bit result. If no building occurs (e.g., scripting languages where the source code is used directly instead of being compiled), select "not applicable" (N/A). [build_repeatable]
    GCC and clang users may find the -frandom-seed option useful; in some cases, this can be resolved by forcing some sort order. More suggestions can be found at the reproducible build site.

    All releases are tagged in Gerrit (git), and the builds are controlled using Jenkins. By providing the git tag information, the same image can be built over and over again with the same bit-for-bit result.


  • Installation system


    The project MUST provide a way to easily install and uninstall the software produced by the project using a commonly-used convention. [installation_common]
    Examples include using a package manager (at the system or language level), "make install/uninstall" (supporting DESTDIR), a container in a standard format, or a virtual machine image in a standard format. The installation and uninstallation process (e.g., its packaging) MAY be implemented by a third party as long as it is FLOSS.

    All packages are delivered either as a JAR artifact or a Docker image. Maven artifacts can be removed using the POM file; in the case of Docker, unwanted containers can be deleted, and the orchestration in Kubernetes can be controlled to exclude specific Docker images.



    The installation system for end-users MUST honor standard conventions for selecting the location where built artifacts are written to at installation time. For example, if it installs files on a POSIX system it MUST honor the DESTDIR environment variable. If there is no installation system or no standard convention, select "not applicable" (N/A). [installation_standard_variables]

    The compiled Docker images and JAR files can be installed and used as the user sees fit. Both run on the JVM or in Docker, so there is no need to select installation locations.



    The project MUST provide a way for potential developers to quickly install all the project results and support environment necessary to make changes, including the tests and test environment. This MUST be performed with a commonly-used convention. [installation_development_quick]
    This MAY be implemented using a generated container and/or installation script(s). External dependencies would typically be installed by invoking system and/or language package manager(s), per external_dependencies.

    All the components require only Java and Maven for a developer to quickly install and test them. Even for deployment using OOM, given the right amount of resources, the full AAI/ONAP suite can be deployed in less than a day. The steps are documented in https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html


  • Externally-maintained components


    The project MUST list external dependencies in a computer-processable way. (URL required) [external_dependencies]
    Typically this is done using the conventions of package manager and/or build system. Note that this helps implement installation_development_quick.

    Projects MUST monitor or periodically check their external dependencies (including convenience copies) to detect known vulnerabilities, and fix exploitable vulnerabilities or verify them as unexploitable. [dependency_monitoring]
    This can be done using an origin analyzer / dependency checking tool / software composition analysis tool such as OWASP's Dependency-Check, Sonatype's Nexus Auditor, Synopsys' Black Duck Software Composition Analysis, and Bundler-audit (for Ruby). Some package managers include mechanisms to do this. It is acceptable if the components' vulnerability cannot be exploited, but this analysis is difficult and it is sometimes easier to simply update or fix the part.

    A Nexus/Sonar scan is run on all the projects on a weekly basis.



    The project MUST either:
    1. make it easy to identify and update reused externally-maintained components; or
    2. use the standard components provided by the system or programming language.
    Then, if a vulnerability is found in a reused component, it will be easy to update that component. [updateable_reused_components]
    A typical way to meet this criterion is to use system and programming language package management systems. Many FLOSS programs are distributed with "convenience libraries" that are local copies of standard libraries (possibly forked). By itself, that's fine. However, if the program *must* use these local (forked) copies, then updating the "standard" libraries as a security update will leave these additional copies still vulnerable. This is especially an issue for cloud-based systems; if the cloud provider updates their "standard" libraries but the program won't use them, then the updates don't actually help. See, e.g., "Chromium: Why it isn't in Fedora yet as a proper package" by Tom Callaway.

    External components are maintained through Maven. The user can get a list of all included components using the Maven dependency tree (mvn dependency:tree) and can update or reuse them as they see fit.



    The project SHOULD avoid using deprecated or obsolete functions and APIs where FLOSS alternatives are available in the set of technology it uses (its "technology stack") and to a supermajority of the users the project supports (so that users have ready access to the alternative). [interfaces_current]

    We avoid depending on deprecated/obsolete functions.


  • Automated test suite


    An automated test suite MUST be applied on each check-in to a shared repository for at least one branch. This test suite MUST produce a report on test success or failure. [automated_integration_testing]
    This requirement can be viewed as a subset of test_continuous_integration, but focused on just testing, without requiring continuous integration.

    Automated test suites are run every time before code is merged. A check-in cannot pass without Jenkins posting a +1 on the review.



    The project MUST add regression tests to an automated test suite for at least 50% of the bugs fixed within the last six months. [regression_tests_added50]

    When regressions occur, we add tests for them.



    The project MUST have FLOSS automated test suite(s) that provide at least 80% statement coverage if there is at least one FLOSS tool that can measure this criterion in the selected language. [test_statement_coverage80]
    Many FLOSS tools are available to measure test coverage, including gcov/lcov, Blanket.js, Istanbul, JCov, and covr (R). Note that meeting this criterion is not a guarantee that the test suite is thorough, instead, failing to meet this criterion is a strong indicator of a poor test suite.

    We use Sonar (https://sonar.onap.org/about) to measure code coverage. Code coverage as of the date of filing this report (2018-09-19): sparky-be: 48.7%, data-router: 51.5%, router-core: 69.2%, search-data-service: 54%.


  • New functionality testing


    The project MUST have a formal written policy that as major new functionality is added, tests for the new functionality MUST be added to an automated test suite. [test_policy_mandated]

    Contribution guidelines for development are recorded in https://wiki.onap.org/display/DW/Development+Procedures+and+Policies



    The project MUST include, in its documented instructions for change proposals, the policy that tests are to be added for major new functionality. [tests_documented_added]
    However, even an informal rule is acceptable as long as the tests are being added in practice.
  • Warning flags


    Projects MUST be maximally strict with warnings in the software produced by the project, where practical. [warnings_strict]
    Some warnings cannot be effectively enabled on some projects. What is needed is evidence that the project is striving to enable warning flags where it can, so that errors are detected early.

    The build systems run the compilation with the test flag enabled by default, so any failure in the test cases fails the CI run and the merge request.


 Security 11/13

  • Secure development knowledge


    The project MUST implement secure design principles (from "know_secure_design"), where applicable. If the project is not producing software, select "not applicable" (N/A). [implement_secure_design]
    For example, the project results should have fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default). They should also have complete mediation (every access that might be limited must be checked for authority and be non-bypassable). Note that in some cases principles will conflict, in which case a choice must be made (e.g., many mechanisms can make things more complex, contravening "economy of mechanism" / keep it simple).

    The project strives to implement secure design principles.


  • Use basic good cryptographic practices

    Note that some software does not need to use cryptographic mechanisms. If your project produces software that (1) includes, activates, or enables encryption functionality, and (2) might be released from the United States (US) to outside the US or to a non-US-citizen, you may be legally required to take a few extra steps. Typically this just involves sending an email. For more information, see the encryption section of Understanding Open Source Technology & US Export Controls.

    The default security mechanisms within the software produced by the project MUST NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH). [crypto_weaknesses]
    Concerns about CBC mode in SSH are discussed in CERT: SSH CBC vulnerability.

    These applications do not encrypt or decrypt any of the data.



    The project SHOULD support multiple cryptographic algorithms, so users can quickly switch if one is broken. Common symmetric key algorithms include AES, Twofish, and Serpent. Common cryptographic hash algorithm alternatives include SHA-2 (including SHA-224, SHA-256, SHA-384 AND SHA-512) and SHA-3. [crypto_algorithm_agility]

    Certificates are managed through the AAF microservice, which is deployed with the ONAP suite.



    The project MUST support storing authentication credentials (such as passwords and dynamic tokens) and private cryptographic keys in files that are separate from other information (such as configuration files, databases, and logs), and permit users to update and replace them without code recompilation. If the project never processes authentication credentials and private cryptographic keys, select "not applicable" (N/A). [crypto_credential_agility]

    User authentication is taken care of by the Portal microservice, which is deployed with the ONAP suite.
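
    As a generic illustration of this criterion (not the actual Portal/AAF configuration), credentials such as a keystore path and password can be kept in a properties file outside the code base so they can be rotated without recompilation. The file path and property names below are hypothetical:

    // Hypothetical sketch: credentials are read from an external properties file
    // (path and key names are examples, not the actual ONAP/AAI configuration),
    // so they can be replaced without recompiling the application.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class CredentialLoader {

        /** Loads keystore location and password from a file outside the code base. */
        public static Properties loadCredentials(String credentialFile) throws IOException {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(credentialFile)) {
                props.load(in);   // e.g. keystore.path=/opt/app/config/auth/keystore.jks
            }                     //      keystore.password=********
            return props;
        }

        public static void main(String[] args) throws IOException {
            // The path would typically come from an environment variable or a startup argument.
            Properties creds = loadCredentials("/opt/app/config/auth/credentials.properties");
            System.out.println("Keystore location: " + creds.getProperty("keystore.path"));
        }
    }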



    The software produced by the project SHOULD support secure protocols for all of its network communications, such as SSHv2 or later, TLS1.2 or later (HTTPS), IPsec, SFTP, and SNMPv3. Insecure protocols such as FTP, HTTP, telnet, SSLv3 or earlier, and SSHv1 SHOULD be disabled by default, and only enabled if the user specifically configures it. If the software produced by the project does not support network communications, select "not applicable" (N/A). [crypto_used_network]

    The projects support secure TLS and HTTPS. Insecure protocols are disabled by default in these applications; they can be enabled only if the user specifically configures them.
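
    A minimal sketch of how such a restriction can be expressed in Java (the actual connector configuration of these applications may differ): only TLSv1.2 is offered on the socket, so SSLv3 and earlier TLS versions cannot be negotiated.

    // Illustrative sketch only (not the project's actual server configuration):
    // restricting a JSSE server socket to TLSv1.2 so that older SSL/TLS
    // protocol versions cannot be negotiated.
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLServerSocket;
    import javax.net.ssl.SSLServerSocketFactory;

    public class TlsOnlyServer {
        public static SSLServerSocket createTls12ServerSocket(int port) throws Exception {
            SSLContext context = SSLContext.getInstance("TLSv1.2");
            context.init(null, null, null);                        // default key/trust managers
            SSLServerSocketFactory factory = context.getServerSocketFactory();
            SSLServerSocket socket = (SSLServerSocket) factory.createServerSocket(port);
            socket.setEnabledProtocols(new String[] {"TLSv1.2"});   // SSLv3/TLS1.0/1.1 never offered
            return socket;
        }
    }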



    The software produced by the project SHOULD, if it supports or uses TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. If the software does not use TLS, select "not applicable" (N/A). [crypto_tls12]

    The products support TLS version 1.2.



    The software produced by the project MUST, if it supports TLS, perform TLS certificate verification by default when using TLS, including on subresources. If the software does not use TLS, select "not applicable" (N/A). [crypto_certificate_verification]

    Certificate validation is done before answering any calls.
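
    For illustration, a Java HTTPS client that keeps the JDK defaults performs this verification automatically; the sketch below simply avoids overriding the trust manager or hostname verifier and uses a hypothetical endpoint.

    // Minimal sketch, for illustration: an HTTPS client call that relies on the
    // JDK's default trust manager and hostname verifier. Nothing is overridden,
    // so certificate-chain and hostname verification happen on every connection.
    import java.io.IOException;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class VerifiedClient {
        public static int fetchStatus(String endpoint) throws IOException {
            HttpsURLConnection conn = (HttpsURLConnection) new URL(endpoint).openConnection();
            // No custom TrustManager or HostnameVerifier is installed here;
            // an untrusted or mismatched certificate causes an SSLHandshakeException.
            return conn.getResponseCode();
        }
    }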



    The software produced by the project MUST, if it supports TLS, perform certificate verification before sending HTTP headers with private information (such as secure cookies). If the software does not use TLS, select "not applicable" (N/A). [crypto_verification_private]

    The certificate is validated before sending HTTP headers or private information.


  • Secure release


    The project MUST cryptographically sign releases of the project results intended for widespread use, and there MUST be a documented process explaining to users how they can obtain the public signing keys and verify the signature(s). The private key for these signature(s) MUST NOT be on site(s) used to directly distribute the software to the public. If releases are not intended for widespread use, select "not applicable" (N/A). [signed_releases]
    The project results include both source code and any generated deliverables where applicable (e.g., executables, packages, and containers). Generated deliverables MAY be signed separately from source code. These MAY be implemented as signed git tags (using cryptographic digital signatures). Projects MAY provide generated results separately from tools like git, but in those cases, the separate results MUST be separately signed.


    It is SUGGESTED that in the version control system, each important version tag (a tag that is part of a major release, minor release, or fixes publicly noted vulnerabilities) be cryptographically signed and verifiable as described in signed_releases. [version_tags_signed]

  • Other security issues


    The project results MUST check all inputs from potentially untrusted sources to ensure they are valid (an *allowlist*), and reject invalid inputs, if there are any restrictions on the data at all. [input_validation]
    Note that comparing input against a list of "bad formats" (aka a *denylist*) is normally not enough, because attackers can often work around a denylist. In particular, numbers are converted into internal formats and then checked if they are between their minimum and maximum (inclusive), and text strings are checked to ensure that they are valid text patterns (e.g., valid UTF-8, length, syntax, etc.). Some data may need to be "anything at all" (e.g., a file uploader), but these would typically be rare.

    The project strives to validate all input to functions. The inputs provided to the services are checked against existing models, such as the OXM model or the search abstraction layer, and only valid inputs are allowed to pass through.
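
    A hedged sketch of what such an allowlist check can look like in Java; the attribute names and syntax rule below are invented for illustration and are not the project's OXM model.

    // Hypothetical allowlist check: incoming query parameters are accepted only
    // if they match a known model attribute and a restrictive syntax; everything
    // else is rejected.
    import java.util.Set;
    import java.util.regex.Pattern;

    public class InputValidator {
        // In the real services the allowed names would come from the OXM / search models.
        private static final Set<String> ALLOWED_ATTRIBUTES =
                Set.of("hostname", "vnf-id", "entity-type");
        private static final Pattern SAFE_VALUE = Pattern.compile("^[A-Za-z0-9._-]{1,64}$");

        public static String validate(String attribute, String value) {
            if (!ALLOWED_ATTRIBUTES.contains(attribute)) {
                throw new IllegalArgumentException("Unknown attribute: " + attribute);
            }
            if (value == null || !SAFE_VALUE.matcher(value).matches()) {
                throw new IllegalArgumentException("Invalid value for " + attribute);
            }
            return value;
        }
    }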



    Hardening mechanisms SHOULD be used in the software produced by the project so that software defects are less likely to result in security vulnerabilities. [hardening]
    Hardening mechanisms may include HTTP headers like Content Security Policy (CSP), compiler flags to mitigate attacks (such as -fstack-protector), or compiler flags to eliminate undefined behavior. For our purposes least privilege is not considered a hardening mechanism (least privilege is important, but separate).

    The project tries to use hardening mechanisms whenever possible. For example, we use a transaction ID for tracking transactions through multiple services, and we use HTTP headers to identify the application where possible.
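
    The sketch below illustrates the general idea with a generic servlet filter; the header set and the X-TransactionId handling are assumptions for illustration, not the project's exact filter.

    // Generic sketch of a hardening filter (header names below are common defaults,
    // not necessarily the ones the ONAP UI sets): every response gets a handful of
    // protective HTTP headers plus an X-TransactionId for cross-service tracing.
    // Requires the javax.servlet API (4.0+) on the classpath.
    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HardeningFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // Reuse the caller's transaction id if present, otherwise generate one.
            String txId = request.getHeader("X-TransactionId");
            if (txId == null || txId.isEmpty()) {
                txId = UUID.randomUUID().toString();
            }
            response.setHeader("X-TransactionId", txId);

            // Common browser-side hardening headers.
            response.setHeader("X-Content-Type-Options", "nosniff");
            response.setHeader("X-Frame-Options", "DENY");
            response.setHeader("Content-Security-Policy", "default-src 'self'");

            chain.doFilter(req, res);
        }
    }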



    The project MUST provide an assurance case that justifies why its security requirements are met. The assurance case MUST include: a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered. (URL required) [assurance_case]
    An assurance case is "a documented body of evidence that provides a convincing and valid argument that a specified set of critical claims regarding a system’s properties are adequately justified for a given application in a given environment" ("Software Assurance Using Structured Assurance Case Models", Thomas Rhodes et al, NIST Interagency Report 7608). Trust boundaries are boundaries where data or execution changes its level of trust, e.g., a server's boundaries in a typical web application. It's common to list secure design principles (such as Saltzer and Schroeder) and common implementation security weaknesses (such as the OWASP top 10 or CWE/SANS top 25), and show how each are countered. The BadgeApp assurance case may be a useful example. This is related to documentation_security, documentation_architecture, and implement_secure_design.

    This will be worked on for the next release.


 Analysis 2/2

  • Static code analysis


    The project MUST use at least one static analysis tool with rules or approaches to look for common vulnerabilities in the analyzed language or environment, if there is at least one FLOSS tool that can implement this criterion in the selected language. [static_analysis_common_vulnerabilities]
    Static analysis tools that are specifically designed to look for common vulnerabilities are more likely to find them. That said, using any static tools will typically help find some problems, so we are suggesting but not requiring this for the 'passing' level badge.
  • Dynamic code analysis


    If the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) MUST be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe]
    Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) (available in GCC and LLVM), Memory Sanitizer, and valgrind. Other potentially-used tools include thread sanitizer and undefined behavior sanitizer. Widespread assertions would also work.

    All the projects use Java, which is memory-safe and runs on the JVM. The end products also run in Docker containers.



This data is available under the Creative Commons Attribution version 3.0 or later license (CC-BY-3.0+). All are free to share and adapt the data, but must give appropriate credit. Please credit mrsjackson76 and the OpenSSF Best Practices badge contributors.

Project badge entry owned by: mrsjackson76.
Entry created on 2018-03-19 16:56:32 UTC, last updated on 2021-09-13 14:28:55 UTC. Last achieved passing badge on 2019-04-30 20:20:31 UTC.
