ONAP AAI UI

Projects that follow the best practices below can voluntarily self-certify and show that they've achieved an Open Source Security Foundation (OpenSSF) best practices badge.

There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different companies. To earn a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be met or their non-fulfillment justified, and all SUGGESTED criteria may be met or unmet (we want them to at least be considered). If you want to enter justification text as a generic comment, rather than as a rationale that the situation is acceptable, start the text block with '//' followed by a space. Feedback is welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.

We gladly provide the information in several languages; however, if there is any conflict or inconsistency between the translations, the English version is the authoritative version.
If this is your project, please show your badge status on your project page! The badge status looks like this: Badge level for project 1737 is passing. Here is how to embed it:
You can show your badge status by embedding this in your markdown file:
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/1737/badge)](https://www.bestpractices.dev/projects/1737)
or by embedding this in your HTML:
<a href="https://www.bestpractices.dev/projects/1737"><img src="https://www.bestpractices.dev/projects/1737/badge"></a>


These are the Silver level criteria. You can also view the Passing or Gold level criteria.

        

 Basics 16/17

 Change Control 1/1

  • Previous versions


    The project MUST maintain the most often used older versions of the product or provide an upgrade path to newer versions. If the upgrade path is difficult, the project MUST document how to perform the upgrade (e.g., the interfaces that have changed and detailed suggested steps to help upgrade). [maintenance_or_update]

    All major releases are tagged in Gerrit and the artifacts are stored with the release information on ONAP Nexus, so all older versions of the artifacts remain accessible. If and when an upgrade requires certain steps to be followed, they are added to the release documents as needed.


 Reporting 3/3

  • Bug-reporting process


    The project MUST use an issue tracker for tracking individual issues. [report_tracker]
  • Vulnerability report process


    The project MUST give credit to the reporter(s) of all vulnerability reports resolved in the last 12 months, except for the reporter(s) who request anonymity. If there have been no vulnerabilities resolved in the last 12 months, select "not applicable" (N/A). (URL required) [vulnerability_report_credit]

    Vulnerabilities can be reported using the link https://wiki.onap.org/pages/viewpage.action?pageId=6591711. We currently do not have any reported vulnerabilities, but the wiki page explains how to report a vulnerability, and how to report anonymously if you do not want credit for it.



    The project MUST have a documented process for responding to vulnerability reports. (URL required) [vulnerability_response_process]
    This is strongly related to vulnerability_report_process, which requires that there be a documented way to report vulnerabilities. It is also related to vulnerability_report_response, which requires a response to vulnerability reports within a certain time frame.

    Vulnerability handling is documented at https://wiki.onap.org/pages/viewpage.action?pageId=6591711


 Quality 17/19

  • Coding standards


    The project MUST identify the specific coding style guides for the primary languages it uses, and require that contributions generally comply with it. (URL required) [coding_standards]
    In most cases this is done by referring to some existing style guide(s), possibly listing differences. These style guides can include ways to improve readability and ways to reduce the likelihood of defects (including vulnerabilities). Many programming languages have one or more widely-used style guides. Examples of style guides include Google's style guides and SEI CERT Coding Standards.

    The project MUST automatically enforce its selected coding style(s) if there is at least one FLOSS tool that can do so in the selected language(s). [coding_standards_enforced]
    This MAY be implemented using static analysis tool(s) and/or by forcing the code through code reformatters. In many cases the tool configuration is included in the project's repository (since different projects may choose different configurations). Projects MAY allow style exceptions (and typically will); where exceptions occur, they MUST be rare and documented in the code at their locations, so that these exceptions can be reviewed and so that tools can automatically handle them in the future. Examples of such tools include ESLint (JavaScript), Rubocop (Ruby), and devtools check (R).

    We currently do not have a style enforcer configured for the Java applications. Checkstyle (http://checkstyle.sourceforge.net/), an enforcer that works with Maven, is being looked into.


  • Working build system


    Build systems for native binaries MUST honor the relevant compiler and linker (environment) variables passed in to them (e.g., CC, CFLAGS, CXX, CXXFLAGS, and LDFLAGS) and pass them to compiler and linker invocations. A build system MAY extend them with additional flags; it MUST NOT simply replace provided values with its own. If no native binaries are being generated, select "not applicable" (N/A). [build_standard_variables]
    It should be easy to enable special build features like Address Sanitizer (ASAN), or to comply with distribution hardening best practices (e.g., by easily turning on compiler flags to do so).

    The application does not create native binaries. (Some of the libraries it depends on do, but those are external.)



    The build and installation system SHOULD preserve debugging information if they are requested in the relevant flags (e.g., "install -s" is not used). If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_preserve_debug]
    E.G., setting CFLAGS (C) or CXXFLAGS (C++) should create the relevant debugging information if those languages are used, and they should not be stripped during installation. Debugging information is needed for support and analysis, and also useful for measuring the presence of hardening features in the compiled binaries.

    The application does not create native binaries. (Some of the libraries it depends on do, but those are external.)



    The build system for the software produced by the project MUST NOT recursively build subdirectories if there are cross-dependencies in the subdirectories. If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_non_recursive]
    The project build system's internal dependency information needs to be accurate, otherwise, changes to the project may not build correctly. Incorrect builds can lead to defects (including vulnerabilities). A common mistake in large build systems is to use a "recursive build" or "recursive make", that is, a hierarchy of subdirectories containing source files, where each subdirectory is independently built. Unless each subdirectory is fully independent, this is a mistake, because the dependency information is incorrect.

    The application does not create native binaries. (Some of the libraries it depends on do, but those are external.)



    The project MUST be able to repeat the process of generating information from source files and get exactly the same bit-for-bit result. If no building occurs (e.g., scripting languages where the source code is used directly instead of being compiled), select "not applicable" (N/A). [build_repeatable]
    GCC and clang users may find the -frandom-seed option useful; in some cases, this can be resolved by forcing some sort order. More suggestions can be found at the reproducible build site.

    All releases are tagged in Gerrit (git), and the builds are controlled using Jenkins. By providing the git tag information, the same image can be built over and over again with the same bit-for-bit result.


  • Installation system


    The project MUST provide a way to easily install and uninstall the software produced by the project using a commonly-used convention. [installation_common]
    Examples include using a package manager (at the system or language level), "make install/uninstall" (supporting DESTDIR), a container in a standard format, or a virtual machine image in a standard format. The installation and uninstallation process (e.g., its packaging) MAY be implemented by a third party as long as it is FLOSS.

    All packages are delivered either as a JAR artifact or a Docker image. Maven artifacts can be removed using the POM file. Docker containers that are not wanted can simply be deleted, and the orchestration in Kubernetes can be adjusted to exclude certain Docker images.



    The installation system for end-users MUST honor standard conventions for selecting the location where built artifacts are written to at installation time. For example, if it installs files on a POSIX system it MUST honor the DESTDIR environment variable. If there is no installation system or no standard convention, select "not applicable" (N/A). [installation_standard_variables]

    The compiled Docker images and JAR files can be installed and used as the user sees fit. Both run on the JVM or in Docker, so there is no need to select installation locations.



    The project MUST provide a way for potential developers to quickly install all the project results and support environment necessary to make changes, including the tests and test environment. This MUST be performed with a commonly-used convention. [installation_development_quick]
    This MAY be implemented using a generated container and/or installation script(s). External dependencies would typically be installed by invoking system and/or language package manager(s), per external_dependencies.

    A developer only needs Java and Maven to quickly install and test all the components. Even for deployment using OOM, given the right amount of resources, the full AAI/ONAP suite can be deployed in less than a day. The steps are documented at https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html


  • Externally-maintained components


    The project MUST list external dependencies in a computer-processable way. (URL required) [external_dependencies]
    Typically this is done using the conventions of package manager and/or build system. Note that this helps implement installation_development_quick.

    Projects MUST monitor or periodically check their external dependencies (including convenience copies) to detect known vulnerabilities, and fix exploitable vulnerabilities or verify them as unexploitable. [dependency_monitoring]
    This can be done using an origin analyzer / dependency checking tool / software composition analysis tool such as OWASP's Dependency-Check, Sonatype's Nexus Auditor, Synopsys' Black Duck Software Composition Analysis, and Bundler-audit (for Ruby). Some package managers include mechanisms to do this. It is acceptable if the components' vulnerability cannot be exploited, but this analysis is difficult and it is sometimes easier to simply update or fix the part.

    A Nexus/Sonar scan is run on all the projects on a weekly basis.



    The project MUST either:
    1. make it easy to identify and update reused externally-maintained components; or
    2. use the standard components provided by the system or programming language.
    Then, if a vulnerability is found in a reused component, it will be easy to update that component. [updateable_reused_components]
    A typical way to meet this criterion is to use system and programming language package management systems. Many FLOSS programs are distributed with "convenience libraries" that are local copies of standard libraries (possibly forked). By itself, that's fine. However, if the program *must* use these local (forked) copies, then updating the "standard" libraries as a security update will leave these additional copies still vulnerable. This is especially an issue for cloud-based systems; if the cloud provider updates their "standard" libraries but the program won't use them, then the updates don't actually help. See, e.g., "Chromium: Why it isn't in Fedora yet as a proper package" by Tom Callaway.

    External components are maintained through Maven. The user can get a list of all included components using the Maven dependency tree and can update or reuse them as they see fit.



    The project SHOULD avoid using deprecated or obsolete functions and APIs where FLOSS alternatives are available in the set of technology it uses (its "technology stack") and to a supermajority of the users the project supports (so that users have ready access to the alternative). [interfaces_current]

    We avoid depending on deprecated/obsolete functions.


  • Automated test suite


    An automated test suite MUST be applied on each check-in to a shared repository for at least one branch. This test suite MUST produce a report on test success or failure. [automated_integration_testing]
    This requirement can be viewed as a subset of test_continuous_integration, but focused on just testing, without requiring continuous integration.

    Automated test suites are run every time before the code is merged. A check-in cannot pass without Jenkins posting a +1 on the review.



    The project MUST add regression tests to an automated test suite for at least 50% of the bugs fixed within the last six months. [regression_tests_added50]

    When regressions occur, we add tests for them.



    The project MUST have FLOSS automated test suite(s) that provide at least 80% statement coverage if there is at least one FLOSS tool that can measure this criterion in the selected language. [test_statement_coverage80]
    Many FLOSS tools are available to measure test coverage, including gcov/lcov, Blanket.js, Istanbul, JCov, and covr (R). Note that meeting this criterion is not a guarantee that the test suite is thorough, instead, failing to meet this criterion is a strong indicator of a poor test suite.

    We use Sonar to measure code coverage: https://sonar.onap.org/about. Code coverage at the date of filing this report (2018-09-19) is: Sparky-be: 48.7%, Data-router: 51.5%, Router-core: 69.2%, search-data-service: 54%.


  • New functionality testing


    The project MUST have a formal written policy that as major new functionality is added, tests for the new functionality MUST be added to an automated test suite. [test_policy_mandated]

    Contribution guidelines for development are recorded at https://wiki.onap.org/display/DW/Development+Procedures+and+Policies



    The project MUST include, in its documented instructions for change proposals, the policy that tests are to be added for major new functionality. [tests_documented_added]
    However, even an informal rule is acceptable as long as the tests are being added in practice.
  • Warning flags


    Projects MUST be maximally strict with warnings in the software produced by the project, where practical. [warnings_strict]
    Some warnings cannot be effectively enabled on some projects. What is needed is evidence that the project is striving to enable warning flags where it can, so that errors are detected early.

    The build systems run the compile with the test flag enabled by default, so any test-case failure will fail the CI build and the merge request.


 Security 11/13

  • Secure development knowledge


    The project MUST implement secure design principles (from "know_secure_design"), where applicable. If the project is not producing software, select "not applicable" (N/A). [implement_secure_design]
    For example, the project results should have fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default). They should also have complete mediation (every access that might be limited must be checked for authority and be non-bypassable). Note that in some cases principles will conflict, in which case a choice must be made (e.g., many mechanisms can make things more complex, contravening "economy of mechanism" / keep it simple).

    The project strives to implement secure design principles.
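
    As a hypothetical sketch only (not taken from the project's code base; Java 11+ assumed), a deny-by-default access decision is one common way to apply the fail-safe-defaults principle mentioned above:

    import java.util.Set;

    /**
     * Deny-by-default access decision: anything not explicitly granted is
     * rejected, including null or blank input. Names are illustrative only.
     */
    public final class AccessDecision {

        private final Set<String> grantedPermissions;

        public AccessDecision(Set<String> grantedPermissions) {
            // Defensive copy so callers cannot widen the grant set later.
            this.grantedPermissions = Set.copyOf(grantedPermissions);
        }

        /** Returns true only for permissions that were explicitly granted. */
        public boolean isAllowed(String permission) {
            if (permission == null || permission.isBlank()) {
                return false; // fail safe: invalid input is denied
            }
            return grantedPermissions.contains(permission);
        }
    }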


  • Use good cryptographic practices

    Note that some software does not need to use cryptographic mechanisms. If your project produces software that (1) includes, activates, or enables encryption functionality, and (2) might be released from the United States (US) to outside the US or to a non-US-citizen, you may be legally required to take a few extra steps. Typically this just involves sending an email. For more information, see the encryption section of Understanding Open Source Technology & US Export Controls.

    The default security mechanisms within the software produced by the project MUST NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH). [crypto_weaknesses]
    Concerns about CBC mode in SSH are discussed in CERT: SSH CBC vulnerability.

    These applications do not encrypt or decrypt any of the data.



    The project SHOULD support multiple cryptographic algorithms, so users can quickly switch if one is broken. Common symmetric key algorithms include AES, Twofish, and Serpent. Common cryptographic hash algorithm alternatives include SHA-2 (including SHA-224, SHA-256, SHA-384 AND SHA-512) and SHA-3. [crypto_algorithm_agility]

    Certificates are managed through the AAF microservice, which is deployed with the ONAP suite.



    The project MUST support storing authentication credentials (such as passwords and dynamic tokens) and private cryptographic keys in files that are separate from other information (such as configuration files, databases, and logs), and permit users to update and replace them without code recompilation. If the project never processes authentication credentials and private cryptographic keys, select "not applicable" (N/A). [crypto_credential_agility]

    User authentication is handled by the Portal microservice, which is deployed with the ONAP suite.
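
    For illustration, the kind of separation this criterion asks for can be sketched as follows (Java 11+ assumed; the environment variable name, default path, and property key are hypothetical, not the project's actual configuration):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    /**
     * Credentials live in their own file, separate from the main configuration,
     * and are read at runtime so they can be rotated without recompiling.
     */
    public final class CredentialStore {

        public static Properties load() throws IOException {
            Path file = Path.of(System.getenv().getOrDefault(
                    "CREDENTIALS_FILE", "/opt/app/etc/auth/credentials.properties"));
            Properties credentials = new Properties();
            try (InputStream in = Files.newInputStream(file)) {
                credentials.load(in);
            }
            return credentials; // e.g. credentials.getProperty("keystore.password")
        }

        private CredentialStore() { }
    }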



    The software produced by the project SHOULD support secure protocols for all of its network communications, such as SSHv2 or later, TLS1.2 or later (HTTPS), IPsec, SFTP, and SNMPv3. Insecure protocols such as FTP, HTTP, telnet, SSLv3 or earlier, and SSHv1 SHOULD be disabled by default, and only enabled if the user specifically configures it. If the software produced by the project does not support network communications, select "not applicable" (N/A). [crypto_used_network]

    The projects support secure TLS and HTTPS. Insecure protocols are disabled by default in these applications; they can be overridden by user configuration.
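
    A minimal sketch of the "secure by default, overridable by configuration" behaviour described above, using only the JDK; the system property name is an assumption, not the project's actual setting:

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLParameters;

    public final class TlsDefaults {

        /**
         * Only TLSv1.2 is enabled unless an operator explicitly configures a
         * different protocol list via a system property.
         */
        public static SSLParameters secureParameters() throws Exception {
            SSLContext context = SSLContext.getInstance("TLSv1.2");
            context.init(null, null, null); // default key and trust managers

            String configured = System.getProperty("aai.ui.tls.protocols", "TLSv1.2");
            SSLParameters params = context.getDefaultSSLParameters();
            params.setProtocols(configured.split(","));
            return params;
        }

        private TlsDefaults() { }
    }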



    The software produced by the project SHOULD, if it supports or uses TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. If the software does not use TLS, select "not applicable" (N/A). [crypto_tls12]

    The products support TLS version 1.2.



    The software produced by the project MUST, if it supports TLS, perform TLS certificate verification by default when using TLS, including on subresources. If the software does not use TLS, select "not applicable" (N/A). [crypto_certificate_verification]

    Certificate validation is done before answering any calls.



    The software produced by the project MUST, if it supports TLS, perform certificate verification before sending HTTP headers with private information (such as secure cookies). If the software does not use TLS, select "not applicable" (N/A). [crypto_verification_private]

    The certificate is validated before sending HTTP headers or private information.
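
    For illustration, the safe pattern the two criteria above describe is to rely on the JDK's default certificate and hostname verification and never install a trust-all manager; the TLS handshake, including certificate validation, then completes before any request headers or cookies are sent. The URL below is hypothetical (Java 11+ HttpClient assumed):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public final class VerifiedClientSketch {

        public static String fetch() throws Exception {
            // Default SSLContext: full certificate chain and hostname checking.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("https://example.onap.local/aai/ui/resource"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }

        private VerifiedClientSketch() { }
    }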


  • Secure release


    The project MUST cryptographically sign releases of the project results intended for widespread use, and there MUST be a documented process explaining to users how they can obtain the public signing keys and verify the signature(s). The private key for these signature(s) MUST NOT be on site(s) used to directly distribute the software to the public. If releases are not intended for widespread use, select "not applicable" (N/A). [signed_releases]
    The project results include both source code and any generated deliverables where applicable (e.g., executables, packages, and containers). Generated deliverables MAY be signed separately from source code. These MAY be implemented as signed git tags (using cryptographic digital signatures). Projects MAY provide generated results separately from tools like git, but in those cases, the separate results MUST be separately signed.


    It is SUGGESTED that in the version control system, each important version tag (a tag that is part of a major release, minor release, or fixes publicly noted vulnerabilities) be cryptographically signed and verifiable as described in signed_releases. [version_tags_signed]

  • Other security issues


    The project results MUST check all inputs from potentially untrusted sources to ensure they are valid (an *allowlist*), and reject invalid inputs, if there are any restrictions on the data at all. [input_validation]
    Note that comparing input against a list of "bad formats" (aka a *denylist*) is normally not enough, because attackers can often work around a denylist. In particular, numbers are converted into internal formats and then checked if they are between their minimum and maximum (inclusive), and text strings are checked to ensure that they are valid text patterns (e.g., valid UTF-8, length, syntax, etc.). Some data may need to be "anything at all" (e.g., a file uploader), but these would typically be rare.

    The project strives to validate all input to functions. The inputs provided to the services are checked against existing models such as the OXM or the search-abstraction layer, and only valid inputs are allowed to pass through.
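
    A hedged sketch of allowlist validation in the spirit of the answer above; the pattern, length limit, and entity-type names here are illustrative assumptions rather than the project's actual OXM-driven checks:

    import java.util.Set;
    import java.util.regex.Pattern;

    /**
     * Allowlist-style input validation: a value is accepted only if it matches
     * an explicit pattern or a known set of entity types; everything else is
     * rejected.
     */
    public final class InputValidator {

        private static final Pattern NODE_ID = Pattern.compile("^[A-Za-z0-9._-]{1,64}$");
        private static final Set<String> ENTITY_TYPES =
                Set.of("generic-vnf", "pserver", "vserver");

        public static String requireValidNodeId(String id) {
            if (id == null || !NODE_ID.matcher(id).matches()) {
                throw new IllegalArgumentException("Rejected invalid node id");
            }
            return id;
        }

        public static String requireKnownEntityType(String type) {
            if (type == null || !ENTITY_TYPES.contains(type)) {
                throw new IllegalArgumentException("Rejected unknown entity type");
            }
            return type;
        }

        private InputValidator() { }
    }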



    Hardening mechanisms SHOULD be used in the software produced by the project so that software defects are less likely to result in security vulnerabilities. [hardening]
    Hardening mechanisms may include HTTP headers like Content Security Policy (CSP), compiler flags to mitigate attacks (such as -fstack-protector), or compiler flags to eliminate undefined behavior. For our purposes least privilege is not considered a hardening mechanism (least privilege is important, but separate).

    The project tries to use hardening mechanisms whenever possible. For example, we use a transaction ID for tracking transactions through multiple services, and we also use HTTP headers to identify the calling application where possible.
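
    A minimal sketch of the two measures mentioned above, written against the javax.servlet 4.0 API (an assumption); the header names follow common ONAP conventions but are not copied from the project's code:

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public final class HardeningFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // Reuse the caller's transaction id if present, otherwise create one,
            // so a request can be traced across the cooperating micro-services.
            String transactionId = request.getHeader("X-TransactionId");
            if (transactionId == null || transactionId.isEmpty()) {
                transactionId = UUID.randomUUID().toString();
            }
            response.setHeader("X-TransactionId", transactionId);

            // Defensive headers that reduce the impact of browser-side defects.
            response.setHeader("X-Content-Type-Options", "nosniff");
            response.setHeader("X-Frame-Options", "DENY");

            chain.doFilter(req, res);
        }
    }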



    The project MUST provide an assurance case that justifies why its security requirements are met. The assurance case MUST include: a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered. (URL required) [assurance_case]
    An assurance case is "a documented body of evidence that provides a convincing and valid argument that a specified set of critical claims regarding a system’s properties are adequately justified for a given application in a given environment" ("Software Assurance Using Structured Assurance Case Models", Thomas Rhodes et al, NIST Interagency Report 7608). Trust boundaries are boundaries where data or execution changes its level of trust, e.g., a server's boundaries in a typical web application. It's common to list secure design principles (such as Saltzer and Schroeer) and common implementation security weaknesses (such as the OWASP top 10 or CWE/SANS top 25), and show how each are countered. The BadgeApp assurance case may be a useful example. This is related to documentation_security, documentation_architecture, and implement_secure_design.

    This will be worked on for the next release.


 Analysis 2/2

  • Static code analysis


    The project MUST use at least one static analysis tool with rules or approaches to look for common vulnerabilities in the analyzed language or environment, if there is at least one FLOSS tool that can implement this criterion in the selected language. [static_analysis_common_vulnerabilities]
    Static analysis tools that are specifically designed to look for common vulnerabilities are more likely to find them. That said, using any static tools will typically help find some problems, so we are suggesting but not requiring this for the 'passing' level badge.
  • Dynamic code analysis


    If the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) MUST be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe]
    Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) (available in GCC and LLVM), Memory Sanitizer, and valgrind. Other potentially-used tools include thread sanitizer and undefined behavior sanitizer. Widespread assertions would also work.

    All the projects use Java, which is memory-safe and runs on the JVM. The end products also run inside Docker containers.



This data is available under the Creative Commons Attribution version 3.0 or later license (CC-BY-3.0+). All are free to share and adapt the data, but must give appropriate credit. Please credit mrsjackson76 and the OpenSSF Best Practices badge contributors.

Project badge entry owned by: mrsjackson76.
Entry created on 2018-03-19 16:56:32 UTC, last updated on 2021-09-13 14:28:55 UTC. Last achieved passing badge on 2019-04-30 20:20:31 UTC.
