
core's People

Contributors

23bartman, aaronott, chemmi, connickshields, derweiser, dkefer, draichev, fzipi, infosecdad, intubun, johanlindfors-ts, maxwinkler07, nessimk, pat-duarte, ristomcgehee, sebadele, willchilcutt, wonda-tea-coffee


core's Issues

Inclusion of the Not Applicable option in SAMM Responses

I think we should have a Not Applicable answer in the template. Some sections or practices do not apply to some organizations or teams. An example is Supplier Security under the Security Requirements practice: it is only applicable if organizations use software developed by a vendor not subject to the organization's practices.

I recently did an assessment and I had to fudge this because it was not applicable and I did not want it to impact the overall score. I would imagine some other organizations would have the same issue for other areas.

Happy to discuss.
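For illustration, excluding Not Applicable answers from a score could look like this. This is a minimal sketch; the `NA` marker, the `practice_score` helper, and the plain averaging are assumptions made for this issue, not the toolbox's actual logic:

```python
# Minimal sketch, not the official toolbox logic: the N/A marker and the
# plain averaging are assumptions made for illustration.
NA = None  # marker for a "Not Applicable" response

def practice_score(answers):
    """Average the answer values, skipping N/A responses entirely
    so they neither raise nor lower the overall score."""
    applicable = [a for a in answers if a is not NA]
    if not applicable:
        return NA  # the whole practice is out of scope
    return sum(applicable) / len(applicable)
```

Counting N/A as 0.0 would give `[1.0, 0.5, NA]` a score of 0.5; skipping it yields 0.75, which matches the "should not impact the overall score" expectation.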

slack input on process

https://owasp.slack.com/archives/C0VF1EJGH/p1550254303001700
Could be used to plan the next steps after a SAMM gap analysis? https://medium.com/@chrisvmcd/mapping-maturity-create-context-specific-maturity-models-with-wardley-maps-informed-by-cynefin-37ffcd1d315

History from old repo:
@SebaDele opened this issue on Mar 24, 2019
@23bartman 23bartman assigned 23bartman and dkefer on May 22, 2019
@SebaDele SebaDele added the guidance label on Nov 23, 2019
@SebaDele commented on Nov 23, 2019
consider in guidance

Misspelling in V-ST-2-A.yml

'vulnerability' is spelled incorrectly in the Activity Benefit.

As is:
benefit: Detection of organization-specific easy-to-find vulnerabilites

Should be:
benefit: Detection of organization-specific easy-to-find vulnerabilities

Two questions have the same ID

Hi,
The following two questions have the same ID (1e005e11997f4929a12fdb939599e77e):
samm/Supporting Resources/v2.0/Datamodel/Datafiles/Question O-EM-1-A.yml
samm/Supporting Resources/v2.0/Datamodel/Datafiles/Question O-IM-1-A.yml

This has to be a mistake, right?
It causes trouble if one needs to parse the content.

Old repo history:
@dschwarz91 opened this issue on May 14, 2020
@fzipi fzipi added the bug label on Nov 14, 2020
@23bartman 23bartman self-assigned this on Dec 23, 2020

Translation to Brazilian Portuguese

Hello friends,

I'm one of the Belo Horizonte chapter leaders, from Brazil, and we have some people here who intend to translate SAMM to our language. We did a translation of the API Security Top 10 by forking the official repo and, after working on it, returning the content as a pull request with a "pt-BR" directory.

I've looked at the contribution instructions, but I can't find anything specific for translations. Are there any instructions for conducting this translation of SAMM?

History from old repo:
@raphaelhagi opened this issue on Aug 4, 2020
@23bartman commented on Dec 23, 2020:
Hi, I will be reaching out to you to get you involved in our i18n translation efforts. Cheers.
@23bartman 23bartman self-assigned this on Dec 23, 2020

automate the linking of product teams work items against SAMM maturity goals?

question/suggestion from Fred Blaise on Slack https://owasp.slack.com/archives/C0VF1EJGH/p1611598420007100
Hey there.. for people wanting to automate the linking of product teams work items against SAMM maturity goals to achieve some type of "continuous maturity modeling"... how have you done it? jira labels? whatever SCM labels? Something other than labels? Code annotation? Else? Thanks for your help!

History from old repo:
@SebaDele opened this issue on Jan 31
@SebaDele SebaDele added the enhancement label on Jan 31

input security assessment

from @robvanderveer Architecture assessment: Seba explained that this is a bit of a tough topic. Maybe the following helps: In our practice we use architecture assessment as the starting point for our design/code reviews, and we want to assess how the system is taking security responsibilities for the relevant system properties. For that we use our own ISO25010-based SIG security model which describes the system properties: secure data transport, identification strength, access management strength, session management strength, authorized access, input/output verification, secure data storage, evidence strength and secure user management. Each of these properties requires certain technical controls to be in place. This point of view of a system is what I call the 'hardening' point of view: through engineering you expect that certain countermeasures are in place at certain points. As hygiene. No threat analysis is needed.
Our process is described here: https://www.softwareimprovementgroup.com/wp-content/uploads/2016/10/SIG_Evaluation_Criteria_Security_1.pdf
These are just my 2 cents on the topic. Also here it’s hard for me to see the author’s intentions. I think a one on one conference call is the most effective way to share thoughts here.

History from old repo:
@SebaDele opened this issue on Jan 5, 2019
@SebaDele SebaDele added the 4V1ArchitectureAssessment label on Jan 5, 2019
@SebaDele SebaDele self-assigned this on Jan 5, 2019
@SebaDele commented on Nov 23, 2019
organize this conference call to see how we can improve this security practice

Priority suggestions in implementation

Based on feedback of Trey:
"https://csrc.nist.gov/csrc/media/publications/sp/800-53/rev-5/draft/documents/sp800-53r5-draft.pdf Specifically look at Appendix E and think of providing priorities for tasks within the implementation guidelines."

We might think of a setup similar to ASVS where we indicate what is suggested at different levels of assurance.

History from old repo:
@23bartman opened this issue on Nov 24, 2019
@23bartman 23bartman added the enhancement label on Nov 24, 2019

Support for not applicable in scoring

Investigate how we could support not/applicable situations for companies in the scoring system

History from old repo:
@23bartman opened this issue on Nov 24, 2019
@23bartman added the enhancement label on Nov 24, 2019
@23bartman assigned SebaDele, 23bartman, yanfosec and infosecdad on Nov 24, 2019
@23bartman commented on Nov 24, 2019
To be looked into for v2.1

DSOMM: Inventory of artefacts

Where best to make explicit that we need a CMDB to support a number of our activities, such as updating and patching? It might be a quality criterion.

History from old repo:
@Elointz reacted with thumbs up emoji
@23bartman assigned johndileo and itscooper on Jun 5, 2019
@SebaDele commented on Nov 23, 2019
consider as a common prerequisite as part of guidance/how-to?

@SebaDele SebaDele assigned 23bartman on Nov 23, 2019
@SebaDele SebaDele added the SAMM 2.0 label on Nov 23, 2019
@23bartman 23bartman assigned KGABSWEDEN on Dec 15, 2019
@stevespringett mentioned this issue 4 days ago
SBOM and OBOM question #579 (in old repo), current repo link: #48

create word cloud per activity

Once the activities are written, try visualising them as a word cloud.

History from old repo:
@nessimk opened this issue on Jun 7, 2019
@nessimk nessimk added the OSS2019 label on Jun 7, 2019
@nessimk nessimk self-assigned this on Jun 7, 2019
@SebaDele SebaDele added the enhancement label on Nov 23, 2019

Simplify repository structure

We want to split any code from content. Any code should be moved to a pipeline, the repository should contain content only.

Target structure:

.
├─ model
│  ├─ activities 
│  ├─ practices
│  └─ ...
├─ texts
├─ graphics (svg..)
└─ readme.md (and license.md + similar)

the use of "Regular" and "Annual" in the answer sections

This issue was raised by Troy Fridley on Slack in 2 posts:
https://owasp.slack.com/archives/C0VF1EJGH/p1616433624009100
"
Hello, OpenSAMM team and users.
We are currently evaluating migrating from OpenSAMM 1.0, which we have been utilizing for many years to 2.0. A couple of questions have come up around the usage of terms within the updated toolkit.
One that has caused consternation is the use of 'Regular' and 'Annual' within the answer sections. We have not been able to locate a definition of how these two time intervals are meant to be used. For most of our users, Regular is a better answer than Annual, as Regular implies that the activity occurs much more frequently than an Annual action.
Can someone provide me definitive / normative guidance on these two intervals, as well as a link to where these are documented?
"
https://owasp.slack.com/archives/C0VF1EJGH/p1616598999025600
"
Thanks for getting back on this. The challenge that keeps being brought to me is that the toolbox does use these terms in a quantitative manner for level 3 controls, with 'Annual' being given a larger weight in the score than 'Regular'.
Is there a justification for why 'Annual' activities are given a higher weight than 'Regular' activities?
Is there a best practice around modifying the toolbox, and thus the framework, around these frequency terms to better align with how an organization may utilize them?
I was hoping that there would be normative guidance and a definition of these terms. I have had the case brought to me multiple times now that control activities that happen at a higher / Regular rate bring greater value than those done at an Annual frequency. There are some controls that only need Annual review; however, there are many, especially those around KPIs, where Regular actions are best practice.
"

History from old repo:
@SebaDele opened this issue on Mar 24, 2021
@SebaDele commented on May 2
this was discussed within the core team, here are some notes:

to explain the use of the 2 time frequencies:
"annually" was used for things that are more likely related to compliance and have an annual requirement. "regularly" is used for something where we are not trying to dictate a timeframe, but which needs to be done more than once.

We used "annual" in practices where it should carry more weight than "regularly".
E.g., in stream A "Create and Promote" of the security practice "Strategy & Metrics", we assume that an at-least-annual review is more frequent and a stronger requirement than a regular review, which is probably every couple of years.
Keep that in mind when reviewing and scoring your maturity.

We will keep this issue open to add these clarifications in the locations where these terms are used, and we will consider being more precise about time frequencies in the scoring mechanism.
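For illustration, the relative weighting described in this discussion might look like the following. The answer texts mirror an answer set used in the toolbox; the numeric weights are assumptions, not the toolbox's actual values:

```python
# Illustrative only: relative weights where an at-least-annual cadence
# counts for more than an unspecified "regular" cadence. The numeric
# values are assumptions, not the toolbox's actual weights.
ANSWER_WEIGHTS = {
    "No": 0.0,
    "Yes, but we improve it ad-hoc": 0.25,
    "Yes, we improve it at regular times": 0.5,
    "Yes, we improve it at least annually": 1.0,
}
```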

update TM stream with remarks

Activity D-TA-1-B.yml
Always make sure to persist the outcome
--> Always persist the outcome

Activity D-TA-2-B.yml
Capture the threat modeling artifacts with tools that are used by your application teams.
--> Capture the threat modeling artifacts with tools used by your application teams.

the developer security culture. Reusable risk patterns,
--> the developer security culture. Reusable risk patterns,

Question D-TA-2-B.yml
Do you use a standard methodology, aligned on your application risk levels?
--> Do you use a standard methodology, aligned with your application risk levels?

You capture the threat modeling artifacts with tools that are used by your application teams
--> You capture the threat modeling artifacts with tools used by your application teams

You regularly (e.g., yearly) review the existing threat models to verify that no new threats are relevant for your applications
--> You review the existing threat models to verify that no new threats are relevant for your applications at least yearly

History from old repo:
@SebaDele opened this issue on Dec 20, 2019
@SebaDele self-assigned this on Dec 20, 2019
@SebaDele added SAMM 2.0 2D1ThreatAssessment streamB labels on Dec 21, 2019
@23bartman commented on Dec 23, 2020
@SebaDele Can you review whether version 2.0 is OK on this? If not, we can consider fine-tuning the model.

Spanish Version Available (read the doc)

Hi there! I translated the entire PDF and posted it on my personal account in MD format. It was a long and hard job, and it certainly may have problems with some things, so here we go:

ESP.OWASP-SAMMv2.0

Please, Spanish speakers, help to correct the things that could be wrong.

Thanks!


History from old repo:

  • @telekomancer opened this issue on Jun 20, 2020
  • @23bartman commented on Dec 23, 2020
    Many thanks for your effort ! I will be reaching out to you to see how we can use your translation in our i18n efforts.
  • @23bartman 23bartman self-assigned this on Dec 23, 2020

Covering privacy in the model

We should consider whether we also want to cover privacy in the same model/effort.

History from old repo:
@23bartman opened this issue on Jun 4, 2019
@SebaDele commented on Nov 23, 2019
cover in the general guidance

@SebaDele SebaDele assigned 23bartman on Nov 23, 2019
@SebaDele SebaDele added the SAMM 2.0 label on Nov 23, 2019
@23bartman 23bartman assigned KGABSWEDEN on Dec 15, 2019

Align terminology

https://owaspsamm.org/model/governance/education-and-guidance/stream-b/
https://github.com/OWASP/samm/blob/master/Supporting%20Resources/v2.0/Datamodel/Datafiles/Activity%20G-EG-2-B.yml

Replace:
The organization implements a formal secure coding center of excellence

With:
The organization implements a formal Secure Software Center of Excellence

Secure Software Center of Excellence is used in the question and in the ML3 activity description.

History from old repo:
@Pat-Duarte opened this issue on Mar 20

Operations - Incident Detection

We should consider incorporating:
Application instrumentation as the means for identifying active attacks, abuse of specific application functionality, and other abnormal application behavior.

Example: business logic abuse, such as a spike in the use of a specific functionality over time.

This is a real-time monitoring requirement.

Level 2 or 3 (likely 3)

History from old repo:
@yanfosec opened this issue on Jun 6, 2018
@23bartman assigned johndileo on Dec 23, 2020

review from Slack - threat modeling

from Adam:
The term "flaws" appears regularly. I would encourage you to go back to "issues" or move to "issues, tradeoffs or flaws" as often times the issues are design tradeoffs, not raw "flaws"

to update this stream accordingly

History from old repo:
@SebaDele opened this issue on Dec 21, 2019
@SebaDele added enhancement 2D1ThreatAssessment streamB SAMM 2.0 labels on Dec 21, 2019
@SebaDele self-assigned this on Dec 21, 2019

translate the toolbox

prepare the toolbox to be translated to other languages

start with a "dummy" language

History from old repo:
@SebaDele opened this issue on Oct 28, 2020

Why are activities in different maturity levels independent and rated equally?

From what I can see in the rating calculation, it does not matter whether I have good coverage in a level-1 activity of a specific stream or in a level-3 activity. Also, higher-level activities do not depend on lower-level activities. So, in terms of the practice rating and in terms of dependencies, the maturity levels do not look like actual maturity levels to me. This seems illogical and hard to explain to a team that is being assessed.

Other process maturity models define a generic maturity level for each activity, like Initial, Repeatable, Defined, Capable, Efficient in CMM. BSIMM has a system that is comparable to SAMM, but it defines a "high-water mark" system where if you do at least one activity in maturity level 3, you automatically have that level, regardless of the activities below that level.

So to me personally, the term "maturity level" is a bit misleading in SAMM, because after the rating, I cannot tell which maturity level I have in each security practice. I just get a number that is completely unrelated to maturity levels.

Any takes on this? Is there something I didn't understand correctly?
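The two rating schemes being contrasted can be sketched as follows. The arithmetic is illustrative only, not the official toolbox formula; `coverage` maps a maturity level to achieved coverage (0.0 to 1.0) within one stream:

```python
# Simplified sketch of the two rating schemes, illustrative only.

def equal_weight_score(coverage):
    """SAMM-style: all levels count equally and independently."""
    return sum(coverage.values()) / len(coverage)

def high_water_mark(coverage, threshold=1.0):
    """BSIMM-style: the highest level with full coverage, regardless of
    whether the levels below it are covered."""
    achieved = [level for level, c in coverage.items() if c >= threshold]
    return max(achieved, default=0)
```

With coverage `{1: 0.0, 2: 0.0, 3: 1.0}` the equal-weight score is about 0.33 while the high-water mark is 3, which is exactly the discrepancy the issue describes.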

History from old repo:
@thomaskonrad-sba opened this issue on Oct 15, 2020

@23bartman commented on Dec 23, 2020
Hi, thanks for your comment. You do understand the measuring correctly. In the past, SAMM used a measuring model as you described, where one needed to have all activities in underlying maturity levels before scoring on higher maturity levels. We decided to step away from this, as we encountered many situations where it would be awkward (where level-1 activities were not implemented, for instance, or were decided not relevant, yet organisations were doing useful activities on higher levels). That's why we, in the end, decided to step away from these mandatory lower levels.

Between the lines, I do read that it might be useful to have different weights for different levels. We've considered this, but not implemented this so far. We might reconsider.

@23bartman 23bartman assigned SebaDele and 23bartman on Dec 23, 2020

@thomaskonrad-sba commented on Jan 4
Thanks for the explanations. I'd love to be part of such discussions. Is there a way to be part of the process?

Duplicate word in Answer Set - Z

'we' is used twice in answer set Z

As is:
text: Yes, we we improve it at regular times

Should be:
text: Yes, we improve it at regular times

Roadmap Phase 3 implementation questions have wrong data validation rules

The data validation is set to use "AnsD" as the list source of choices, but it should be "AnsF". As a result, this breaks the roadmap in that section (since it isn't the right answer set and isn't scored appropriately). This is in Column R on the "Roadmap" tab.

History in old repo:
@rhitmojo opened this issue on Jul 30, 2020
@23bartman commented on Dec 23, 2020
Many thanks for your feedback. We will look into this.
@23bartman 23bartman assigned dkefer on Dec 23, 2020

Add backup and restore to deployment

From DevSecOps maturity model. Add an activity to backup prior to deployment, and have the ability to rollback if required.

History from old repo:
@itscooper opened this issue on Jun 5, 2019
@thomaskonrad reacted with thumbs up emoji
@itscooper itscooper added the 3I2SecureDeployment label on Jun 5, 2019
@itscooper itscooper self-assigned this on Jun 5, 2019
@SebaDele assigned dkefer and unassigned itscooper on Nov 23, 2019
@SebaDele commented on Nov 23, 2019
consider as guidance

automate generation of the acronym list

Extract all instances of 2 or more sequences of uppercase letters across all activities to build the acronym list (if we keep it).
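The proposed extraction is straightforward to sketch, assuming acronyms are runs of two or more uppercase letters as described above; `extract_acronyms` is a hypothetical helper, not existing tooling:

```python
import re

# Sketch of the proposed extraction: collect every run of two or more
# consecutive uppercase letters across the activity texts.
ACRONYM = re.compile(r"\b[A-Z]{2,}\b")

def extract_acronyms(texts):
    """Return a sorted, de-duplicated list of candidate acronyms."""
    found = set()
    for text in texts:
        found.update(ACRONYM.findall(text))
    return sorted(found)
```

A manual review pass would still be needed to weed out non-acronym matches (e.g. fully capitalized headings).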

History from old repo:
@nessimk opened this issue on Jun 7, 2019
@nessimk added enhancement OSS2019 labels on Jun 7, 2019

Create stable/versioned URI references to SAMM entitities

Raised by Roberto Polli in Slack https://owasp.slack.com/archives/C01EQUM5TGS/p1617837141004300:
Hi there! I'm trying to reference SAMM entities (activities & Co) into #dsomm yaml files. I thought there were URIs but I just found that the repo provides yaml files, eg. https://github.com/OWASP/samm/blob/master/Supporting%20Resources/v2.0/Datamodel/Datafiles/Activity%20D-SA-1-A.yml
Are those files consolidated? Is there a way to map them to stable/versioned URIs? Thanks for your time, R cc: @wurstbrot
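One possible mapping from a datafile name to a stable, versioned URI could be sketched like this. The URI scheme and path layout are purely an assumption for discussion, not something the SAMM project has published:

```python
# Purely illustrative: one possible mapping from a datafile name to a
# stable, versioned URI. The URI scheme shown here is an assumption.
def datafile_to_uri(filename, version="2.0"):
    """'Activity D-SA-1-A.yml' -> '.../v2.0/activity/D-SA-1-A' (sketch)."""
    stem = filename.rsplit(".", 1)[0]          # drop the .yml extension
    kind, _, identifier = stem.partition(" ")  # e.g. 'Activity', 'D-SA-1-A'
    return f"https://owaspsamm.org/v{version}/{kind.lower()}/{identifier}"
```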

History from old repo:
@SebaDele opened this issue on Apr 10, 2021
@ioggstream commented on Jul 14
Any news?

DevOps security implications, controls and measures

I think it is important that SAMM 2.0 caters for the SaaS direction a lot of companies are adopting and the collective DevOps tracks that are in flow.

I propose starting with this paper for direction: https://pdfs.semanticscholar.org/8f2c/a1fd43770dfbfdbed9850fd7dfbb6bb85010.pdf

History from old repo:
@bars0um opened this issue on May 28, 2019
@SebaDele added enhancement and guidance labels on Nov 23, 2019
@SebaDele commented on Nov 23, 2019
to be taken into account in the guidance track. The core model itself is deployment and development methodology agnostic.

ASVS reference

as part of security requirement refer to ASVS/MASVS (thx Jeroen)

History from old repo:
@SebaDele opened this issue on Jan 29, 2019
@commjoen reacted with thumbs up emoji
@SebaDele SebaDele added the 2D2SecurityRequirements label on Jan 29, 2019
@23bartman 23bartman self-assigned this on Mar 26, 2019
@SebaDele SebaDele added enhancement guidance labels on Nov 23, 2019
@SebaDele SebaDele assigned johnellingsworth and unassigned 23bartman on Nov 23, 2019
@23bartman commented on Dec 23, 2020
This will be performed when we have the OWASP references solution in place.

Add practice dependencies in v2

suggested by Adriana Verhagen:

In V1.5 you indicated dependencies between diverse practices and each activity maturity; however, in V2.0 this information is gone, and I would like to understand if there are still dependencies and if this has been documented somewhere.

History from old repo:

@SebaDele opened this issue on May 22, 2020

Persian Translation

Hello everybody,

We want to translate SAMM to the Persian language.
Is there any contribution guide for this?

History from old repo:
@v-zafari opened this issue on Dec 23, 2020

@23bartman commented on Dec 23, 2020
Hi Zafari,
thanks for reaching out. I will get back to you with more information on how to contribute on this.
Best,
Bart.

@23bartman 23bartman self-assigned this on Dec 23, 2020
@v-zafari commented on Dec 23, 2020
Thanks Bart, I'm waiting for your response and more details about this.

add roles/structures to the glossary

common roles, organisation structure to be covered by the glossary.
keep it general, explain it should be mapped on organisation implementing SAMM
(NIST 800-16 ?? too detailed)
if in doubt, it applies to you :-)
and add the customer, end-user, ...

History from old repo:
@SebaDele opened this issue on Nov 21, 2018
@23bartman 23bartman assigned @johndileo on Mar 26, 2019
@SebaDele SebaDele added the SAMM 2.0 label on Nov 23, 2019
@SebaDele commented on Nov 23, 2019
list of roles, not structure.

@SebaDele SebaDele assigned @BrettCrawley and unassigned @johndileo on Nov 23, 2019
@SebaDele commented on Dec 10, 2019
@johndileo can you create a basic markdown page with the roles as discussed in Dublin?

Potential confusion around Build Process checks and Scalable Baseline

The following section Implementation/Secure Build/Build Process (Maturity Level 2) has the following line "Finally, add appropriate automated security checks (e.g. using SAST tools) in the pipeline to leverage the automation for security benefit."

To me this seems unclear compared to the requirement for Scalable Baseline (Maturity Level 1) which states "Use automated static and dynamic security test tools for software, resulting in more efficient security testing and higher quality results."

My recommendation would be to remove the line in the Build Process entirely. I don't believe this would lessen the key takeaway for Build Process, i.e. "maintain the integrity of the build process", and it would avoid confusion as to where you score the use of SAST in the pipeline.

Thanks for considering this issue.

Nathan

Interview option that should be reviewed/changed

Hi,
in SAMM_Assessment_Toolbox_v2.0.xlsx, for line 194, "Do you review and update the incident detection process regularly?",
the following answer options don't seem correct: "Yes, for some applications", etc.

In Data Validity, F194 is AnsF. AnsT seems more appropriate, so the options will be
No
Yes, but we improve it ad-hoc
Yes, we improve it at regular times
Yes, we improve it at least annually

History in old repo:
@kostasadriano opened this issue on Mar 20, 2020
@23bartman assigned @johndileo on Dec 23, 2020

Guidance on automation

Based on a comment of Trey: "I do think each area within the standard needs to have its language reviewed for compatibility with full automation of all tasks. It may even be wise to review the wide range of ways these tasks are currently automated by companies that do so, and also consider how many of these tasks may be automated differently going forward. If for no other reason, I think this would prolong the relevance of the standard and, as Jeff pointed out on today's call, make it relevant to a larger audience."

To consider for a next release; probably not possible for all activities.

Idea: systematically include as part of the guidance (possibly via a separate yaml attribute).

History from old repo:
@23bartman opened this issue on Nov 24, 2019
@23bartman 23bartman added the enhancement label on Nov 24, 2019
@23bartman 23bartman assigned SebaDele and 23bartman on Nov 24, 2019

"Threat Assessment" yaml files have a space instead of a dash

I expect

all yaml Datafiles having a consistent name, eg:

Practice D-Security-Architecture.yml
Practice D-Security-Requirements.yml

Instead

Assessment files do not respect that (they have a space instead of a dash)

Practice D-Threat Assessment.yml
Practice V-Architecture Assessment.yml
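The normalization could be sketched as follows, assuming the convention is a single space after the type prefix ('Practice', 'Activity', ...) and dashes everywhere else; `normalize_name` is a hypothetical helper:

```python
# Sketch (assumed convention): keep the single space after the file-type
# prefix and turn any further spaces in the name into dashes, so
# 'Practice D-Threat Assessment.yml' -> 'Practice D-Threat-Assessment.yml'.
def normalize_name(filename):
    prefix, _, rest = filename.partition(" ")
    return f"{prefix} {rest.replace(' ', '-')}"
```

Already-consistent names pass through unchanged, so the helper could double as a CI lint (flag any file where `normalize_name(name) != name`).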

History from old repo:
@ioggstream opened this issue on Apr 14, 2021
@23bartman assigned 23bartman and unassigned 23bartman 22 days ago

SBOM and OBOM question

In Implementation \ Secure Build it states:

Create records with Bill of Materials of your applications and opportunistically analyze these.

This should likely be renamed Software Bill of Materials (SBOM). But I cannot find anywhere in Operations a requirement to maintain an Operations Bill of Materials. Applications are typically deployed to something; oftentimes it's an application server running on an operating system. These additional components form the full stack of an Operations Bill of Materials, but it appears to be assumed and an indirect requirement. I believe this is likely related to #128.

For reference, BSIMM specifically calls out operations bill of materials.

History from old repo:
@stevespringett opened this issue 4 days ago

Wrong word in quality criteria for V-AA-1-B

In the quality criteria for verification/architecture-assessment/stream-b, maturity level 1,
it says:
You consider different types of threats, including insider and data-related one
it should say:
You consider different types of threats, including insider and data-related ones

slack question on streams

https://owasp.slack.com/archives/C0VF1EJGH/p1551911973003900
What's the rationale behind two "streams" in each SAMM 2.0 category? Is it just a logical categorization? One stream doesn't take precedence over the other, correct?

History from old repo:
@SebaDele opened this issue on Mar 24, 2019
@23bartman 23bartman self-assigned this on Mar 26, 2019
@SebaDele SebaDele added guidance SAMM 2.0 labels on Nov 23, 2019
@SebaDele commented on Nov 23, 2019
will be explained as part of the general guidance

@23bartman 23bartman assigned SebaDele and KGABSWEDEN on Dec 10, 2019
@23bartman commented on Dec 23, 2020
We have explanatory text for this on the website, and we have communicated this via Slack. Maybe do another blog post about this? @KGABSWEDEN Could you look into this?

@KGABSWEDEN commented on Dec 23, 2020
Sure @23bartman

A number of small typos to fix

I noticed a number of minor typos while reading through the text but don't have a git environment at hand to fix them directly, hence this issue to track them.

SR1 Activity
The security competences and habits of the expernal suppliers..

https://owaspsamm.org/v1-5/
While we recommend to use the latest version of SAMM, here are is the version 1.5:

SA
Technology management looks at the .. (capitalise management to match stream name)
Add missing periods at the end of the short descriptions for Stream A1, Stream B2

AD1
the team considers each principle in the context of the overall system and identify features

AD2
standardize on one or more shared service per category

TM1 quality criteria
You have a list of the most important technologies used in or in support of each application (add commas)

History from old repo:

@nessimk opened this issue on Jun 2, 2020
@nessimk self-assigned this on Jun 2, 2020

Dinis's implementation model

Can we integrate Dinis' implementation model (facts / Jira / ...) as a guidance document to the core model? I think the proposal makes a lot of sense, but at the same time the core model is not heavily impacted, IMO. This might be a good solution to align both.

History from old repo:
@23bartman opened this issue on May 22, 2019
@23bartman 23bartman assigned SebaDele, 23bartman and @DinisCruz on May 22, 2019
@SebaDele SebaDele added enhancement guidance labels on Nov 23, 2019

Improve phrasing in EG3B

Replace this:
Form communities around roles and responsibilities and enable developers and engineers from different teams and business units to communicate freely and benefit from each other’s expertise.

With this:
Form communities around roles and responsibilities. Enable developers and engineers from different teams and business units to communicate freely so they can benefit from each other’s expertise.

History from old repo:
@Pat-Duarte opened this issue on Mar 20
@Pat-Duarte Pat-Duarte assigned 23bartman on Mar 20

Clarify defect resolution

Based on feedback from Charlotte on the SAMM v2.0 alpha version: "Great seeing the implementation section; I found it to be well done. I have a comment about scanning 3rd party vulnerabilities. We implement a scanner that includes 3rd party vulnerabilities and allows teams to allocate them to a composition analysis tool rather than treat them in the same manner. But I think you may mean that they should be included in the scan and treated as a threat, rather than treated in the exact same way as the in-house vulnerabilities. I think some clarification on the difference in process for the two (3rd party or in-house code security flaws) would be helpful. The tendency is to ignore the 3rd party flaws, as the in-house devs can't usually change the 3rd party code base. So some guidance on how to manage the process of identifying and classifying them, seeing how the in-house code can be made to defend against them, and following up with vendors or organizations that supply the 3rd party code might be useful. They both are important but aren't handled in the exact same way."

Clarify further in defect management that the action to take to resolve a defect depends on the type of the defect (e.g., an architectural issue vs. a coding issue vs. a 3rd party library issue vs. ...).

History from old repo:
@23bartman opened this issue on Nov 24, 2019
@23bartman added 3D3DefectManagement SAMM 2.0 labels on Nov 24, 2019
@23bartman assigned dkefer on Nov 24, 2019
@23bartman self-assigned this on Dec 11, 2019

Design - Security Requirements - Supplier Security Criterion for ML 3

One of the criterion for maturity level 3 reads:

"The vendor has a secure SDLC that includes secure build, secure deployment, defect management, and incident management that align with those used in your organization"

I do not think the vendor's SDLC practices necessarily have to align with those used in your organization, as your organisation may be of a lesser maturity and vendors may have many customers with varying processes. As an indication of the vendor's maturity, I would suggest a criterion along the lines of:

"The vendor has a secure SDLC that includes secure build, secure deployment, defect management, and incident management, and is able to demonstrate operating effectiveness of practices." The criterion has to be independent of my own organization's practices.

Happy to hear your thoughts on this.
