OWASP IoT Security Verification Standard (ISVS)
License: Other
Hi!
I noticed that V2 doesn't currently contain any requirements for:
Do you think it would be a good idea to add these? Some example (draft) requirements could be:
GitBook hosting has been an issue due to GitHub account requirements (tied to one user), along with formatting constraints. We want to explore static site generators that are easy to maintain, provide free hosting, and enable formatting customizations to illustrate emphasis where needed.
MkDocs
https://www.mkdocs.org/
https://github.com/gristlabs/mkdocs-windmill
https://mkdocs.github.io/mkdocs-bootswatch/
https://github.com/squidfunk/mkdocs-material
Cheat sheet series uses this
Jekyll - Preferred option
https://jekyllrb.com/
Theme https://github.com/pmarsceill/just-the-docs - maybe with this or another theme, and a color that aligns with OWASP projects
GitHub Pages utilizes Jekyll https://pages.github.com/
https://dev.to/ows_ali/how-to-host-a-static-website-on-github-for-free-2pd1
Note: Modifying the report structure may break the document build actions, which will then need to be updated.
For L2 and L3: Verify that third-party code and components are known and that their integrity and authenticity are validated before execution.
Both require almost the same thing (2.3.1 is only missing API keys as an example).
I would suggest removing 2.2.1 and adding API keys to 2.3.1, since this requirement fits Data Protection better than Authorization.
The ISVS currently does not cover security requirements related to detecting and responding to security incidents.
Example requirement that's missing: Verify that an appropriate response strategy is in place in case an end device's root keys are compromised, given that root keys cannot be remotely updated.
Other example of a requirement that exists but could be generalized:
| 4.5.8 | Verify that users can obtain an overview of paired devices to validate that they are legitimate (for example, by comparing the MAC addresses of connected devices to the expected ones). | | | ✓ |
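To make the generalized requirement concrete, here is a minimal, hypothetical sketch of the kind of check a companion app could implement. The device lists are illustrative; a real app would query the OS Bluetooth/network stack for the currently paired peers.

```python
# Hypothetical sketch: flag paired devices whose MAC addresses are not on
# a user-approved allowlist. Lists here are illustrative placeholders.

def find_unexpected_devices(paired_macs, approved_macs):
    """Return paired MAC addresses that are not in the approved set."""
    def normalize(mac):
        return mac.strip().lower()
    approved = {normalize(m) for m in approved_macs}
    return sorted(normalize(m) for m in paired_macs
                  if normalize(m) not in approved)

paired = ["AA:BB:CC:DD:EE:01", "aa:bb:cc:dd:ee:02", "AA:BB:CC:DD:EE:99"]
approved = ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"]
print(find_unexpected_devices(paired, approved))  # ['aa:bb:cc:dd:ee:99']
```

The same comparison works regardless of the link layer (Bluetooth, Wi-Fi, Ethernet), which supports wording the requirement generically.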
This should apply to any network location, not only TCP/IP-based ones (e.g. NFS and TFTP use UDP and are typical protocols used for network boot).
Suggestion:
Verify that the bootloader does not allow code to be loaded from arbitrary locations. Locations include both local storage (SD, USB, etc.) and network locations (NFS, TFTP, etc.).
The suggestions for Bluetooth and Wi-Fi are reasonable, but for L3 I think they need to go further:
The firmware update chapter currently explicitly covers roll-back attacks. The Freeze and Mix & Match attack cases are not (explicitly) covered.
These, together with others that we potentially overlooked, can be found here: https://theupdateframework.io/security/
I would enhance the suggestion to push for secure by default configuration at release as follows:
Verify the device is released with firmware and configuration appropriate for a release build (as opposed to debug versions), with appropriate security functions enabled by default.
The appropriate security functions would refer to the threat model done in 1.1.
Would it be beneficial to explicitly state that data collected from endpoints (both general and sensitive) should be accessible only to authorized personnel with sufficient access privileges across the entire ecosystem, especially when third parties are involved?
The suggestion to use OS benchmarks is part of 3.2.1, and reference links are provided at the bottom of that page, which is great.
However, I have the feeling that the recommendation to use OS benchmarks should be made more explicit.
I believe it is something that deserves additional explanation, either in the control objective or in a dedicated section. It might boil down to personal experience, but benchmarking is one of the best tools available to ensure you properly secure your OS. It has been a requirement for governmental customers for a long time now (SCAP, especially the DISA benchmarks), but it is also something I see more and more from security-conscious commercial customers. It can be automated, and it produces clear reports and traceability.
It takes the guesswork out of "follow industry standards".
Leaning on 4.1.6 for L2/L3, I suggest rephrasing this requirement as follows for L1:
Verify that, in case TLS is used, the device checks the validity of endpoint certificates and disallows connections to endpoints that use an invalid certificate (e.g. wrong common name, untrusted issuer, etc.).
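For illustration, this is roughly what the rephrased requirement means in code. A minimal Python sketch (host and port are placeholders): the default SSL context already enforces both checks, and the point of the requirement is that device firmware must never disable them.

```python
import socket
import ssl

def connect_strict(host, port=443):
    """Open a TLS connection, rejecting any invalid endpoint certificate."""
    context = ssl.create_default_context()   # loads the trusted CA store
    context.check_hostname = True            # reject wrong common name / SAN
    context.verify_mode = ssl.CERT_REQUIRED  # reject untrusted issuers
    with socket.create_connection((host, port), timeout=5) as sock:
        # Raises ssl.SSLCertVerificationError on an invalid certificate.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

A verification step could grep firmware for code that sets `verify_mode` to `CERT_NONE` or disables hostname checking, which are the typical ways this requirement gets violated in practice.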
The ASVS suggests the use of threat modelling for every design change - 1.1.2 (Secure Software Development Lifecycle Requirements) and the protection against identified risks or threats - 11.1.5 (Business Logic Security Requirements).
It would be helpful to add a similar requirement to suggest threat modelling or similar methodologies for identification of all application threats and risks specifically in the IoT ecosystem. This would fall under section V1: IoT Ecosystem Requirements.
Suggestion:
Option 1: Modify existing requirements in 1.1 - Application and Ecosystem Design or 1.3 - Secure Development to include the use of threat modeling or similar methodologies for protecting applications in the IoT ecosystem based on likely security risks or threats.
Option 2: Add a new requirement to section 1.1 - Application and Ecosystem Design:
1.1.7 Verify the use of threat modeling or similar methodologies for each application in the IoT ecosystem based on likely security risks or threats | ✓ | ✓ | ✓ |
Please share your thoughts on this.
This is a strong requirement for GDPR and probably other legislations: "verify that the product has a data protection / privacy policy".
Regarding: "4.1.2 Verify that in case TLS is used its configured to only use FIPS-compliant cipher suites (or equivalent)."
In my opinion, FIPS-compliant ciphers are aimed at government and defense applications, and more modern but still widely accepted ciphers can provide better performance.
I believe this is better put in the ASVS: "9.1.2 Verify using up to date TLS testing tools that only strong cipher suites are enabled, with the strongest cipher suites set as preferred."
Would love to know if there is a specific reasoning behind the FIPS recommendation, or to hear what others have to say.
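As a point of comparison, restricting an OpenSSL-backed stack to strong (but not necessarily FIPS) suites is straightforward. A hedged Python sketch; the cipher string below is one common choice, not a mandate.

```python
import ssl

# Sketch: prefer strong, forward-secret suites rather than a fixed FIPS
# list. TLS 1.3 suites are always strong; the cipher string prunes weak
# TLS 1.2 suites (anonymous auth, MD5, 3DES, no forward secrecy).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5:!3DES")
for suite in context.get_ciphers():
    print(suite["name"])
```

This matches the ASVS 9.1.2 wording ("only strong cipher suites are enabled") better than a FIPS reference, and it is directly testable with up-to-date TLS scanning tools.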
We have to make sure to fix the numbering of requirements before release.
There are a few elements of vocabulary to fix throughout the ISVS. Mainly, the ISVS relates to verifying the security of IoT systems, which comprise devices, communications, and processing in mobile applications, at the edge, and in cloud systems.
I suggest refining the vocabulary used in the introduction to reflect this in the Frontispiece:
4.2.7 is specific to MQTT, but this is already required for the whole ecosystem in 2.1.5. Also, 4.2.7 should be renumbered to 4.2.4 if it is not removed altogether.
Hi,
3.2.9 on OS configuration says that one should:
Verify the embedded OS provides protection against unauthorized access to RAM (e.g. RAM scrambling).
I'm somewhat confused by this, as I thought RAM scrambling was a transparent hardware security measure. Could you please list some examples of OSs where you can effectively configure RAM scrambling or similar measures? Is there anything in Linux for that?
Thanks,
Attila
Implementing IMA requires a TPM chip to store measurement hashes of files. TPMs on embedded devices are rare and can be more costly than the "security chips" offered by various semiconductor vendors. TPM libraries, drivers, a remote attestation server, and bootloader support would be major dependencies to factor in.
We should think about either tailoring this requirement to level 3 capable devices with TPMs (which could mean small market adoption) or generalizing it to specify the usage of integrity protection solutions such as IMA/EVM, dm-verity, and dm-integrity, which would cast a wider net.
WPS is super useful for non-tech people, they press a button and connect.
I would rewrite "Verify that Wi-Fi Protected Setup (WPS) is not used to establish Wi-Fi connections between devices." to "can be deactivated", or to go even further: "is deactivated by default when physical access is a threat".
I can't find any requirement about the communication channel for secure updates.
Suggestion: add "3.4.12 Verify that over-the-air updates are transmitted using a secure communication channel."
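A minimal sketch of what the suggested requirement implies on-device, assuming HTTPS as the secure channel. The URL and digest are placeholders, and a hash check is shown only for brevity; real deployments should verify a cryptographic signature (e.g. per TUF) rather than a bare digest.

```python
import hashlib
import urllib.request

def verify_firmware(firmware, expected_sha256):
    """Refuse a firmware image whose digest does not match the expected one."""
    digest = hashlib.sha256(firmware).hexdigest()
    if digest != expected_sha256:
        raise ValueError("firmware digest mismatch; refusing to install")
    return firmware

def fetch_update(url, expected_sha256):
    # HTTPS provides a confidential, server-authenticated channel; the
    # server certificate is validated by default by urllib.
    with urllib.request.urlopen(url) as resp:
        return verify_firmware(resp.read(), expected_sha256)
```

Pairing channel security (transport) with image verification (end-to-end) covers the case where the delivery infrastructure itself is compromised.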
There are several areas which I believe could be more consistent:
Wi-Fi and Bluetooth
These are very specific protocols. What if the device uses LoRaWAN or Zigbee?
It could be more beneficial to separate LAN and WAN communications as the threat model is different.
Figure used in https://github.com/OWASP/IoT-Security-Verification-Standard-ISVS/blob/master/en/Using_ISVS.md#the-isvs-security-model
The blocks used in the figure are confusing: they mix components and features (e.g. Wi-Fi or software updates) with processes (e.g. design, secure development).
I would suggest to:
Enabled debug interfaces do not apply only to FPGAs. For example, how many devices feature a JTAG port that allows you to dump and rewrite the firmware without authentication?
It looks very difficult to know what to check in 1.1.1 as it is open to interpretation and non-repeatable.
Suggestion: rewrite to something more tangible: "Verify that threats have been identified and remediated according to the risk they present to the IoT system."
This would effectively support requirements of ETSI EN 303 645 and ISA/IEC 62443-4-1.
A main issue comes from insecure updates being flashed without authorization (think ransomware or Botnet script).
A good way to protect from this is to have the following requirement: Verify that updates can only be installed by authorized users.
What about ZigBee for example? Could be resolved with
Use the strongest security settings available for wireless communication
But IMHO wired communication should also be protected as well as possible, so my suggestion is this:
Use the strongest security settings available for any communication protocol in use.
The second sentence in the objective seems very important to me and could be extended as follows to make the point even clearer to the reader.
On the other hand, hardware that contains backdoors or undocumented debug features can completely compromise the security of the entire device even if adequate security measures have been taken on the other layers of the stack.
Verify that all components can be updated
Additionally: verify that all components are supported by their supplier for a duration at least equivalent to the product warranty period.
3.2.11 mentions ensuring third-party code is executed in a containerized runtime environment. In my experience, containerization (especially when used out of the box) is not a security feature the way virtual machines can be.
Containers make it easier to deploy, maintain, and scale applications, but expert knowledge and specific configuration are required to ensure containers are actually properly isolated from the host operating system and from each other. I would recommend explicitly mentioning this in 3.2.11 and providing a link to e.g. the OWASP Docker Top 10 in the references at the bottom.
Maybe I don't understand the requirement correctly, but:
Since the requirement talks about hardware that is chosen, I understand that this is not hardware that was engineered in-house. Given that, I don't see how one would verify that no undocumented (or unofficially documented) features are in place other than testing for this extensively. Is that the idea of the requirement?
Besides being a bit confusing to read, 4.2.2 is also already covered by 2.4.1. I suggest removing it.
I'm missing a whole set of requirements about monitoring devices in the field, security-oriented log collection, post-deployment vulnerability tracking, etc.
Is this intentionally out of scope? If not, it should probably be in the ecosystem chapter.
Chapter V4: Communication Requirements contains concrete requirements for the popular Wi-Fi and Bluetooth protocols. However, there are many more popular protocols, such as LoRaWAN.
Confusing wording: "Verify good password policies are enforced throughout the IoT ecosystem by disallowing hardcoded passwords and provisioning duplicate identities or passwords across devices." I would recommend: "and provisioning unique identities and passwords for each device."
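The recommended wording maps to a simple provisioning-time rule. A hedged sketch; the `dev-` ID scheme is illustrative, and on real hardware the secret would be stored in protected storage rather than returned in plain form.

```python
import secrets
import uuid

def provision_device():
    """Generate a unique identity and a random per-device password."""
    device_id = "dev-" + str(uuid.uuid4())  # unique identity (scheme illustrative)
    password = secrets.token_urlsafe(24)    # 24 random bytes, URL-safe encoded
    return device_id, password

first = provision_device()
second = provision_device()
# No duplicate identities or passwords across devices.
assert first[0] != second[0] and first[1] != second[1]
```

This is exactly the failure mode behind many IoT botnets: a shared factory password across an entire product line.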
Maybe it's me, but I don't get the point of the reference to the 16-32 byte AES key sizes. Why not just require this:
Verify that encryption keys are the maximum size the device supports and that this size is sufficient to adequately protect the information transmitted over the Bluetooth connection.
There is no requirement regarding the backup of configuration, which could support resilience requirements.
Would it make sense to add a requirement: "allow users to backup their configuration"? L1 to L3.
The ISVS currently does not address that not implementing a security control, and/or accepting a failed security control or vulnerability, is an effort-vs-risk decision. We could add something about this to the "Using the ISVS" chapter.
1.2.9 goes one step further than 5.1.7, but both aim at the same goal, so they should be put in the same table.
I also doubt the benefit of 5.1.7, since it makes no real difference to an attacker whether the header is populated or not. If the pads are there, accessing the (JTAG) interface is easy anyway.
Does it make sense to split up requirement 2.1.5:
Verify certificate based authentication is preferred over password based authentication within the IoT ecosystem.
"Use password authentication using strong passwords" could be a L1
"Use certificate based authentication" could be L3
Is there a recommendation on certificate based authentication?
Unique certificate per device/user?
How are these certificates best generated and installed on the device? During manufacturing, registration, something else?
For professional IoT devices, the manufacturer might want to support protocols such as SCEP for certificate mgmt.
Any requirements regarding certificate renewal?
Maybe all of this is going into too much detail.
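To make one of the questions above concrete: certificate-based device authentication typically means mutual TLS. A minimal, hedged Python sketch; the file paths are placeholders, and on real hardware the private key should sit in a secure element where available.

```python
import ssl

def make_mtls_context(ca_path, cert_path, key_path):
    """Build a client TLS context that validates the server AND presents
    a per-device certificate for authentication (mutual TLS)."""
    context = ssl.create_default_context(cafile=ca_path)  # trust anchor for the server
    context.load_cert_chain(certfile=cert_path, keyfile=key_path)  # device identity
    return context
```

Per-device certificates also answer the uniqueness question: revoking one compromised device does not affect the rest of the fleet, which shared credentials cannot offer.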
In IoT there are sometimes webservers which are built into devices. The HTTPS certificate can sometimes be retrieved. If that certificate is signed by a public CA for a domain owned and operated by the company, this certificate can be misused in MitM or phishing attacks.