Process some of the contributions by Reza and team on risk analysis
robvanderveer authored Jul 28, 2024
1 parent d9fff56 commit 12df64c
Showing 1 changed file with 6 additions and 7 deletions.
13 changes: 6 additions & 7 deletions content/ai_exchange/content/docs/ai_security_overview.md
@@ -259,7 +259,7 @@ Selecting potential risks (Threats) that could impact the organization requires

Since AI systems are software systems, they require appropriate conventional application security and operational security, in addition to the AI-specific threats and controls covered in this section.

### 2. **Evaluating Risks by Estimating Likelihood and Impact**
### 2. Evaluating Risks by Estimating Likelihood and Impact
To determine the severity of a risk, it is necessary to assess the probability of the risk occurring and to evaluate the potential consequences should the risk materialize.

**Estimating the Likelihood:**
@@ -271,11 +271,11 @@ Evaluating the impact of risks in AI systems involves understanding the potentia
**Prioritizing risks**
The combination of likelihood and impact assessments forms the basis for prioritizing risks and informs Risk Treatment decisions. Organizations commonly use a risk heat map to categorize risks visually by impact and likelihood. This approach facilitates risk communication and decision-making, and allows management to focus on the risks with the highest severity (high likelihood and high impact).
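
A minimal sketch of how likelihood and impact ratings could be combined into a severity score and heat-map category, as described above. The 1–5 rating scales, the multiplication, and the bucket thresholds are illustrative assumptions, not values prescribed by this document.

```python
# Illustrative only: combine likelihood and impact ratings (1 = very low .. 5 = very high)
# into a severity score and a coarse heat-map bucket. Scales and thresholds are assumed.

def severity(likelihood: int, impact: int) -> int:
    """Severity as the product of likelihood and impact (range 1..25)."""
    return likelihood * impact

def heat_map_bucket(likelihood: int, impact: int) -> str:
    """Map a severity score to a heat-map category used for prioritization."""
    score = severity(likelihood, impact)
    if score >= 15:
        return "high"    # high likelihood and high impact: treat first
    if score >= 8:
        return "medium"
    return "low"

# Example: a data poisoning threat judged likely (4) with severe impact (5)
print(heat_map_bucket(4, 5))  # -> "high"
```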

### 3. **Risk Treatment**
### 3. Risk Treatment
Risk treatment is about deciding what to do with the risks. It involves selecting and implementing measures to mitigate, transfer, avoid, or accept the cybersecurity risks associated with AI systems. This process is critical because of the unique vulnerabilities and threats related to AI systems, such as data poisoning, model theft, and adversarial attacks. Effective risk treatment is essential for robust, reliable, and trustworthy AI.

Risk Treatment options are:
1. **Mitigation**: Implementing controls to reduce the likelihood or impact of a risk. This is often the most common approach for managing AI cybersecurity risks. See the many controls in this resource.
1. **Mitigation**: Implementing controls to reduce the likelihood or impact of a risk. This is the most common approach for managing AI cybersecurity risks. See the many controls in this resource and the 'Select controls' subsection below.
- Example: Enhancing data validation processes to prevent data poisoning attacks, where malicious data is fed into the model to corrupt its learning process and negatively impact its performance.
2. **Transfer**: Shifting the risk to a third party, typically through transfer learning, federated learning, insurance or outsourcing certain functions.
- Example: Using third-party cloud services with robust security measures for AI model training, hosting, and data storage, transferring the risk of data breaches and infrastructure attacks.
@@ -302,13 +302,12 @@ For the threats that are the responsibility of other organisations: attain assur
### 7. Select controls
Then, for the threats that are relevant to you and for which you are responsible: consider the various controls listed with that threat (or with the parent section of that threat) and the general controls (they always apply). When considering a control, look at its purpose and determine whether it is important enough to implement, and to what extent. This depends on the cost of implementation compared to how well the control's purpose mitigates the threat, and on the level of risk of the threat. These elements also determine the order in which you select controls: highest risks first, starting with the lower-cost controls (low hanging fruit).
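
A small sketch of the selection order described above: highest-risk threats first, and within a threat the lower-cost controls first. The data structure, the numeric risk and cost ratings, and the example threats and controls are hypothetical, chosen only to illustrate the ordering.

```python
# Illustrative only: order candidate controls by descending threat risk,
# then by ascending implementation cost ("low hanging fruit" first).
# All names and ratings below are hypothetical examples.

candidate_controls = [
    # (threat, control, threat_risk 1..25, implementation_cost 1..5)
    ("data poisoning", "input data validation", 20, 2),
    ("data poisoning", "data provenance tracking", 20, 4),
    ("model theft", "rate limiting on the model API", 12, 1),
    ("model theft", "model watermarking", 12, 3),
]

selection_order = sorted(candidate_controls, key=lambda c: (-c[2], c[3]))

for threat, control, risk, cost in selection_order:
    print(f"{threat}: {control} (risk={risk}, cost={cost})")
```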

### 8. Use references
When implementing a control, consider the references and the links to standards. You may have implemented some of these standards, or the content of the standards may help you to implement the control.
Controls typically have quality aspects that need to be fine-tuned to the situation and the level of risk. For example: the amount of noise to add to input data, or the thresholds for anomaly detection. The effectiveness of controls can be tested in a simulation environment to evaluate the performance impact and security improvements and to find the optimal balance. Fine-tuning of controls needs to take place continuously, based on feedback from testing in simulation and in production.

### 9. Risk acceptance
### 8. Residual risk acceptance
In the end, you need to be able to accept the residual risk that remains for each threat, given the controls that you have implemented.

### 10. Further management of these controls
### 9. Further management of the selected controls
Manage the selected controls as part of your security program (see [SECPROGRAM](/goto/secprogram/)), which includes continuous monitoring, documentation, reporting, and incident response.

---