Trusted AI

 
 

Problem

In today’s world, IT security is more critical than ever. Companies cannot afford to lose visibility into their own infrastructures, and indirect threats from vendors pose a significant risk to network security.

In the government domain specifically, agencies must secure their Artificial Intelligence/Machine Learning (AI/ML) systems end to end. However, their security processes are not equipped to accurately assess and grant an Authority to Operate (ATO) in a timely manner, evaluate new AI threats, incorporate DevOps practices, or predict future threats and attacks. The gap between AI/ML systems and current security processes is a significant hurdle impacting the mission, as shown in Figure 1.

Furthermore, Trusted AI must assess and secure the entire ML pipeline as well as the Data Operations (DataOps) and ML Operations (MLOps) platform(s), as shown in Figure 2.

 

Solution

How to Bridge the Security Gap

Ampsight has modified existing open-source software to create a rapid threat assessment tool for AI-enabled systems. This web-based tool, named Cyber Vector, will allow the government to assess, track, and monitor ML workloads within the existing Risk Management Framework (RMF).

By conducting rapid threat assessments, stakeholders can continuously monitor their organization's operational risks in line with their ultimate decision-making responsibilities (Figure 3). By reviewing the anticipated impact, controls, and plans of action for each threat, managers can strategize and apply proper risk mitigation techniques at all times.

Augmenting an organization’s existing security processes with our solution addresses the Security Engineering “Problem Definition” and “Trustworthiness Assessment” areas of AI System Security, so that security professionals can effectively evaluate AI threats and shorten the time required to achieve an ATO.
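To make the rapid threat assessment concept concrete, the sketch below shows one way an ML workload could be registered and tracked against RMF controls, with each identified threat carrying an anticipated impact, an associated control, and a plan of action. This is a minimal illustration of the idea only; the class names, fields, and control references are hypothetical examples and do not represent Cyber Vector’s actual data model or interface.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical data model for a rapid threat assessment record.
# All names and fields are illustrative assumptions, not Cyber Vector's API.

@dataclass
class Threat:
    description: str          # e.g., training-data poisoning
    anticipated_impact: str   # e.g., degraded model accuracy in production
    rmf_control: str          # related NIST SP 800-53 control, e.g., "SI-4"
    plan_of_action: str       # remediation step and owner
    remediated: bool = False

@dataclass
class MLWorkloadAssessment:
    workload_name: str        # e.g., "fraud-detection pipeline"
    pipeline_stage: str       # e.g., "DataOps ingest", "MLOps deployment"
    threats: List[Threat] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def open_risks(self) -> List[Threat]:
        """Return threats that still need mitigation (continuous-monitoring view)."""
        return [t for t in self.threats if not t.remediated]

# Example: register a workload and review its outstanding operational risks.
assessment = MLWorkloadAssessment(
    workload_name="fraud-detection pipeline",
    pipeline_stage="MLOps deployment",
)
assessment.threats.append(
    Threat(
        description="Adversarial inputs at the model-serving endpoint",
        anticipated_impact="Misclassification of high-value transactions",
        rmf_control="SI-4 (system monitoring)",
        plan_of_action="Add input validation and drift alerts by next sprint",
    )
)
for threat in assessment.open_risks():
    print(f"[OPEN] {threat.rmf_control}: {threat.description}")
```

In a web-based tool, records of this kind could feed a continuous-monitoring dashboard that surfaces open risks to stakeholders alongside the controls and plans of action already documented in the RMF package.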

 

Conclusion

In the ever-evolving cyber arena, system owners must develop dynamic tools and processes for addressing the emerging threats that accompany rapid technological developments such as AI/ML, so that they can securely maintain information dominance.

Given the inherent nature of AI/ML, these approaches must be dynamic and include rapid threat assessments, enabling stakeholders to continuously monitor their organization's operational risks in line with their ultimate decision-making responsibilities.

The tool offers a rich set of features that answer the need to rapidly achieve compliance with federal processing standards. Ultimately, Cyber Vector promises to be a significant addition to the security practitioner’s arsenal, providing the ability to analyze the anticipated impact, controls, and plans of action needed to remediate each threat. While operating in an AI/ML environment, operational managers using Cyber Vector can effectively strategize and apply proper risk mitigation techniques at all times.

 

Figure 1. “The Government’s Current, Outdated Security for AI/ML Systems”

 

Figure 2. “Ideal AI/ML Secure System”

 

Figure 3. “Cyber Vector’s Continuous Monitoring Feature”

 
 