Research
03.04.2025
Agentic Software Development and Engineering Best Practices
For Developers, Security Practitioners, and AI Researchers building and deploying agents in production
As the application economy shifts quickly from an API-driven model to an AI-agent-driven one, AI agents are acting as autonomous digital assistants across verticals, executing tasks on behalf of end users by interacting with IT, ERP, CRM, HR, and finance systems and tools in an enterprise context. This autonomy brings a distinct set of security challenges, both during development and at runtime. While Marqus AI's runtime Agentic AI security takes care of detecting issues and remediating them at runtime, the following software engineering best practices should be applied by developers and AI researchers while building AI agents. We hope this serves as a technical enablement resource for the developers and partners who consider themselves agent builders. The practices below align with the shift-left security, or security-by-design, paradigm in the security industry.
1. Ensure Secure Communication Channels for Agent-to-Agent Collaboration
Use Encrypted Protocols for Communication
AI agents often need to communicate with other systems and tools, which makes securing these communication channels essential. We propose the following best practices in doing so:
TLS and HTTPS: Always use TLS and HTTPS to encrypt communications between AI agents and external systems.
API Security: Secure APIs with authentication mechanisms like OAuth to ensure that only authorized agents can access specific services.
By securing communication channels, you prevent unauthorized interception and tampering of data exchanged by AI agents.
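As a minimal sketch of both practices, assuming the Python requests library and hypothetical endpoint URLs, client credentials, and scope (none of these names come from a specific product), an agent might obtain an OAuth 2.0 token over HTTPS and then call a peer agent's API like this:

    import requests

    # Hypothetical endpoints; substitute your identity provider and agent API.
    TOKEN_URL = "https://auth.example.com/oauth2/token"
    AGENT_API = "https://agents.example.com/v1/tasks"

    def get_access_token(client_id: str, client_secret: str) -> str:
        # OAuth 2.0 client-credentials grant over HTTPS. Certificate
        # verification is on by default (verify=True); never disable it.
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials", "scope": "tasks:execute"},
            auth=(client_id, client_secret),
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    def call_peer_agent(token: str, payload: dict) -> dict:
        # Every agent-to-agent call carries the bearer token so the peer
        # can authorize the request before acting on it.
        resp = requests.post(
            AGENT_API,
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

Note that requests verifies TLS certificates by default; resist any temptation to pass verify=False in production.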
2. Implement Strong Authentication and Access Controls
Restrict Agent Access Based on Roles and Tasks
AI agents should have access only to the resources necessary for their specific tasks to minimize the risk of unauthorized actions. Best Practices:
Role-Based Access Control (RBAC): Assign roles to AI agents and restrict their access to only what is necessary for their role.
Multi-Factor Authentication (MFA): Use MFA for authenticating agents so that even if one credential is compromised, unauthorized access is prevented.
Strong authentication and access controls prevent misuse and limit the impact of potential breaches.
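A deny-by-default RBAC check can be as small as the sketch below; the role names, tool names, and registry are illustrative assumptions, not part of any particular agent framework:

    # Registry mapping tool names to callables (illustrative stubs).
    TOOLS = {
        "read_employee_record": lambda emp_id: {"id": emp_id, "dept": "HR"},
        "create_payment_draft": lambda amount: {"draft_id": 1, "amount": amount},
    }

    # Each agent role is allowed a fixed set of tools and nothing else.
    ROLE_PERMISSIONS = {
        "hr_agent": {"read_employee_record"},
        "finance_agent": {"create_payment_draft"},
    }

    def invoke_tool(agent_role: str, tool_name: str, *args, **kwargs):
        # Deny by default: unknown roles resolve to an empty allow-list.
        allowed = ROLE_PERMISSIONS.get(agent_role, set())
        if tool_name not in allowed:
            raise PermissionError(f"role {agent_role!r} may not call {tool_name!r}")
        return TOOLS[tool_name](*args, **kwargs)

    # An hr_agent can read records but cannot draft payments:
    invoke_tool("hr_agent", "read_employee_record", "E-1042")

Because unknown roles and unlisted tools are rejected automatically, the failure mode stays safe as new tools are added.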
3. Regularly Monitor and Audit AI Agent Activities
Continuous Monitoring for Anomalous Behavior
AI agents must be continuously monitored to detect and respond to suspicious activities promptly. Best Practices:
Logging and Auditing: Maintain detailed logs of all actions performed by AI agents. Regularly audit these logs to identify and investigate unusual behavior.
Real-Time Alerts: Implement real-time monitoring and alerting systems to detect and respond to suspicious activities immediately.
Continuous Validation and Verification: Conduct thorough validation before deployment and ongoing verification to ensure compliance with organizational standards. Evaluate the AI system against expected outcomes to maintain accuracy and fairness.
Monitoring and auditing help in early detection and mitigation of security incidents.
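As one illustrative sketch, using only the Python standard library and an assumed policy threshold of 20 actions per minute, agent actions can be written to a structured audit log with a naive real-time alert on bursts of activity:

    import json, logging, time
    from collections import defaultdict, deque

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("agent.audit")

    recent_actions = defaultdict(deque)  # agent_id -> recent action timestamps
    RATE_LIMIT = 20                      # max actions per minute (assumed policy)

    def record_action(agent_id: str, action: str, target: str) -> None:
        now = time.time()
        # Structured JSON entries are easy to ship to a SIEM and audit later.
        audit_log.info(json.dumps(
            {"ts": now, "agent": agent_id, "action": action, "target": target}))
        window = recent_actions[agent_id]
        window.append(now)
        while window and now - window[0] > 60:   # keep a 60-second window
            window.popleft()
        if len(window) > RATE_LIMIT:
            # A crude anomaly signal: an agent acting far faster than expected.
            audit_log.warning(json.dumps(
                {"ts": now, "agent": agent_id, "alert": "action_rate_exceeded"}))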
4. Secure AI Agent Training and Deployment Environments
Isolate and Protect the Environments
The environments where AI agents are trained and deployed must be secure to prevent tampering and unauthorized access. Best Practices:
Sandboxing: Use sandbox environments to isolate training and testing phases from production environments.
Secure Configuration: Ensure that AI agent environments are configured securely, with minimal permissions and hardened against attacks.
Isolating and securing the environments helps protect the integrity and confidentiality of the AI agents.
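The sketch below illustrates the least-privilege idea in Python by running an agent-invoked tool in a subprocess with a stripped-down environment and a hard timeout; production sandboxes would layer on OS-level controls such as containers, seccomp profiles, or read-only filesystems:

    import subprocess

    def run_tool_sandboxed(cmd: list[str], timeout_s: int = 30) -> str:
        # The child process inherits no secrets from the parent environment,
        # and a hard timeout kills runaway or stalled tools.
        result = subprocess.run(
            cmd,
            env={"PATH": "/usr/bin"},   # minimal environment, no inherited vars
            capture_output=True,
            text=True,
            timeout=timeout_s,
            check=True,                 # raise if the tool exits non-zero
        )
        return result.stdout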
5. Implement Strong Data Privacy Measures
Protect Sensitive Data Handled by AI Agents
AI agents often handle sensitive data, making it crucial to implement measures to protect this data. Best Practices:
Data Anonymization: Anonymize or pseudonymize data wherever possible to minimize exposure of sensitive information.
Access Controls: Implement strict access controls to ensure that sensitive data is accessible only to authorized entities.
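A minimal pseudonymization sketch, using Python's standard hmac module, replaces a direct identifier with a keyed hash so records remain linkable for analytics without exposing raw PII (the hard-coded key below is a placeholder; a real key belongs in a secrets manager):

    import hmac, hashlib

    def pseudonymize(value: str, key: bytes) -> str:
        # Keyed HMAC: the same input maps to the same token, so joins still
        # work, but the original value cannot be recovered without the key.
        return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

    key = b"load-me-from-a-secrets-manager"   # placeholder, never hard-code
    record = {"email": "alice@example.com", "order_total": 129.99}
    safe_record = {**record, "email": pseudonymize(record["email"], key)}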
Protecting sensitive data handled by AI agents helps maintain privacy and compliance with regulations.
6. Ensure Explainability and Transparency of Agent Actions
Provide Insight into Agent Decision-Making
Understanding how AI agents make decisions is crucial for identifying and mitigating potential security risks. Best Practices:
Explainable AI: Use techniques that provide clear explanations of the decision-making processes of AI agents.
Documentation: Maintain comprehensive documentation of AI agent configurations, training data, and decision-making logic.
Explainability and transparency enhance trust and enable better security assessments.
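One lightweight way to make agent decisions reviewable is a structured decision record per step; the schema below is an illustrative assumption rather than a standard:

    import json, time, uuid

    def record_decision(agent_id: str, user_input: str, chosen_tool: str,
                        rationale: str, retrieved_context: list[str]) -> dict:
        entry = {
            "decision_id": str(uuid.uuid4()),
            "ts": time.time(),
            "agent": agent_id,
            "input": user_input,
            "tool": chosen_tool,
            "rationale": rationale,        # e.g., model-produced justification
            "context": retrieved_context,  # documents/tools that informed it
        }
        # Append-only JSONL makes it simple to reconstruct decisions later.
        with open("decision_log.jsonl", "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry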
7. Develop and Test Incident Response Plans
Prepare for AI Agent Security Incidents
Having a robust threat detection and incident response plan specifically for AI agents ensures a quick and effective response to security incidents. Best Practices:
Incident Response Planning: Develop a detailed incident response plan that includes specific procedures for AI agent-related incidents.
Regular Drills: Conduct regular drills and simulations to test the effectiveness of the incident response plan and make necessary adjustments.
Preparedness for incidents helps mitigate the impact of security breaches involving AI agents.
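The containment step of such a plan can be automated; the sketch below is hypothetical, with print statements standing in for calls to your identity provider and task queue:

    def revoke_credentials(agent_id: str) -> None:
        # Hypothetical hook: call your identity provider's revocation API.
        print(f"[IDP] revoked credentials for {agent_id}")

    def pause_queue(agent_id: str) -> None:
        # Hypothetical hook: freeze the agent's pending tool calls.
        print(f"[QUEUE] paused task queue for {agent_id}")

    def quarantine_agent(agent_id: str) -> None:
        # Runbook order matters: isolate first, investigate second, and
        # restore only after the root cause is understood.
        revoke_credentials(agent_id)
        pause_queue(agent_id)
        print(f"[SOC] agent {agent_id} quarantined pending investigation")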
8. Data Logging and Observability Best Practices for Agent Actions
Establish auditing and logging practices: Implement secure logging mechanisms to track agent and user activities. Use digital forensics and monitoring tools to maintain oversight of system performance and security events.
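One way to make such logs tamper-evident is hash chaining, where each entry commits to the hash of the previous one so any modification breaks the chain on audit; here is a minimal standard-library sketch:

    import hashlib, json

    class HashChainLog:
        def __init__(self):
            self.entries = []
            self._prev_hash = "0" * 64   # genesis value

        def append(self, event: dict) -> None:
            # Each payload embeds the previous hash, linking the entries.
            payload = json.dumps({"event": event, "prev": self._prev_hash},
                                 sort_keys=True)
            digest = hashlib.sha256(payload.encode()).hexdigest()
            self.entries.append({"payload": payload, "hash": digest})
            self._prev_hash = digest

        def verify(self) -> bool:
            # Recompute the chain; any edited or dropped entry breaks it.
            prev = "0" * 64
            for e in self.entries:
                if json.loads(e["payload"])["prev"] != prev:
                    return False
                if hashlib.sha256(e["payload"].encode()).hexdigest() != e["hash"]:
                    return False
                prev = e["hash"]
            return True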
9. Implement Secure Code Signing and App Whitelisting
Code signing and app whitelisting are techniques used to verify the integrity and authenticity of AI agent software. Best Practices:
Secure Code Signing: Sign AI agent software with digital certificates to ensure that the code has not been tampered with and is from a trusted source.
App Whitelisting: Implement app whitelisting policies that allow only authorized software to run on AI agent devices, reducing the attack surface.
Secure code signing and app whitelisting help prevent unauthorized software from executing on AI agents.
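As a minimal verification sketch, assuming the cryptography library (pip install cryptography) and an Ed25519 publisher key generated inline purely for demonstration, the agent host can refuse to load any package whose signature does not verify:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )
    from cryptography.exceptions import InvalidSignature

    # Demo only: in practice the publisher signs offline and ships only the
    # public key, which the agent host pins ahead of time.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    package_bytes = b"...agent tool package contents..."
    signature = private_key.sign(package_bytes)

    def load_if_signed(package: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
        try:
            pub.verify(sig, package)   # raises InvalidSignature on tampering
            return True                # safe to load the package
        except InvalidSignature:
            return False               # reject: unsigned or modified code

    assert load_if_signed(package_bytes, signature, public_key)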
Conclusion:
Securing AI agents requires a tailored approach that addresses the unique challenges posed by their functionality. By implementing these nine best practices (secure communication channels, strong authentication and access controls, continuous monitoring and auditing, secure training and deployment environments, robust data privacy measures, explainability and transparency, tested incident response plans, secure logging and observability, and secure code signing with app whitelisting), organizations can enhance the security of their AI agents and protect against potential threats.
For security threats that still occur at runtime in production deployments, the Marqus AI platform is here to help enterprise security and SOC teams detect and address them.
10.10.2024
Agentic AI: A new paradigm
We believe we are witnessing the first chapter of the generative AI revolution. Late 2022 marked the mainstream debut of chatbots powered by large language models (LLMs). By 2023, generative AI use cases such as image generation, homework assistance, companionship, and research saw explosive growth. In 2024, the world witnessed the rise of Agentic AI—autonomous, intelligent agents capable of emulating full-time roles, such as sales development representatives (SDRs). These AI agents promised unprecedented productivity, creativity, and operational efficiency. By 2026, these systems will go beyond basic automation. They will deeply integrate with enterprise ecosystems, accessing sensitive data, authenticating users, performing multimodal tasks, and autonomously calling APIs to execute decisions across almost every function and department in an organization.
However, as the capabilities of Agentic AI expand, so too do the challenges and stakes of securing these systems. Below is an infographic from Norwest VP highlighting the coming challenges of securing AI agents.
Securing AI agents is complex and challenging, and will be unlike anything we've seen.
The leap from Generative AI to Agentic AI will fundamentally alter the cybersecurity landscape due to:
Expanded Attack Surface
Agentic AI interacts directly with enterprise systems, APIs, and user accounts. This extensive integration increases the potential points of entry for malicious actors.
Autonomous Decision-Making at Runtime
Unlike static models, Agentic AI can make decisions in real time. If compromised, an attacker could co-opt these capabilities to cause significant damage, such as executing fraudulent transactions or manipulating sensitive data.
Data Sensitivity
By design, Agentic AI has access to sensitive and proprietary enterprise information. Protecting this data from leaks, breaches, and unauthorized use is key to deploying safe agents.
Complex Behaviors
Agentic AI systems are dynamic and self-learning. Malicious actors could exploit vulnerabilities not present at deployment but introduced through future updates or evolving decision-making processes.
User Authentication and Authorization
Agentic AI agents often mimic human behaviors, making it challenging to distinguish legitimate activity from that of a compromised agent.
Existing security categories do NOT fully secure AI Agents.
Existing security vendors are eager to extend their solutions to Gen AI (LLMs), Agentic AI (LLM-based agentic applications), and any other technology that may come their way in the future. We've noticed incumbents adding "AI wrappers" to their existing solutions to sell to CISOs, and securing data through enhancements to DSPM in the form of AISPM (which is a good starting point), but the nature of AI agents fundamentally introduces new vulnerabilities that are hard to detect or protect against with existing solutions. This is primarily due to the nature of agentic AI vulnerabilities:
Unique Attack Vectors:
AI agents have distinct vulnerabilities, such as adversarial attacks (e.g., perturbations in inputs), model inversion attacks, or poisoning attacks, which many traditional security solutions are not designed to detect or mitigate.
Dynamic Nature of AI Agents:
AI agents often adapt and evolve over time. Static or rule-based security systems may struggle to keep up with these dynamic behaviors.
Complexity of AI Models:
AI models involve intricate neural networks that require specialized tools for monitoring and securing. Existing vendors may lack the expertise to inspect or secure these models.
Data Privacy Concerns:
AI agents often handle sensitive data, but traditional security tools might not adequately address the privacy implications of AI-driven data usage or storage.
Lack of Granular Explainability:
Many AI-related security incidents, such as biased decisions or rogue outputs, require an understanding of how models make decisions—something traditional security solutions might not offer.
Integration Challenges:
AI agents often integrate with multiple systems in ways that traditional security tools aren't designed to monitor holistically.
Marqus AI is purpose-built to protect against agentic AI threats. Grounded in our research published with OWASP and the CSA (Cloud Security Alliance), we are helping organizations secure their AI agents against ever-evolving security challenges through our suite of product offerings.