Security

Last Updated: July 21, 2025

Your security is our priority. Shape is built with privacy and security at its core.

  1. PRIVACY BY DESIGN. All changes you make in Shape are designed to remain private until you decide to share or deploy them. However, as with any AI platform, certain data may be processed for service delivery, safety monitoring, and system improvement. EXPERIMENTAL TECHNOLOGY NOTICE: Shape uses experimental AI systems that are continuously evolving, and our security measures are regularly updated to address emerging threats and vulnerabilities in AI systems. SENSITIVE ENVIRONMENT WARNING: We are still growing our product and maturing our security posture. If you work in a highly sensitive environment, exercise caution when using Shape (or any other AI tool). We hope this page provides insight into our progress and helps you make an informed risk assessment.
  2. DATA PROTECTION

2.1 Encryption Everywhere. (a) In Transit: All data transmission uses TLS 1.3 encryption, (b) At Rest: Your data is encrypted using industry-standard AES-256 encryption, (c) Processing: Sensitive operations use encrypted memory and secure enclaves.
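
For illustration only, the sketch below shows the kind of AES-256-GCM encryption commonly used for data at rest, using Python's third-party cryptography library. It is a minimal sketch under our own assumptions, not Shape's actual implementation; key management (KMS/HSM storage, rotation) is omitted.

    # Minimal sketch of AES-256-GCM at-rest encryption (illustrative only).
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    import os

    def encrypt_blob(key: bytes, plaintext: bytes, aad: bytes) -> bytes:
        nonce = os.urandom(12)  # 96-bit nonce, unique per message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

    def decrypt_blob(key: bytes, blob: bytes, aad: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, aad)

    key = AESGCM.generate_key(bit_length=256)  # in practice, held in a KMS/HSM
    blob = encrypt_blob(key, b"project data", b"workspace-id")
    assert decrypt_blob(key, blob, b"workspace-id") == b"project data"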

2.2 Data Processing Transparency. (a) Service Delivery: Data may be processed to provide AI features and platform functionality, (b) Safety Monitoring: Content may be analyzed for policy violations and harmful content, (c) Quality Assurance: Inputs and outputs may be reviewed for system improvement, (d) Legal Compliance: Data may be accessed for regulatory compliance and law enforcement.

  3. ACCESS CONTROLS. (a) Zero-Trust Architecture: Every request is authenticated and authorized, (b) Role-Based Permissions: Team members only access what they need for their role, (c) Multi-Factor Authentication: Available for all accounts with enforcement options, (d) Session Management: Automatic timeouts and secure session handling, (e) Admin Access Logging: All administrative access is logged and monitored.
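
As a sketch of what role-based permissions like (b) typically look like in code (the role names and permission strings here are invented for illustration, not Shape's actual model):

    # Hypothetical deny-by-default role check; names are illustrative.
    from enum import Enum

    class Role(Enum):
        VIEWER = frozenset({"project:read"})
        EDITOR = frozenset({"project:read", "project:write"})
        ADMIN  = frozenset({"project:read", "project:write", "team:manage"})

    def authorize(role: Role, permission: str) -> None:
        # Deny by default: raise unless the role explicitly grants the permission.
        if permission not in role.value:
            raise PermissionError(f"{role.name} lacks {permission}")

    authorize(Role.EDITOR, "project:write")    # allowed
    # authorize(Role.VIEWER, "project:write")  # would raise PermissionError
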
  4. DATA ISOLATION. (a) User Workspaces: Your projects are logically separated from those of other users, (b) Sandboxed Execution: All code execution happens in secure, isolated environments, (c) Network Segmentation: Critical systems are separated and monitored, (d) Geographic Isolation: Data processing occurs in controlled geographic regions.
  5. PLATFORM SECURITY

5.1 Local-First Architecture. (a) Device Processing: Many operations happen directly on your device, (b) Minimal Data Transfer: Only necessary data is transmitted to our servers, (c) Kill Switch: Instantly revoke access to shared changes at any time.
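
To make the kill switch in (c) concrete, here is a hypothetical sketch of the underlying idea: every access to a shared change is checked against a revocation list, so revoking takes effect on the next request. The token names and storage are invented for illustration and are not Shape's API.

    # Hypothetical kill-switch sketch: revocation checked on every access.
    SHARED_CHANGES = {"tok-123": "example diff contents"}  # placeholder store
    REVOKED: set[str] = set()  # in production, a persistent shared store

    def revoke(token: str) -> None:
        REVOKED.add(token)  # takes effect on the next access attempt

    def fetch_shared_change(token: str) -> str:
        if token in REVOKED:
            raise PermissionError("This share link has been revoked")
        return SHARED_CHANGES[token]

    revoke("tok-123")
    # fetch_shared_change("tok-123") would now raise PermissionError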

5.2 Infrastructure Security. (a) Cloud Security: Built on enterprise-grade cloud infrastructure, (b) Regular Updates: Automatic security patches and updates, (c) Monitoring: 24/7 security monitoring and threat detection, (d) Backup Systems: Redundant systems ensure data availability.

  6. AI SECURITY. (a) Model Isolation: AI processing is isolated from other user data where technically feasible, (b) Training Data Protection: User data is not used for AI training without explicit consent, (c) Secure Inference: AI features run in monitored environments with safety guardrails, (d) Data Minimization: AI features only access data necessary for the requested operation, (e) Bias Detection: Ongoing monitoring for AI bias and inappropriate outputs, (f) Content Filtering: Multi-layer safety systems to detect harmful content.
  7. EXPERIMENTAL TECHNOLOGY RISKS. (a) Evolving Security: As an experimental AI platform, security measures are continuously evolving, (b) Unknown Vulnerabilities: New AI-related attack vectors may emerge that we haven't yet addressed, (c) Model Limitations: AI models may behave unpredictably and produce unexpected outputs, (d) Training Data Risks: AI models may reflect biases or information from their training data.
  8. DEVELOPMENT SECURITY

8.1 Secure Development Practices. (a) Security by Design: Security considerations in every feature, (b) Code Reviews: All code changes reviewed for security implications, (c) Dependency Management: Regular security audits of third-party components, (d) Penetration Testing: Regular security assessments by external experts.

8.2 Code Generation Security. (a) Safe Output: Generated code follows security best practices, (b) Validation: All generated code is validated before delivery, (c) No Secrets: Generated code never includes hardcoded credentials, (d) Your Codebase: Integration respects your existing security patterns.
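
As one example of the "no secrets" rule in (c), generated code should read credentials from the environment rather than embedding them in source. A minimal pattern (the variable name here is illustrative, not one Shape emits):

    # Credentials come from the environment, never from source code.
    import os

    api_key = os.environ.get("SERVICE_API_KEY")  # illustrative variable name
    if api_key is None:
        raise RuntimeError("Set SERVICE_API_KEY before running")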

  9. BUSINESS CONTINUITY AND SERVICE LEVEL AGREEMENTS

9.1 Uptime Commitments. (a) Target Uptime: 99.5% monthly uptime (excluding scheduled maintenance), (b) Planned Maintenance: Scheduled during off-peak hours with 48-hour notice, (c) Emergency Maintenance: May occur without notice for security or stability issues, (d) Service Credits: Available for sustained outages exceeding our SLA thresholds.
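
For context, a 99.5% monthly uptime target permits roughly 3.6 hours of unplanned downtime per month, as this quick calculation shows (assuming a 30-day month; actual SLA accounting may differ):

    # Back-of-the-envelope downtime budget for 99.5% monthly uptime.
    minutes_per_month = 30 * 24 * 60              # 43,200 minutes
    allowed_downtime = minutes_per_month * 0.005  # the 0.5% that may be down
    print(f"{allowed_downtime:.0f} minutes (~{allowed_downtime / 60:.1f} hours)")
    # -> 216 minutes (~3.6 hours)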

9.2 Data Backup and Recovery. (a) Automated Backups: Daily encrypted backups of all user data, (b) Retention Period: 30 days of backup history, (c) Geographic Distribution: Backups stored in multiple geographic locations, (d) Recovery Time: Target recovery within 4 hours for critical service restoration.

9.3 Business Continuity. (a) Disaster Recovery Plan: Tested quarterly with documented procedures, (b) Redundant Systems: Critical infrastructure deployed across multiple availability zones, (c) Vendor Dependencies: Diversified cloud infrastructure to avoid single points of failure.

  10. DELAWARE AI COMMISSION COMPLIANCE

10.1 Regulatory Alignment. (a) Monitoring: Active monitoring of Delaware AI Commission recommendations under House Bill 333, (b) Implementation: Commitment to implementing appropriate AI safety measures as guidance develops, (c) Reporting: Ready to provide AI usage reports to regulatory bodies as required, (d) Safety Standards: Proactive implementation of AI safety best practices ahead of formal requirements.

10.2 AI Risk Management. (a) High-Risk Identification: Regular assessment of AI features for potential risks, (b) Safety Protocols: Implementation of safety measures for generative AI capabilities, (c) User Protection: Safeguards to prevent AI misuse and protect user rights, (d) Continuous Monitoring: Ongoing evaluation of AI system performance and safety.

  11. COMPLIANCE AND STANDARDS

11.1 Current Practices. (a) Security Frameworks: Following NIST Cybersecurity Framework and Delaware data protection guidelines, (b) Data Protection: GDPR and CCPA-ready privacy practices with attention to emerging AI regulations, (c) Industry Standards: Working toward SOC 2 Type I controls appropriate for SaaS providers, (d) Regular Audits: Internal and external security assessments with focus on AI system security, (e) AI Governance: Monitoring compliance with evolving AI regulations including EU AI Act considerations.

11.2 Security Maturity Journey. (a) Current Status: Early-stage startup with foundational security controls in place, (b) Ongoing Improvements: Continuously enhancing security posture as we grow, (c) External Validation: Working toward third-party security certifications, (d) Transparency: Regular updates on our security progress and challenges.

  12. INCIDENT RESPONSE

12.1 Prevention. (a) Threat Intelligence: Continuous monitoring for emerging threats, (b) Automated Detection: Real-time security event detection, (c) Proactive Monitoring: 24/7 security operations center, (d) Regular Drills: Incident response practice and preparation.

12.2 Response Process. (a) Immediate Containment: Isolate and contain any security incidents, (b) Assessment: Evaluate scope and impact of incidents, (c) Notification: Prompt communication with affected users, (d) Remediation: Fix vulnerabilities and strengthen defenses, (e) Post-Incident: Learn from incidents to improve security.

12.3 Communication. (a) Transparency: Clear communication about security issues, (b) Timely Updates: Regular status updates during incidents, (c) Post-Mortem: Detailed analysis and lessons learned, (d) Continuous Improvement: Security enhancements based on findings.

  13. YOUR ROLE IN SECURITY

13.1 Best Practices. (a) Strong Passwords: Use unique, complex passwords, (b) Enable 2FA: Add an extra layer of account protection, (c) Regular Reviews: Monitor your account activity regularly, (d) Secure Environment: Keep your devices and browsers updated, (e) Report Issues: Contact us immediately if you notice anything suspicious.
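
For the curious, the 2FA codes in (b) are typically time-based one-time passwords (TOTP). The sketch below shows the mechanism using the third-party pyotp library; it illustrates the standard, not Shape's enrollment flow.

    # TOTP mechanism behind most authenticator-app 2FA (illustrative).
    import pyotp

    secret = pyotp.random_base32()  # shared once with the user's authenticator app
    totp = pyotp.TOTP(secret)
    code = totp.now()               # 6-digit code that rotates every 30 seconds
    assert totp.verify(code)        # server-side check at login time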

13.2 Account Security. (a) Unique Credentials: Don't reuse passwords from other services, (b) Team Access: Regularly review team member permissions, (c) Session Management: Log out when using shared devices, (d) Data Backup: Keep local backups of critical work.

  14. TRANSPARENCY

14.1 Security Updates. (a) Regular Reports: Quarterly security updates and improvements, (b) Vulnerability Disclosure: Responsible disclosure of security issues, (c) Open Communication: Direct line to our security team, (d) Community Feedback: We welcome security feedback from users.

  15. CONTACT OUR SECURITY TEAM. For security issues or questions: Email: security@autoshape.ai. Platform: Tag @security within Shape. Security Vulnerability Disclosure: If you discover a security vulnerability, please report it responsibly to security@autoshape.ai.
  16. RISK ASSESSMENT GUIDANCE. For highly sensitive environments, consider: (a) conducting your own security assessment before adoption, (b) implementing additional data protection measures, (c) reviewing our security practices against your requirements, (d) starting with non-sensitive use cases to evaluate our platform. We recommend particular caution when using Shape (or any AI tool) with highly confidential business information, personally identifiable information (PII), regulated data (HIPAA, PCI-DSS, etc.), or intellectual property requiring maximum protection.

Security is an ongoing commitment. We continuously invest in improving our security posture and protecting your data. As an experimental AI platform, we're transparent about our current capabilities and limitations. Questions? Contact our security team anytime at security@autoshape.ai.