My Organization's security concerns

Thank you for taking a moment to review this message. I am a warehouse manager at a large building systems and HVAC company in northeast Ohio. One of my main goals has been to streamline and organize a large, disparate inventory spread across multiple locations, jobs, technicians, and service vehicles. I have built a solid foundation for using AI in my day-to-day tasks, and I have worked with AI to develop a workflow and a series of prompts ready for agentic coding of an app that would let me build this inventory tracking system. Everyone was excited about the prospects for this system, but the company's cybersecurity team has made it clear that they do not want an AI-built application in this environment because of a devastating security breach several years ago. Any suggestions? Thank you.

Hi Douglas, welcome :waving_hand:
I think the best approach is to start small and security-first. Instead of presenting it as a fully AI-controlled system, position it as an internal assistant tool with limited access and human oversight. You could begin with non-sensitive/anonymized data, role-based access controls, audit logs, and approvals, and add AI features like inventory search, summaries, and recommendations later. Also, involving the cybersecurity team early and building around their requirements will make adoption much easier.
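To make ideas like role-based access control and audit logging concrete, here is a minimal sketch. Python is just an illustrative choice, and the role names, permissions, and `adjust_stock` handler are all hypothetical, not part of Douglas's spec:

```python
from functools import wraps

# Hypothetical role -> permission mapping for the inventory app.
ROLES = {
    "viewer": {"read"},
    "technician": {"read", "update"},
    "manager": {"read", "update", "approve"},
}

AUDIT_LOG = []  # in production this would be an append-only local log file


def audit(user, action, target):
    """Record every attempted action, allowed or denied."""
    AUDIT_LOG.append({"user": user, "action": action, "target": target})


def requires(permission):
    """Decorator enforcing role-based access before any handler runs."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, role, *args, **kwargs):
            if permission not in ROLES.get(role, set()):
                audit(user, f"DENIED:{permission}", fn.__name__)
                raise PermissionError(f"role {role!r} may not {permission}")
            audit(user, permission, fn.__name__)
            return fn(user, role, *args, **kwargs)
        return wrapper
    return decorator


@requires("update")
def adjust_stock(user, role, item, delta):
    # Placeholder for a real inventory write; every call above is audited.
    return f"{item} adjusted by {delta}"
```

The point for a wary security team is that every action, including denied ones, leaves a local audit trail, and permissions are enforced in code rather than by convention.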

This is the spec in the prompt package I submitted for review:

AUTHORITATIVE SPECIFICATION — PHASE 1 & PHASE 2

This document is the authoritative specification for Phase 1 and Phase 2.

Please implement exactly what is defined here.

If a behavior is not explicitly described, do not invent it—ask.

Additionally, it is unacceptable for this system to access or use any data, information, or software that is not specifically developed by or entered into this system.

================================================

CYBERSECURITY GUARDRAILS (NON‑NEGOTIABLE)

================================================

- Do NOT access the public internet or external APIs.

- Do NOT access enterprise systems, file shares, email, ERP, Teams/SharePoint, or cloud services.

- Do NOT transmit, export, or train on data outside this application.

- No telemetry, analytics beacons, crash reporting, or external logging.

- No external network calls at runtime.

- Application must run fully offline within the internal environment.

- Use ONLY data entered by users or generated internally.

- Use ONLY dependencies explicitly approved.

  • If a dependency is not built‑in to the chosen runtime, list it and ask for approval before coding.

- All data must remain in the local application database and local logs only.
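One way to turn the "no external network calls" rule from a promise into something a security team can verify is a runtime guard that patches the socket layer so any non-loopback connection fails loudly. A minimal sketch, assuming a Python runtime; the loopback allowlist is an assumption so the app can still reach its own local database:

```python
import socket

class NetworkBlockedError(RuntimeError):
    """Raised whenever code attempts an outbound network connection."""

# Keep a reference to the real connect so loopback traffic still works.
_real_connect = socket.socket.connect

def _blocked_connect(self, address):
    host = address[0] if isinstance(address, tuple) else address
    # Allow loopback only, so the app can talk to its local database/server.
    if host in ("127.0.0.1", "localhost", "::1"):
        return _real_connect(self, address)
    raise NetworkBlockedError(f"outbound network call blocked: {address!r}")

def install_network_guard():
    """Patch the socket class so non-loopback connections raise immediately."""
    socket.socket.connect = _blocked_connect
```

Installing this at application startup (and asserting it in the test suite) gives reviewers concrete, reproducible evidence of offline-only behavior rather than a prompt-level promise. A hardened deployment would enforce the same rule at the OS or firewall level as well; this guard is defense in depth, not a substitute.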

================================================

DEPLOYMENT ACCESS (LOCKED)

================================================

- The application is hosted internally (on‑prem server or internal workstation).

- Users may access the application “from anywhere” ONLY via approved secure access:

  • Corporate VPN, or

  • Corporate Zero‑Trust application gateway.

- The application itself must not depend on any external internet services.

- The application must not be exposed as a public internet site.

- No functionality may rely on public DNS, public APIs, or cloud infrastructure.

Hi Douglas, this is actually a very strong, professional, security-first specification. I think your approach will help address many common cybersecurity concerns because you’ve clearly defined strict offline operation, zero external access, local-only data handling, approved dependencies, and controlled VPN/Zero-Trust access. Positioning the system as an isolated internal application rather than an internet-connected AI platform is likely the right direction for gaining the security team’s trust. The level of detail here shows you’re taking enterprise security and data governance seriously.

It is no surprise that the cybersecurity team would be hesitant to accept the AI prompt text as proof that the AI will actually comply with those restrictions.

Why would the AI not comply? Do different models have different compliance issues?

What proof can you offer that it will comply? The cybersecurity team has no reason to trust its behavior.

I am unable to offer any proof and I am certainly not experienced enough to offer any examples. That is what I am trying to learn here. So my question is what makes AI so inherently untrustworthy in these situations?

Essentially, it’s a language model, not a thinking machine.
Its behavior is not entirely predictable, by design.

So you would have to understand the coding language in order to have AI code an app, and then go through and check it for vulnerabilities? I just had a meeting with the IT department, and their position is that AI is going to do what AI is going to do. Even with the guardrails in the prompt package, they are unwilling to let me go ahead with this project and would rather I explore commercially available software. I guess I should be happy that my company is willing to spend the money for this project, but I was excited to develop it myself and have it do exactly what I wanted, in the way I wanted it done.

I work in an industry that is extremely cautious when it comes to safety and security.

Unless you performed a detailed software validation process against the application requirements, you would never know what pitfalls the AI-generated software might contain.

Hey Douglas! Your spec already demonstrates a solid security mindset: offline-only deployment, VPN/Zero-Trust access, and no external API calls are exactly the kind of controls that should resonate with a cautious security team.

A couple of areas worth strengthening before your next conversation with them:

  • Prompt injection is worth explicitly addressing in your documentation. Since the app will process user-entered text through an AI model, a malicious or careless input could potentially manipulate the model’s behavior in unintended ways, even in an offline environment. Showing the team you’re aware of this and have mitigations planned goes a long way.
  • Frame everything around the CIA Triad: Confidentiality, Integrity, and Availability. Security teams think in these terms, and mapping your proposal to each pillar makes it far easier for them to evaluate and approve.
  • Add a Business Continuity plan: Given the company had a serious breach before, they’ll want to know what the fallback looks like if this system fails or is compromised. Even a basic answer to “how do we operate without it?” dramatically reduces perceived risk.
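On the prompt-injection point above, one common mitigation is to never act on the model’s raw text: have the model emit a structured action, then validate it against a whitelist before anything executes. A minimal sketch; the action names and schema are hypothetical, not from Douglas’s spec:

```python
import json

# Whitelist of actions the application is allowed to perform on the
# model's behalf. Anything else is rejected, no matter what the model says.
ALLOWED_ACTIONS = {"search_inventory", "summarize_item", "suggest_reorder"}


def validate_model_action(raw_output: str) -> dict:
    """Parse model output and reject anything outside the allowed schema.

    The model's text is never executed or interpreted directly; only a
    validated {"name": ..., "args": {...}} action is passed on to real code.
    """
    try:
        action = json.loads(raw_output)
    except json.JSONDecodeError:
        raise ValueError("model output was not valid JSON")
    if action.get("name") not in ALLOWED_ACTIONS:
        raise ValueError(f"action not in whitelist: {action.get('name')!r}")
    if not isinstance(action.get("args"), dict):
        raise ValueError("action args must be an object")
    return action
```

With this pattern, even a successful injection can only request actions you already decided were safe, which is exactly the kind of bounded failure mode a security team can reason about.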

Finally, rather than pitching the full system upfront, consider proposing a sandboxed pilot: one location, limited data scope, fully auditable. Let them stress-test it. A security team that’s been burned before will trust incremental evidence far more than a comprehensive proposal. From what I’ve seen, your foundation is strong; it’s mostly about speaking their language now. Good luck!