An entirely different strategy.
Our hardening comes from deploying locally, behind each site's existing firewall: a decentralized approach rather than one based in the cloud. This also reduces response latency for our users, since no internet exchange occurs between our AI and its users; campus access runs over the intranet, on the local Wi-Fi network. For users who need campus support, this is lightweight and secure, and they can reliably access their personal data through a natural language interface. Even when connecting from home, traffic terminates at a local point of presence. Deployed at scale, attackers would need to physically infiltrate thousands of unique machines and human administrators. It is much easier for a black hat to go after centrally deployed chatbot services in the "cloud", where a single breach gives the biggest yield, wouldn't you say? For instance, Twitter was recently compromised through a social engineering attack of a kind we would not be subject to.
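As a minimal sketch of the intranet-only model described above, an application-layer check could refuse any client that is not on a private (RFC 1918) or loopback address before touching personal data. The function name here is illustrative, not taken from our codebase, and real deployments would pair this with firewall rules rather than rely on it alone:

```python
import ipaddress

def is_intranet_client(client_ip: str) -> bool:
    """Return True only for private (RFC 1918) or loopback addresses,
    i.e. clients reaching us over the local campus network."""
    addr = ipaddress.ip_address(client_ip)
    return addr.is_private or addr.is_loopback

# A request handler could reject anything else up front.
print(is_intranet_client("192.168.1.42"))  # True: campus Wi-Fi client
print(is_intranet_client("8.8.8.8"))       # False: public internet
```

Because the service binds to the local network, such a check is a belt-and-suspenders measure; the firewall remains the primary barrier.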