Technical datasheet

Technology
Underlying technology C#, Python, C++
User interface Web based
Operation mode Windows service
Supported model file formats .GGUF
Supported models User's hardware is the only limiting factor
Connectivity
Application connectivity (API) ChatGPT, Microsoft Copilot
Application connectivity Database (SQL), Files, E-mail (POP3, SMTP, MAPI), Web services (HTTP/REST API)
Features
GPU layer offloading Users set how many model layers are offloaded to the GPU, controlling how heavily their hardware is utilized
Logging Each loaded model and its agents are logged individually
Train with local knowledge Bots trained on local knowledge consistently give answers in line with your company's image and needs
Train specialist bots Marketing, Sales, Support, and HR bots with specialized knowledge give the most accurate answers possible
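The GPU layer offloading setting above can be illustrated with a short sketch. It assumes a llama.cpp-style runtime where a layer count is passed to the loader; the helper name, the per-layer VRAM figure, and the layer counts are illustrative assumptions, not measured values for any specific model.

```python
def choose_gpu_layers(total_layers: int, vram_gb: float, gb_per_layer: float = 0.35) -> int:
    """Pick how many model layers fit in VRAM; the rest stay on the CPU.

    gb_per_layer (~0.35 GB) is a rough per-layer footprint for a quantized
    mid-size model -- an illustrative assumption, not a measured value.
    """
    if gb_per_layer <= 0:
        raise ValueError("gb_per_layer must be positive")
    fits = int(vram_gb // gb_per_layer)  # whole layers the VRAM budget can hold
    return max(0, min(total_layers, fits))

# A 32-layer model: partial offload on an 8 GB card, full offload on 24 GB.
print(choose_gpu_layers(32, 8.0))
print(choose_gpu_layers(32, 24.0))
```

The returned count would then be handed to the model loader; everything above it runs on the CPU, trading speed for lower VRAM use.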
Supported AI Agents
Chatbot Engages in real-time conversations with colleagues
E-mail assistant Automatically suggests replies to emails based on context
Web assistant Summarizes website content and analyzes it for SEO
Voice assistant Responds to voice commands for hands-free operation
Phone assistant Handles customer inquiries over the phone
SMS system Sends and receives SMS messages based on contextual information
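Replies sent through the SMS system must respect per-message character limits, which the datasheet notes elsewhere (160 characters per segment). The sketch below counts message segments under the standard 160/153 (7-bit) and 70/67 (UCS-2) limits; treating "ASCII" as the 7-bit alphabet is a simplification, since the real GSM-7 character set differs slightly.

```python
def sms_segments(text: str) -> int:
    """Count SMS segments, assuming GSM-7 for plain ASCII text and UCS-2 otherwise.

    Simplified sketch: real GSM-7 covers a slightly different character set,
    and some characters cost two septets.
    """
    if text.isascii():
        single, multi = 160, 153   # GSM-7: 160 alone, 153 per concatenated part
    else:
        single, multi = 70, 67     # UCS-2: 70 alone, 67 per concatenated part
    n = len(text)
    if n <= single:
        return 1 if n > 0 else 0
    return -(-n // multi)          # ceiling division over concatenated parts

print(sms_segments("x" * 160))  # fits in a single GSM-7 message
print(sms_segments("x" * 161))  # spills into concatenated segments
```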
AI Gateway
E-mail to AI Incoming emails are routed to an AI, which can reply automatically or draft a suggested reply and forward it to the original recipient
Phone call to AI Incoming phone calls are accepted, listened to and answered by AI, using speech-to-text and text-to-speech technology
SMS to AI Incoming text messages are read and answered by AI, while complying with character limits (e.g. 160 characters with GSM-7 encoding)
WhatsApp to AI Connected via WhatsApp API key. Messages are read, comprehended and answered by AI
MS Teams to AI Connected via MS Teams API key
Chat to SQL database Enter a query over chat, and the bot responds based on knowledge it finds in the database
Chat to TXT files Store, send, modify information in TXT files with simple chat prompts
Chat to HTTP/REST web service Create your own interface to chat with your model
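A custom HTTP/REST chat interface boils down to posting a JSON request and reading a JSON reply. The field names ("model", "prompt", "reply") and the endpoint in the comment are assumptions for illustration, not a documented API of this product.

```python
import json

def build_chat_request(prompt: str, model: str = "local-gguf") -> bytes:
    # Hypothetical request shape; "model" and "prompt" are assumed field names.
    return json.dumps({"model": model, "prompt": prompt}).encode("utf-8")

def parse_chat_response(raw: bytes) -> str:
    # Hypothetical response shape; "reply" is an assumed field name.
    return json.loads(raw.decode("utf-8"))["reply"]

body = build_chat_request("Summarize today's support tickets.")
# In practice, this body would be POSTed with urllib.request to a local
# endpoint such as http://localhost:8080/chat (assumed address); here we
# just round-trip a mock response to show the shapes involved.
mock = json.dumps({"reply": "3 open tickets, 1 escalated."}).encode("utf-8")
print(parse_chat_response(mock))
```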
Deployment
Locally Runs directly on the user's hardware
Online Connects to an online, external service such as ChatGPT or Microsoft Copilot via an API key
vLLM Runs on a private local Linux server or a VPS
AI pipeline Combines the power of multiple AI models by using one model's output as another's input
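The AI pipeline idea above can be sketched as a chain of callables, where each stage's output feeds the next stage. The two stage functions below are stand-ins for real models, and their names and behavior are purely illustrative.

```python
from typing import Callable, List

def run_pipeline(stages: List[Callable[[str], str]], text: str) -> str:
    # Feed each stage's output into the next stage -- the essence of a pipeline.
    for stage in stages:
        text = stage(text)
    return text

def summarizer(text: str) -> str:
    # Placeholder for model #1: keep only the first sentence.
    return text.split(".")[0] + "."

def translator(text: str) -> str:
    # Placeholder for model #2: mark the text as translated.
    return f"[DE] {text}"

print(run_pipeline([summarizer, translator], "Sales rose 12%. Costs fell."))
```

In a real deployment each stage would be an inference call to a loaded model rather than a string function, but the control flow is the same.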
Minimum System Requirements
Intel CPU Core i3-2100 / Pentium 7505 / Celeron 6305
AMD CPU FX-4100
GPU Must support OpenGL 2.1 or Direct3D 11/12
RAM 8 GB (for models with 3 billion parameters)
Operating System Windows 10
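The "8 GB for 3-billion-parameter models" figure can be sanity-checked with back-of-the-envelope arithmetic: a ~4-bit quantized GGUF model needs roughly half a byte per weight, plus runtime overhead, leaving the rest of the 8 GB for the operating system. The constants below are illustrative assumptions, not measured values.

```python
def model_ram_gb(params_billion: float, bytes_per_weight: float = 0.5,
                 overhead_gb: float = 1.0) -> float:
    """Rough RAM footprint of a quantized GGUF model.

    bytes_per_weight of ~0.5 corresponds to ~4-bit quantization; overhead_gb
    stands in for KV cache and runtime buffers. Both are assumptions.
    """
    return params_billion * bytes_per_weight + overhead_gb

# A 3B-parameter model: ~1.5 GB of weights plus overhead, well under 8 GB.
print(model_ram_gb(3))
```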
Recommended System Requirements
Intel CPU Core i7-10700
AMD CPU Ryzen 5 3600
GPU NVIDIA RTX 3090
RAM 16 GB
Operating System Windows 11