
Glossary

This glossary explains common AI terms used throughout the documentation, as well as the main platform terms used in SkyPath.

INFO

Some terms in this glossary describe general AI industry concepts. Those definitions are included to help users understand the ideas behind private AI, retrieval workflows, and document-based response generation.

General AI Terms

AI Model

An AI model is the software system that generates text, images, summaries, and other outputs based on the instructions and context it receives.

Completion

A completion is the output returned by an AI model after it receives a prompt. In practice, a completion can be an answer, a summary, a proposal draft, an image, or another generated result.

Context

Context is the information provided to the AI system along with the prompt. This can include user instructions, retrieved document content, source document text, or project-specific guidance.

Context Window

The context window is the amount of text or information a model can consider in a single request. If too much information is supplied, some of it may need to be reduced, summarized, or selected more carefully.

Embedding

An embedding is a numerical representation of text or other content that helps a system compare similarity between pieces of information. Embeddings are commonly used in retrieval and semantic search systems.
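The similarity comparison between embeddings is most often done with cosine similarity. The sketch below illustrates the idea in plain Python with toy 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions, and the vector values here are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Score how similar two embedding vectors are; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real ones are much longer).
doc_vector = [0.2, 0.8, 0.1]
query_vector = [0.25, 0.75, 0.05]
score = cosine_similarity(doc_vector, query_vector)  # close to 1.0: very similar
```

A retrieval system computes this score between a query embedding and many stored document embeddings, then keeps the highest-scoring matches.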

Fine-Tuning

Fine-tuning is the process of training a model further on additional examples so it behaves in a more specialized way. In privacy-focused systems, users often want assurance that their prompts and documents are not being used for fine-tuning.

Grounding

Grounding means giving the AI system reliable source material so its answer is based on real supporting information instead of only its general model knowledge.

Hallucination

A hallucination is an AI output that sounds correct but is unsupported, incorrect, or invented. Good instructions, relevant context, and careful review help reduce hallucinations.

Inference

Inference is the process of running a prompt through an AI model to generate an output. In user-facing tools, inference is what happens when you click a button to generate an answer, proposal, summary, or image.

Large Language Model (LLM)

A large language model, or LLM, is an AI model designed to understand and generate text. LLMs are commonly used for chat, summarization, question answering, drafting, and other language-based tasks.

Prompt

A prompt is the instruction or input sent to an AI system. A prompt may include the user’s question, formatting instructions, document context, and project-specific guidance.

Prompt Engineering

Prompt engineering is the process of adjusting prompts to improve output quality, structure, tone, or detail. This can include changing wording, adding instructions, or clarifying constraints.

RAG (Retrieval-Augmented Generation)

RAG is a method that combines retrieval and generation. The system first retrieves relevant content from a document library or search layer, then supplies that content to the AI model so the generated response is better grounded in your own materials.
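The retrieve-then-generate flow can be sketched in a few lines. This is a simplified illustration only: it uses naive word overlap in place of real embedding-based retrieval, and the final model call is omitted since it depends on the platform.

```python
def retrieve(question, documents, top_k=2):
    """Rank stored passages by word overlap with the question.

    Stand-in for real embedding-based retrieval.
    """
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_grounded_prompt(question, passages):
    """Combine the retrieved passages and the question into one grounded prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer the question using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

docs = [
    "Refunds are issued within 30 days of purchase.",
    "Our support team is available on weekdays.",
    "Invoices are sent monthly by email.",
]
question = "When are refunds issued?"
prompt = build_grounded_prompt(question, retrieve(question, docs, top_k=1))
# The prompt now contains the refund passage, so the model's answer
# can be grounded in that material rather than general knowledge.
```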

Retrieval

Retrieval is the process of finding relevant content from stored documents or other data sources before asking the AI model to generate a response.

Semantic Search

Semantic search looks for content based on meaning rather than only exact keyword matches. This helps a system find relevant passages even when the wording is not identical to the search query.

Stateless Model

A stateless model processes each request independently rather than relying on long-term remembered state inside the model itself. In privacy-focused workflows, this helps reduce unnecessary persistence of user prompts and context.

Stored Completion

A stored completion is a generated output that is saved by an application after the model returns it. This is different from a model remembering the prompt. An application may save generated results or chat history in its own database even when the model itself is stateless.

Temperature

Temperature is a model setting that influences how predictable or creative the output is. Lower temperature usually produces more consistent output, while higher temperature can increase variety or creativity.
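Under the hood, temperature rescales the model's raw scores (logits) before they are turned into sampling probabilities. The sketch below shows that effect with invented logit values: a lower temperature concentrates probability on the top choice, while a higher temperature flattens the distribution.

```python
import math

def temperature_probs(logits, temperature):
    """Convert raw model scores into sampling probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]                  # toy scores for three candidate tokens
low = temperature_probs(logits, 0.5)      # sharper: top token dominates
high = temperature_probs(logits, 2.0)     # flatter: more variety in sampling
```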

Token

A token is a unit of text a model uses internally when reading input and producing output. Token limits affect how much text can be processed in a single request.
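A common rule of thumb for English text is roughly four characters per token, though the exact count depends on the model's tokenizer. A minimal estimator, useful only for planning how much text fits in a request:

```python
def rough_token_estimate(text):
    """Estimate token count using the ~4 characters-per-token rule of thumb.

    The real count depends on the model's tokenizer, so treat this as a
    planning estimate, not an exact figure.
    """
    return max(1, round(len(text) / 4))
```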

Vector Database

A vector database is a storage system designed to hold embeddings and retrieve content based on similarity. Vector databases are often used in AI retrieval systems to help find document passages related to a question or prompt.
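Conceptually, a vector database stores (content, embedding) pairs and answers "which stored items are most similar to this query vector?" The toy class below (an illustrative sketch, not a production design) shows that core behavior with brute-force search; real vector databases use specialized indexes to do this at scale.

```python
import math

class TinyVectorStore:
    """Minimal in-memory vector store: holds (text, embedding) pairs and
    returns the entries most similar to a query embedding."""

    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def add(self, text, vector):
        self.entries.append((text, vector))

    def search(self, query_vector, top_k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) *
                          math.sqrt(sum(x * x for x in b)))
        ranked = sorted(self.entries,
                        key=lambda e: cosine(query_vector, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:top_k]]

store = TinyVectorStore()
store.add("pricing sheet", [0.9, 0.1])     # toy 2-dimensional embeddings
store.add("security policy", [0.1, 0.9])
results = store.search([0.85, 0.2], top_k=1)  # query closest to "pricing sheet"
```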

WYSIWYG Editor

WYSIWYG stands for “What You See Is What You Get.” A WYSIWYG editor lets users edit formatted content visually instead of working directly in markup or code.

Platform Terms

Collection

A collection is a grouped set of Knowledgebase documents and websites. Collections are used to organize supporting material by subject, product, service line, or opportunity type so the right context can be attached to the right project.

Confidence Score

A confidence score is the system’s estimate of how reliable a generated answer is. In the Q&A workflow, confidence scores help users decide which answers may need more review.

Corporate Account

A Corporate account is intended for organizations that need invited users, shared access, and more controlled collaboration across projects and collections.

Document Content Search

Document Content Search is the feature that searches the Knowledgebase for relevant matches and returns documents or locations where the requested content appears.

Extracted Instructions

Extracted instructions are response requirements, constraints, or guidance pulled automatically from a source document and stored in the project instructions area.

Individual Account

An Individual account is intended for a single user who does not need to share projects, collections, or related assets with a team.

Knowledgebase

The Knowledgebase is the platform’s library of uploaded documents and websites. It provides the supporting context used by search, chat, summarization, answer generation, proposal generation, and other AI tools.

Primary Document

The primary document is the main source file uploaded to a project for response work. This is typically an RFI or RFP that the user needs to answer.

Processing Method

The processing method is the selected response workflow for a source document. The platform supports the Q&A Method and the Proposal Method.

Profile Background Generator

The Profile Background Generator is the AI tool used to create company background documents, capability statements, or personnel profile content using available collection context.

Project

A project is the main workspace for one opportunity, bid, contract response, or proposal effort. It groups together the source documents, collections, instructions, and generated outputs for that activity.

Project Instructions

Project instructions are the stored responder guidance used to shape AI outputs. These instructions can be entered manually or extracted from the source document.

Project Wizard

The Project Wizard is the guided setup flow that helps users create a project, upload the main document, add or select collections, and review the setup before processing begins.

Proposal Method

The Proposal Method is the workflow used when a source document should be evaluated as a whole and answered with one complete narrative response.

Q&A Method

The Q&A Method is the workflow used when a source document should be broken into individual questions, requirements, or response items that are answered one at a time.

Required Indicator

A required indicator marks an extracted question or response item that appears to need a mandatory response.

Secure Chat

Secure Chat is the platform’s conversational AI interface for private, context-aware chat. Users can ask direct questions, draft short content, and optionally include collections as supporting context.

Source Document

A source document is the uploaded file that the platform processes as the main item to be answered. Source documents are usually RFIs, RFPs, or other response-driven files.

Supporting Documents

Supporting documents are files added to collections to help the AI tools generate grounded output. These can include product guides, pricing sheets, company information, technical references, statements of work, and similar materials.

Summarize Documents

Summarize Documents is the AI tool that creates a shorter summary of a project or collection document based on the summarization options selected by the user.

Download Merge with Original

Download Merge with Original is the Q&A export option that attempts to place generated answers back into the original uploaded source document.

Quick Actions Toolbar

The Quick Actions toolbar is the area of the platform that gives users fast access to commonly used AI tools such as presentation generation, summarization, image generation, and related features.

SAM.gov Public Bid Search

SAM.gov Public Bid Search is the feature that helps users search open public contract opportunities and turn relevant opportunities into new projects.

For more detailed workflow examples, continue to the System Overview, Projects, Building Your Knowledgebase and Collections, Working with Q&A Responses, and Generating Proposals pages.

SkyPath AI, Privata, and the SkyPath AI logo are trademarks of SkyPath AI.