
COGNITIVE OPERATING SYSTEMS

LEARNING ENGINE

The core of the FOUNDATION Platform COS is the Learning Engine (LE), which consists of a Linguistic Engine and a Cognitive Engine. The LE processes and analyzes normalized input data through a number of steps, including symbolizing and cognitive processing. It also uses a set of memories to stage, abstract, and compare new events against historic events and previously abstracted activity models.

The plasticity and stability of the LE’s learning model are not bound to a specific model type, and the LE will adaptively create, modify, reinforce, and decay the models that best represent the input data. It is also capable of building models from composite data sources spanning disparate data types for multi-sensor fusion. Nonparametric algorithms such as those developed by FOUNDATION Platform can describe many types of stochastic models.
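The create, reinforce, and decay behavior can be pictured with a minimal sketch. The class name, decay rate, and pruning threshold below are illustrative assumptions, not the platform’s actual API.

```python
from collections import defaultdict

class ModelStore:
    """Illustrative store of learned models whose weights are reinforced
    when re-observed and decayed otherwise (not the platform's actual API)."""

    def __init__(self, decay_rate=0.01, prune_threshold=0.05):
        self.weights = defaultdict(float)   # model id -> strength
        self.decay_rate = decay_rate
        self.prune_threshold = prune_threshold

    def observe(self, model_id, reinforcement=1.0):
        # Create the model on first sight, reinforce it on every re-observation.
        self.weights[model_id] += reinforcement

    def decay(self):
        # Periodically weaken all models; prune those that fall below threshold.
        for model_id in list(self.weights):
            self.weights[model_id] *= (1.0 - self.decay_rate)
            if self.weights[model_id] < self.prune_threshold:
                del self.weights[model_id]
```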

Unlike rules-based systems that require configuration, the FOUNDATION Platform Learning Engine makes no assumptions about the underlying data distribution and provides high-performance models for anomaly detection.

LINGUISTIC ENGINE

The Intellective AI COS uses a patented AI method called “cognitive neuro-linguistic event recognition.” [1-3] The platform utilizes a linguistic model to automatically characterize and describe events seen in the incoming input data. The linguistic model representation is illustrated in Figure 6. Objects and events seen in the incoming data stream are abstracted into a hierarchical collection of descriptors that characterize the input data. The lower-level, more primitive elements are defined as alpha symbols. Collections of alphas are grouped into beta symbols as higher levels of abstraction are defined. Finally, collections of betas are grouped into gamma symbols.

Figure 6: Linguistic Representation

This approach dynamically generates a lexicon, or dictionary of feature symbols, based on the statistical distribution of the symbols identified in the specific input data. Using these feature symbols, the Linguistic Engine generates a linguistic model based on the probability of each symbol occurring in relation to the others. The result is a feature syntax, which yields a Cognitive AI System able to learn, identify, and recognize complex patterns without the aid or guidance of predefined activities. The resulting collection of higher-order gammas becomes a highly efficient encoding of memories of patterns observed in the past, and the collection of neuro-linguistic representations can be used to perform real-time pattern analysis and characterization of new incoming input data.
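As a rough illustration of the alpha/beta/gamma abstraction, the sketch below quantizes primitive values into alphas, groups co-occurring alphas into betas, and groups recurring beta sequences into gammas. The function names, bin counts, and thresholds are assumptions for illustration only; the platform’s patented method is not reproduced here.

```python
from collections import Counter
from itertools import combinations

def build_alphas(samples, bins=10):
    """Quantize primitive feature values into alpha symbols (illustrative)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    return [f"a{int((v - lo) / width)}" for v in samples]

def build_betas(alpha_frames, min_count=2):
    """Group alphas that co-occur within a frame into beta symbols."""
    pair_counts = Counter()
    for frame in alpha_frames:
        for pair in combinations(sorted(set(frame)), 2):
            pair_counts[pair] += 1
    return {pair: f"b{i}" for i, (pair, n) in enumerate(pair_counts.items()) if n >= min_count}

def build_gammas(beta_sequence, length=3):
    """Group recurring sequences of betas into gamma symbols."""
    grams = Counter(tuple(beta_sequence[i:i + length])
                    for i in range(len(beta_sequence) - length + 1))
    return {gram: f"g{i}" for i, (gram, n) in enumerate(grams.items()) if n > 1}
```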

COGNITIVE OPERATING SYSTEM

FEATURE ANALYSIS COMPONENT

The Feature Analysis Component (FAC) retrieves the normalized vectors of input from the Sensory Memory. The FAC may then perform analyses such as, but not limited to, time series analysis, and applies encoding to the input. Finally, the FAC stages the processed input data into the pipeline to the GPU for analysis by the Symbol Analysis Component.
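A minimal sketch of this staging step, assuming simple rolling-mean and first-difference features; the function name, window size, and feature choices are illustrative, not the platform’s actual encoding.

```python
import numpy as np

def encode_features(normalized_vectors, window=8):
    """Illustrative FAC-style pass: derive simple time-series features
    (rolling mean and first difference) and stack them for downstream
    symbolization (assumed encoding, not the platform's API)."""
    x = np.asarray(normalized_vectors, dtype=np.float32)    # shape: (time, features)
    deltas = np.diff(x, axis=0, prepend=x[:1])               # frame-to-frame change
    kernel = np.ones(window, dtype=np.float32) / window
    rolling = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, x)
    return np.concatenate([x, deltas, rolling], axis=1)      # staged for the SIAC
```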

SYMBOL ANALYSIS COMPONENT

The Symbol Analysis Component (SIAC) symbolizes the data from the FAC based on values occurring repeatedly in association with one another. The SIAC generates a set of feature-symbols for each input feature from the FAC. The SIAC learns as it goes and may identify new feature-symbols, decay feature-symbols, and reinforce feature-symbols in real time.
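The following sketch shows one plausible way such online symbolization could work, using nearest-centroid matching with a creation radius and strength decay; the class, parameters, and update rule are assumptions, not the platform’s algorithm.

```python
import numpy as np

class SymbolTable:
    """Illustrative online symbolizer: each symbol is a centroid over feature
    values; new symbols are created when no centroid is close enough, and
    symbol strengths are reinforced or decayed over time (assumed design)."""

    def __init__(self, radius=0.5, decay=0.001):
        self.centroids, self.strength = [], []
        self.radius, self.decay = radius, decay

    def symbolize(self, vector):
        vector = np.asarray(vector, dtype=np.float32)
        if self.centroids:
            dists = [np.linalg.norm(vector - c) for c in self.centroids]
            best = int(np.argmin(dists))
            if dists[best] <= self.radius:
                self.strength[best] += 1.0                     # reinforce
                self.centroids[best] += 0.1 * (vector - self.centroids[best])
                return best
        self.centroids.append(vector.copy())                   # new feature-symbol
        self.strength.append(1.0)
        return len(self.centroids) - 1

    def age(self):
        # Periodic decay of all symbol strengths.
        self.strength = [s * (1.0 - self.decay) for s in self.strength]
```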

LEXICON ANALYSIS COMPONENT

The Lexicon Analysis Component (LAC) builds a dictionary that includes combinations of co-occurring feature-symbols. These combinations of feature-symbols are feature-words that may represent a particular activity or event. The LAC learns and generates feature-words in the feature-dictionary from the SIAC output. The LAC may create new feature-words, decay existing feature-words, and reinforce feature-words over time.
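A minimal sketch of such a feature-word dictionary, assuming co-occurrence counting with periodic decay; the class name, word-length limit, and thresholds are illustrative only.

```python
from collections import Counter
from itertools import combinations

class Lexicon:
    """Illustrative LAC-style dictionary: co-occurring feature-symbols become
    feature-words whose counts are reinforced when seen again and decayed
    periodically (assumed structure, not the platform's actual API)."""

    def __init__(self, max_word_length=3, decay=0.99):
        self.words = Counter()
        self.max_word_length = max_word_length
        self.decay = decay

    def observe(self, symbols_in_window):
        unique = sorted(set(symbols_in_window))
        for size in range(2, self.max_word_length + 1):
            for word in combinations(unique, size):
                self.words[word] += 1                 # create or reinforce

    def age(self, floor=0.5):
        for word in list(self.words):
            self.words[word] *= self.decay            # decay
            if self.words[word] < floor:
                del self.words[word]
```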

SYNTAX ANALYSIS COMPONENT

The Syntax Analysis Component (SxAC) builds feature-syntax from the feature-words produced by the LAC. The SxAC receives the feature-words identified by the LAC and generates a connected graph, where the nodes of the graph represent the feature-words and the edges represent a relationship between the feature-words. The SxAC may reinforce or decay the links based on the frequency with which the feature-words relate to one another in a data stream.
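A small sketch of such a syntax graph, assuming adjacency in the stream as the relationship and frequency counts as link strength; names and decay parameters are assumptions, not the platform’s API.

```python
from collections import defaultdict

class SyntaxGraph:
    """Illustrative SxAC-style graph: nodes are feature-words and weighted
    edges capture how often two words follow one another in the stream."""

    def __init__(self, decay=0.01):
        self.edges = defaultdict(float)   # (word_a, word_b) -> link strength
        self.decay = decay

    def observe_sequence(self, feature_words):
        for a, b in zip(feature_words, feature_words[1:]):
            self.edges[(a, b)] += 1.0                          # reinforce link

    def age(self, prune_below=0.1):
        for link in list(self.edges):
            self.edges[link] *= (1.0 - self.decay)             # decay link
            if self.edges[link] < prune_below:
                del self.edges[link]
```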

CONTEXT ANALYSIS COMPONENT

The Context Analysis Component (CAC) builds feature-context from the feature-syntax output by the SxAC. The CAC performs sequence analysis over the feature-syntax to capture context, such as the trajectory of a tracked object.
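One way to picture this sequence analysis is a simple transition model over feature-words that scores how typical a new sequence, such as a trajectory, is; the class and scoring rule below are illustrative assumptions, not the platform’s method.

```python
from collections import defaultdict

class ContextModel:
    """Illustrative CAC-style sequence model: learns transition frequencies
    between feature-words and scores how typical a new sequence (for example,
    a tracked object's trajectory) is."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))

    def learn(self, word_sequence):
        for prev, cur in zip(word_sequence, word_sequence[1:]):
            self.transitions[prev][cur] += 1

    def score(self, word_sequence):
        # Average observed-transition probability; low scores flag unusual context.
        probs = []
        for prev, cur in zip(word_sequence, word_sequence[1:]):
            total = sum(self.transitions[prev].values())
            probs.append(self.transitions[prev][cur] / total if total else 0.0)
        return sum(probs) / len(probs) if probs else 0.0
```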

Figure 7: Cognitive Engine Diagram

The Cognitive Engine includes an inference net that retrieves data for processing from Short Term Memory, Episodic Memory, Long Term Memory, and Semantic Memory to make inferences, as shown in Figure 7. Continuously updated models may be stored in Short Term Memory, and the states of these models may be overwritten whenever the models are updated. To preserve these states, models with repetitive excitation are periodically persisted to Episodic Memory and Semantic Memory with some generalization. Repetitively excited memories from Episodic Memory are stored in Long Term Memory with further generalization of events.
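The promotion policy can be pictured as follows, assuming excitation counts drive persistence from Short Term Memory to Episodic Memory and then to Long Term Memory; the thresholds and method names are hypothetical, not the platform’s implementation.

```python
class MemoryHierarchy:
    """Illustrative promotion policy: models live in Short Term Memory, are
    persisted to Episodic Memory after repeated excitation, and generalized
    into Long Term Memory after further repetition (assumed thresholds)."""

    def __init__(self, episodic_threshold=5, long_term_threshold=20):
        self.short_term, self.episodic, self.long_term = {}, {}, {}
        self.excitations = {}
        self.episodic_threshold = episodic_threshold
        self.long_term_threshold = long_term_threshold

    def update(self, model_id, state):
        self.short_term[model_id] = state                        # overwrite in STM
        self.excitations[model_id] = self.excitations.get(model_id, 0) + 1
        count = self.excitations[model_id]
        if count >= self.episodic_threshold:
            self.episodic[model_id] = state                       # persist episode
        if count >= self.long_term_threshold:
            self.long_term[model_id] = state                      # generalize to LTM
```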

SEMANTIC MEMORY

Semantic Memory stores the generalized representation models of the Linguistic Engine.

SHORT TERM MEMORY

Short Term Memory is a hierarchical key-value data store for models that are new or updated, at multiple levels of abstraction.
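A minimal sketch of such a store, assuming models are keyed by abstraction level and identifier; all names and keys in the usage example are hypothetical.

```python
class ShortTermMemory:
    """Illustrative hierarchical key-value store: models are keyed by
    abstraction level and identifier, and writes overwrite prior state."""

    def __init__(self):
        self._levels = {}

    def put(self, level, key, model_state):
        self._levels.setdefault(level, {})[key] = model_state

    def get(self, level, key, default=None):
        return self._levels.get(level, {}).get(key, default)

# Hypothetical usage:
stm = ShortTermMemory()
stm.put("alpha", "sensor-7/feature-3", {"centroid": [0.2, 0.9]})
stm.put("gamma", "pattern-12", {"strength": 4.0})
```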

EPISODIC MEMORY

Episodic Memory is encoded with Sparse Distributed Memory generalizing the Short Term Memory.

LONG TERM MEMORY

The Long Term Memory is encoded with Sparse Distributed Memory generalizing the Episodic Memory.
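Both Episodic Memory and Long Term Memory build on Sparse Distributed Memory. The sketch below is a minimal Kanerva-style SDM (random hard locations, counter-based writes, thresholded reads) showing how generalized recall emerges; the sizes and radius are illustrative, and this is not the platform’s implementation.

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal Kanerva-style SDM sketch: binary vectors are written to all
    hard locations within a Hamming radius and read back by summing counters
    and thresholding (sizes are illustrative only)."""

    def __init__(self, address_bits=256, locations=1000, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(locations, address_bits), dtype=np.int8)
        self.counters = np.zeros((locations, address_bits), dtype=np.int32)
        self.radius = radius

    def _near(self, address):
        address = np.asarray(address, dtype=np.int8)
        distances = np.count_nonzero(self.addresses != address, axis=1)
        return distances <= self.radius

    def write(self, address, data):
        data = np.asarray(data)
        self.counters[self._near(address)] += np.where(data > 0, 1, -1)

    def read(self, address):
        summed = self.counters[self._near(address)].sum(axis=0)
        return (summed > 0).astype(np.int8)        # generalized recall
```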

INFERENCE NET

External sensory stimuli, converted into their linguistic representation, may trigger a corresponding condition that updates data on the activated inference node. The trigger criteria specify the conditions under which the processing logic executes after retrieving data from Short Term Memory, Episodic Memory, Long Term Memory, and/or Semantic Memory.
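A small sketch of a trigger-driven node, assuming a predicate over the incoming linguistic event and a handler with read access to the memory tiers; the interface and the example anomaly condition are hypothetical.

```python
class InferenceNode:
    """Illustrative inference node: a trigger predicate decides whether the
    node activates on an incoming linguistic event, and the handler is given
    read access to the memory tiers (assumed interface)."""

    def __init__(self, trigger, handler):
        self.trigger = trigger      # callable: event -> bool
        self.handler = handler      # callable: (event, memories) -> result

    def process(self, event, memories):
        if self.trigger(event):
            return self.handler(event, memories)
        return None

# Hypothetical node that fires on low-probability gamma symbols.
anomaly_node = InferenceNode(
    trigger=lambda event: event.get("probability", 1.0) < 0.05,
    handler=lambda event, memories: {"alert": event["symbol"],
                                     "context": memories.get("episodic")},
)
```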

SENSORY MEMORY

Sensory information from the Applications, such as FOUNDATION Video and FOUNDATION IoT, is stored in the Sensory Memory just long enough for data from these sensor types to be transferred into the Linguistic Engine.
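A minimal sketch of such a buffer, assuming a bounded queue that the Linguistic Engine drains; the capacity and method names are assumptions.

```python
from collections import deque

class SensoryMemory:
    """Illustrative bounded buffer: normalized frames from the applications
    are held just long enough for the Linguistic Engine to drain them."""

    def __init__(self, capacity=1024):
        self._frames = deque(maxlen=capacity)   # oldest frames drop when full

    def push(self, sensor_id, frame):
        self._frames.append((sensor_id, frame))

    def drain(self):
        frames, self._frames = list(self._frames), deque(maxlen=self._frames.maxlen)
        return frames
```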

NVIDIA GPU CUDA

The fields of AI and neural network processing have seen major advances in performance by using GPU co-processors, which allow thousands of GPU processing cores to operate concurrently on vector and array data, yielding a 10- to 30-fold improvement in overall performance compared to CPU-only computing methods. The FOUNDATION Platform technology is designed to allow a scalable collection of GPU processors to assist in the parallel processing of massive amounts of incoming input data. This significantly increases real-time responsiveness and the number of input data sensors that can be supported on a given system.
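As a rough sketch of GPU-assisted processing, the example below vectorizes a distance computation over many sensor batches and uses CuPy when a GPU is available, falling back to NumPy otherwise; the library choice and distance metric are assumptions, not the platform’s implementation.

```python
import numpy as np
try:                                    # use the GPU via CuPy when available
    import cupy as xp
except ImportError:                     # fall back to CPU-only NumPy
    xp = np

def score_batches(feature_batches, reference):
    """Illustrative GPU-friendly step: distances from many sensors' feature
    vectors to a reference model are computed in one vectorized call, so the
    same code scales across thousands of GPU cores."""
    scores = []
    ref = xp.asarray(reference, dtype=xp.float32)
    for batch in feature_batches:                  # one batch per sensor stream
        data = xp.asarray(batch, dtype=xp.float32)
        dists = xp.linalg.norm(data - ref, axis=1) # parallel across rows
        scores.append(float(dists.mean()))
    return scores
```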