AI & ML interests

AGI, ASI, Reactive Awareness Models, Real-Time Reactive Language Models, Memory Systems, Reactive Neural Networks & Event-Driven AI


Reactive AI

We are working on our own ideas - Reactive Neural Networks (RxNN) and Event-Driven AI - advancing from language models to AGI awareness models.

Reactive Neural Networks and Event-Driven AI

Reactive Neural Networks (RxNN) are memory-augmented neural networks with a higher level of recurrence (inter-sequence, vs. intra-sequence in RNNs), focused on processing single interactions with access to previous interactions via memory layers. We call this event-driven real-time processing, to distinguish it from the classical data-driven processing of the full conversation history in each interaction. This difference is crucial for AGI and awareness: a key feature of human awareness is that we remember what we were doing 10 minutes ago without recalling the whole day's history - we work in real time, just like event-driven Reactive Neural Networks.
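
The contrast can be made concrete with a minimal sketch, where `llm` and `rxnn` are hypothetical model objects and `generate`/`update_memory` are assumed method names, not a real API:

```python
# Minimal sketch of data-driven vs. event-driven processing.
# `llm`, `rxnn` and their methods are hypothetical placeholders.

def data_driven_turn(llm, history: list[str], query: str) -> str:
    """Classical LLM: re-encodes the whole conversation on every turn."""
    prompt = "\n".join(history + [query])   # cost grows with history length
    answer = llm.generate(prompt)
    history += [query, answer]              # history keeps accumulating
    return answer

def event_driven_turn(rxnn, memory, query: str):
    """RxNN: processes only the current interaction; the past lives in memory."""
    answer = rxnn.generate(query, memory)   # cost depends on this turn only
    new_memory = rxnn.update_memory(memory, query, answer)  # inter-sequence recurrence
    return answer, new_memory
```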

In Event-Driven AI, models process data in reaction to environment or internal events, and emit response events as a result. The processing of an input event and the emission of an output event by the model is called an interaction. An event or interaction can occur at any point in continuous time, so models have to be stateful and remember data between interactions.

Strong Reactive Neural Networks, like Reactor, can emit and listen to their own internal events, while Weak Reactive Neural Networks work only on environment events.
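
A hedged sketch of this distinction, again with a hypothetical `model.react` interface:

```python
from collections import deque

# Sketch of weak vs. strong reactive processing; `model.react` is hypothetical.

def run_weak(model, env_events):
    """Weak RxNN: reacts to environment events only."""
    for event in env_events:
        yield model.react(event)

def run_strong(model, env_events):
    """Strong RxNN: may also emit internal events and listen to them."""
    queue = deque(env_events)
    while queue:
        event = queue.popleft()
        response, internal_events = model.react(event)
        queue.extend(internal_events)   # the model listens to its own events
        yield response
```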

Stateful Reactive Language Models (RxLM)

Our Reactive Transformer (RxT), and its second, improved generation, extend stateless Transformer language models (almost all LLMs) by introducing an Attention-Based Memory System (ABMS) with Short-Term Memory (STM) or a multi-level Mixture-of-Memory (MoM, with Long-Term Memory). It is based on a higher level of recurrence and memory - not between tokens, as in SSMs, Linear Attention (which can be combined with RxLM) or RNNs, but between interactions (query and answer). This enables effective stateful processing with continual learning and infinite memory & context, and makes the models natively conversational & agentic.
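
As a rough illustration only (assumed shapes and layer names, not the actual RxT implementation), a gated attention-based memory update in PyTorch could look like this: a fixed set of STM slots attends over the encoded current interaction and is updated through a residual gate:

```python
import torch
import torch.nn as nn

class MemoryAttention(nn.Module):
    """Sketch of an ABMS-style update; shapes and gating are assumptions."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, stm: torch.Tensor, interaction: torch.Tensor) -> torch.Tensor:
        # stm: (batch, slots, dim) - state carried between interactions
        # interaction: (batch, seq_len, dim) - encoded current query + answer
        update, _ = self.attn(stm, interaction, interaction)
        g = torch.sigmoid(self.gate(torch.cat([stm, update], dim=-1)))
        return g * update + (1 - g) * stm   # gated write keeps older memories
```

The gate decides, per memory slot, how much of the new interaction overwrites older content, which is what lets state persist across many interactions.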

RxLM vs LLM advantages

Processing single interactions in real time lets Reactive Language Models achieve revolutionary improvements in inference speed and cost:

  • LLM inference costs grow quadratically with conversation length (accumulated with each new message), because the full dialog history is processed in every interaction
  • RxLM inference costs are linear, depending only on the tokens of a single interaction (not accumulated) - the Nth interaction is roughly N times cheaper than with an LLM
  • the same applies to inference speed - an LLM has to process the full history, while an RxLM processes only the single current message (only the first interaction could be slower, because of encoder/memory-attention overhead)

For example, in a dialog with DeepSeek R1 that had ~90k tokens overall, I paid for about 1.5M tokens. With an RxLM it would cost only those ~90k tokens, making it about 15x cheaper.
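
A quick back-of-the-envelope check of that claim, with illustrative numbers chosen to roughly match the example above:

```python
# Back-of-the-envelope check of the scaling claim (illustrative numbers only).
# Assume a dialog of `turns` interactions, each `tokens_per_turn` tokens long.
turns, tokens_per_turn = 30, 3_000        # ~90k tokens of actual conversation

# LLM: every turn re-processes the whole accumulated history.
llm_tokens = sum(k * tokens_per_turn for k in range(1, turns + 1))

# RxLM: every turn processes only its own tokens; memory carries the past.
rxlm_tokens = turns * tokens_per_turn

print(f"LLM billed tokens:  {llm_tokens:,}")        # 1,395,000 (~1.4M)
print(f"RxLM billed tokens: {rxlm_tokens:,}")       # 90,000
print(f"ratio: {llm_tokens / rxlm_tokens:.1f}x")    # 15.5x
```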

Reactor AGI

Our final goal - Reactor - is planned as the first awareness AGI model. It models consciousness as an Infinite Chain-of-Thoughts, connected to a Mixture-of-Memory (MoM) within the Attention-Based Memory System, and to Receptor/Effector systems for real-time reactive processing. It will be able to learn constantly and autonomously from interactions in a Continuous Live Learning process.
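
A purely speculative sketch of such a loop (every name here is hypothetical; Reactor is not a released system): between external events, the model keeps emitting internal thought events, each of which updates memory:

```python
# Speculative sketch of an Infinite Chain-of-Thoughts loop; all names are
# hypothetical placeholders, not a real API.

def reactor_loop(model, memory, receptors, effectors):
    while True:
        event = receptors.poll()                 # external event, or None
        if event is None:
            event = model.next_thought(memory)   # internal event: a thought
        response, memory = model.react(event, memory)  # updates memory (MoM)
        if response is not None:
            effectors.emit(response)             # act on the environment
```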

Visit our website! [Work in progress]