This past week at NVIDIA GTC, a Silicon Valley startup, ORBAI, demonstrated its revolutionary BICHNN SNN Autoencoder AI technology as part of its NeuroCAD tool suite. This SNN technology uses third-generation spiking neural networks running on NVIDIA GPUs to autoencode video, speech, vision, and other data, using fully unsupervised learning that continues at run-time, even after deployment.
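To make the idea of run-time, unsupervised learning in a spiking network concrete, here is a minimal sketch in Python/NumPy. It is not ORBAI's BICHNN or NeuroCAD code; the leaky integrate-and-fire neuron model, the Poisson-style input spikes, the time constants, and the simplified STDP-flavoured weight update are all illustrative assumptions, chosen only to show how a spiking layer can keep adapting to its inputs without labels while it runs.

```python
import numpy as np

# Minimal sketch of a spiking layer with a local, unsupervised (STDP-style) update.
# Purely illustrative; the neuron model, constants, and learning rule are assumptions,
# not ORBAI's implementation.

rng = np.random.default_rng(1)

n_in, n_out = 16, 8            # input channels -> spiking neurons
T = 200                        # simulation steps
dt = 1.0                       # ms per step
tau_mem = 20.0                 # membrane time constant (ms)
v_thresh = 1.0                 # firing threshold
weights = rng.uniform(0.0, 0.5, size=(n_out, n_in))

# Poisson-like input spikes drawn from a fixed "stimulus" pattern of rates.
rates = rng.uniform(0.02, 0.2, size=n_in)          # spike probability per step
in_spikes = (rng.random((T, n_in)) < rates).astype(float)

v = np.zeros(n_out)                                # membrane potentials
trace_in = np.zeros(n_in)                          # presynaptic eligibility trace
lr = 0.01

for t in range(T):
    # Leaky integration of weighted input spikes.
    v += dt / tau_mem * (-v + weights @ in_spikes[t])
    out_spikes = (v >= v_thresh).astype(float)
    v[out_spikes == 1] = 0.0                       # reset neurons that fired

    # STDP-flavoured update: strengthen weights from recently active inputs
    # whenever the output neuron fires (runs continuously, no labels needed).
    trace_in = 0.9 * trace_in + in_spikes[t]
    weights += lr * np.outer(out_spikes, trace_in)
    weights = np.clip(weights, 0.0, 1.0)

print("Mean weight after unsupervised run:", weights.mean())
```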
This single architecture is a powerful general-purpose neural computer, capable of replacing today's application-specific networks built on DNNs, CNNs, RNNs, Transformers, and other architectures with one unified general architecture over the next 3-5 years.
Building on this SNN Autoencoder technology, ORBAI is developing Artificial General Intelligence that will enable more advanced AI applications, with conversational speech, human-like cognition, and planning and interaction with the real world, all learned without supervision. It will find first use in smart devices, homes, and robotics, then in online professional services powered by an AGI at their core.
What we usually think of as Artificial Intelligence (AI) today, the human-like robots and holograms of fiction that talk and act like real people and have human-level or even superhuman intelligence and capabilities, is actually called Artificial General Intelligence (AGI), and it does NOT exist anywhere on Earth yet. What we actually have for AI today is much simpler and much narrower Deep Learning (DL), which can outperform people only at certain very specific tasks and has fundamental limitations that will not allow it to become AGI.
The SNN Autoencoder technology that ORBAI is developing and patenting can dynamically encode whatever it perceives into fundamental building blocks, basis sets with corresponding basis coordinates, that it can use to understand and manipulate that reality in the native mathematical language of linear algebra and computers. It can then reconstruct its results from those building blocks back into real-world form, giving computer-based AI the ability to take in general real-world inputs and perform general intelligence operations on them.
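As a rough illustration of the encode-manipulate-reconstruct pattern described above (not ORBAI's patented SNN Autoencoder), the sketch below learns a small linear basis from data with an ordinary SVD, encodes each sample as basis coordinates, operates on those coordinates with plain linear algebra, and reconstructs an approximation of the original signal. The 4-dimensional basis, the toy data, and the SVD fitting step are assumptions made only for the example.

```python
import numpy as np

# Encode into a basis, operate in coordinates, reconstruct.
# Uses an ordinary SVD-derived linear basis purely to show the pattern;
# it is not ORBAI's SNN Autoencoder.

rng = np.random.default_rng(0)

# Toy "perceived reality": 500 samples of a 64-dimensional signal that
# actually lives near a 4-dimensional subspace, plus noise.
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 64))
signals = latent @ mixing + 0.05 * rng.normal(size=(500, 64))
mean = signals.mean(axis=0)

# "Learn" a basis set from the data (top right-singular vectors).
_, _, vt = np.linalg.svd(signals - mean, full_matrices=False)
basis = vt[:4]                                    # (4, 64) building blocks

# Encode: basis coordinates for each sample.
coords = (signals - mean) @ basis.T               # (500, 4)

# Manipulate in coordinate space (e.g., blend two perceptions).
blend = 0.5 * coords[0] + 0.5 * coords[1]

# Reconstruct: map coordinates back to the original signal space.
reconstruction = blend @ basis + mean

error = np.linalg.norm(signals[0] - (coords[0] @ basis + mean))
print(f"Reconstruction error for sample 0: {error:.3f}")
```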
With these developments, ORBAI will take the first steps towards an AGI that can perceive the real world, reduce those perceptions to an internal format computers can understand, yet still plan, think, and dream like a human, then convert the results back to human-understandable form and even converse fluently in human language, enabling online professional services in finance, medicine, law, and other areas. The same enhanced analytics, forecasting, and decision-making capabilities can also be added to financial and enterprise software, where they can be used by businesses large and small. ORBAI's business model is to license the development tools and a developer toolkit to customers and third-party developers that work with them, then provide access to the AGI as SaaS, enabling its developer network to connect to it with data and applications for various customer needs.