The Future of AI Will Take a Different, More General Approach


ORBAI aims to develop a human-like AI with fluent conversational speech

Santa Clara, CA, Oct. 09, 2021 (GLOBE NEWSWIRE) — The California-based startup ORBAI has developed and patented a design for AGI that can learn more like the human brain by interacting with the world, encoding and storing memories in narratives, dreaming about them to form connections, creating a model of its world, and using that model to predict, plan and function at a human level, in human occupations.

With this technology, ORBAI aims to develop Human AI, with fluent conversational speech on top of this AGI core, to provide AI professional services, all the way from customer service to legal, medical, to financial advice, with these services provided online, inexpensively, for the whole world. The core of the Legal AI has already been tested in litigation, with great success.

Brent Oster, President/CEO of ORBAI, has helped Fortune 500 companies (and startups) looking to adopt ‘AI’, but consistently found that deep learning architectures and tools fell far short of their expectations for ‘AI’. Brent started ORBAI to develop something better for them.

Today, if we browse the Internet for news on AI, we find that AI has just accomplished something humans already do, only far better. Still, it isn’t easy to develop artificial general intelligence (AGI) through human-created algorithms. Do you think AGI may require machines to create their own algorithms? In your view, what is the future of machines that learn to learn?

That is correct. Today, people design deep learning networks by hand, defining the layers and how they connect, but even after a lot of tinkering, they can only get each network to do a specific task: CNNs for image recognition, RNNs for speech recognition, or reinforcement learning for simple problem solving such as games or mazes. All of these require a very well-defined and constrained problem, plus labelled data or human input to measure success and train. This limits the effectiveness and breadth of application of each of these specific methods.

ORBAI has built a toolset called NeuroCAD that uses a process with genetic algorithms to evolve more powerful and general-purpose spiking neural networks, shaping them to fill in the desired functionality, so yes, the tools are designing the AI. One example is our SNN autoencoder, which can take in any type of 2D or 3D spatial-temporal input, encode it into a latent, compressed format, and decode it again. The cool part is that you don’t have to format or label your data; it learns the encoding automatically. This takes the functionality of CNNs, RNNs, LSTMs, and GANs and combines them into one more powerful, general-purpose analog neural network that can do all these tasks. By itself this is very useful, as the output can be clustered, the clusters labelled or associated with other modalities of input, or used to train a conventional predictor pipeline.
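The core idea of an autoencoder learning a compressed latent code from unlabeled data can be illustrated with a toy. The sketch below is a minimal linear autoencoder in plain Python, trained by gradient descent on unlabeled 2-D points; it is only a stand-in for the concept (ORBAI's actual autoencoder is a spiking, spatio-temporal network, and nothing here reflects NeuroCAD's real internals).

```python
import random

random.seed(0)

# Unlabeled data: points near the line y = 2x, so one latent dimension suffices.
data = [(x, 2 * x + random.gauss(0, 0.05))
        for x in [i / 10 for i in range(-10, 11)]]

# Encoder: z = w1*x + w2*y ; Decoder: (x', y') = (v1*z, v2*z)
w1, w2, v1, v2 = 0.5, 0.5, 0.5, 0.5
lr = 0.05

for epoch in range(500):
    for x, y in data:
        z = w1 * x + w2 * y              # encode to 1-D latent
        xr, yr = v1 * z, v2 * z          # decode back to 2-D
        ex, ey = xr - x, yr - y          # reconstruction error
        # Gradient steps on 0.5*(ex^2 + ey^2)
        dz = ex * v1 + ey * v2
        v1 -= lr * ex * z
        v2 -= lr * ey * z
        w1 -= lr * dz * x
        w2 -= lr * dz * y

# Total squared reconstruction error after training; no labels were ever used.
total_err = sum((v1 * (w1 * x + w2 * y) - x) ** 2 +
                (v2 * (w1 * x + w2 * y) - y) ** 2 for x, y in data)
print(round(total_err, 3))
```

The latent values `z` produced by the trained encoder are the kind of output that could then be clustered and labelled after the fact, as described above.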

But this is for designing components. There is a second level to NeuroCAD that allows these components to be assembled and connected into structures, and these composite structures can be evolved to do very general tasks. For example, we may want to build a robot controller, so we put in two vision autoencoders for stereo vision, a speech-recognition autoencoder for voice commands, and autoencoders for the sensors and motion controllers. Then we put an AI decision-making core in the middle that can take in our encoded inputs, store them in memory, learn how sequences of these inputs evolve in time, and store models of what responses are required. Again, each of these autoencoders and components is evolved for its specific area, how they connect is evolved, and so is the decision core in the middle.
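The assembly idea above can be sketched as a pipeline of encoder components feeding a central decision core. Every function, name, and the fixed wiring below is an illustrative assumption, not NeuroCAD's actual representation; in the real system both the components and the wiring would be evolved rather than hand-written.

```python
# Hypothetical stand-in encoders: each maps a raw sensor stream to a small
# latent vector, mimicking the role of the evolved autoencoders.
def stereo_vision_encoder(frame_pair):
    left, right = frame_pair
    return [sum(left) / len(left), sum(right) / len(right)]

def speech_encoder(audio):
    return [max(audio), min(audio)]

def decision_core(latents):
    # Stand-in policy: concatenate all latents and threshold their sum.
    flat = [v for lat in latents for v in lat]
    return "advance" if sum(flat) > 0 else "halt"

# Wiring: which encoder consumes which sensor stream. In the described
# system this connection graph would itself be subject to evolution.
pipeline = [
    ("camera", stereo_vision_encoder),
    ("microphone", speech_encoder),
]

def controller(sensors):
    latents = [encoder(sensors[name]) for name, encoder in pipeline]
    return decision_core(latents)

sensors = {"camera": ([0.2, 0.4], [0.3, 0.5]), "microphone": [0.1, -0.6]}
print(controller(sensors))
```

The point of the structure is that the decision core only ever sees compact latent codes, never raw sensor data, so components can be evolved independently and then recombined.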

To get this to work, we have to make some guesses about how to design this artificial decision core, the brain in the middle, and seed the genetic algorithms with a couple of decent designs, so it will process the sensory input, store it, and build relationships between the memories, building narratives from inputs and actions with progressively more advanced models that make the robot better able to understand what to do given specific instructions and the state of its world. Once we have an initial guess, we start evolving the components, how they connect to each other, and the architecture of the decision-making core.

So the short answer is yes: we will have evolutionary genetic algorithms design our AI, from the components, to the way they connect, to how they solve problems, starting with small ‘brains’ and working up, as biological evolution did.

For details, see the ORBAI patents and NVIDIA GTC presentations listed at the bottom of our AGI page.

Many experts, including computer scientists and engineers, predict that artificial general intelligence (AGI) is possible in the near future. But ORBAI suggests that it is coming even sooner than we likely anticipated. Could you please shed some light on the project and tell us more about the 3D characters?

What is usually meant is superhuman AGI, which is the apex of this process, but there are degrees and flavors of artificial general intelligence along the way.

– Having more general neural nets that combine the functionality of CNNs, RNNs, RL, and other Gen 2 AI components into one neural net architecture that is more general and more powerful – One year

– Building an artificial intelligence that can take in sensory inputs, form memories and associations between them, and plan and make decisions with them, at the level of an insect – Two years; a rodent – Three years

– Human-like conversational speech and general-purpose decision making, but trained only in a specific vocation – Four years for a first implementation, six years to make it really work. Some vocations like law and medicine have constrained spaces of information and decisions, so they are easier than building a general human

– These vocational AIs can be trained independently, then later migrated to a common architecture and combined to form a multi-skilled AGI. It would not be a general human AI, but it would have superhuman capability in areas of each profession, deeper and wider knowledge, and the ability to model the future, plan, and predict better than humans.

– Perfecting AGI, making a completely conversational, human-level general AI that is indistinguishable from us and can pass a Turing test, will most likely require building a synthetic AGI that is much more powerful than a human, which can then use all that power to emulate or mimic a human being, if that is what we want it to do.

What most people talk about as AGI is actually superhuman artificial general intelligence. But how do we measure “superhuman”? Deep learning AI is already superhuman in some very specific areas, and with advances like those ORBAI is making, it will become superhuman in broader professional areas of analysis, planning, and prediction. We will have better conversational speech, and we might pass the Turing test in 4-6 years, but how can speech become superhuman after that? Mastering eight languages or more? Hm, this gets a bit muddier. I think superhuman is when AGI can solve most problems and predict the future far better than us.

We base our AGI curve on Moore’s Law, and unlike current Gen 2 DNN-based AI, we are using analog neural-net computers that scale proportionally with existing hardware, evolve to become more efficient, and gain greater capability over time.

So in summary what ORBAI is building is an AGI that can take in and analyze large amounts of arbitrary format and types of input data, build models of how its perceived world works, and make predictions and plan using those models, then apply that to specific fields like law, medicine, finance, enterprise planning, administration, agriculture, and others. Because human speech fits this concept of modelling a bi-modal sequence of events, it will be a feature, with the speech anchored to the rest of the memories and world data to give it context and relevance.

From ordering groceries through Alexa to writing an email with Siri, AI has been transforming many aspects of our lives. In your view, how will ORBAI’s 3D characters help people transform their lives and bring about change?

I have personally used the Alexa, Google and Siri voice interfaces in my home and have done my best to integrate them into my life and make use of…

