The Future of AI

9/30/2023

Artificial intelligence has recently come to the foreground of technology. With ChatGPT's popularity booming, Tesla's sales going through the roof, and tech giants learning your smart home usage stats, our world is rapidly entering a new era. So as we enter the age of Smart Technology, it is important to understand what "smart" really means.

What is Intelligence?

Since practically the dawn of man, intelligence has been something we seek to measure, and many metrics have been developed in this search. Intelligence Quotient tests, or IQ tests, are by far the most well-known to date. Many criticisms have arisen in response to this method of testing. One is that it fails to measure anything of real significance, since the IQ test posits that intelligence is universal and immutable. Another is that it measures only a very narrow aspect of intelligence; the emotional component, for example, is not measured at all. The Emotional Intelligence (EQ) test claims to address this gap. However, as with the IQ test, a fundamental assumption of these alternative tests is the immutability and universality of intelligence. So, if we can't even talk about intelligence in humans without major disagreement over objective testing methods, how can we measure it in machines?

General Intelligence

One way is by measuring the specificity of intelligence. Humans have general intelligence, meaning we excel at adapting to new scenarios and relating them to existing knowledge. Unfortunately, this same pattern recognition often also comes as a hindrance to our learning. Humans are highly susceptible to drawing false relations between existing phenomena, or to incorrectly abstracting a phenomenon. (Side note - in people with schizophrenia, this same circuit is overactive: they draw too many correlations without discrimination, which is what leads to grand delusions and misconceptions of reality.) I believe this to be a primary cause of the Dunning-Kruger effect - the overconfidence early on comes from falsely correlating a new topic to other topics in which we may truly be competent. It isn't until we learn what we don't know about a new topic that we can begin gaining knowledge. It is precisely here, free of misleading priors, that existing artificial intelligence systems can excel.

Expert Intelligence

Machine learning systems do not face the challenge of the Dunning-Kruger effect. They have no prior knowledge to overcome and act as a true blank slate. This allows them to rapidly reach expert levels, but only in the topics they are trained on.

ChatGPT and LLMs

ChatGPT is composed of two types of models. The first is a large language model, or LLM for short. It has much knowledge about the English language, but does not truly know the meanings of words; all it understands are relationships between words. From a phenomenological perspective, since ChatGPT has no lens through which to view the world, it cannot know the meanings of words. This is where the second model type comes in. In addition to being an LLM, ChatGPT has a generative text component, which means it is very good at using those relationships between words to compose responses. It does so with extreme confidence. The problem is that it also makes things up and cites false sources. This is almost inevitable for a system specialized in one very particular task: it will do what it is good at, and only that. So, from a practical perspective, how might we be able to improve its function?
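To make the "relationships between words" idea concrete, here is a minimal sketch in Python. It is a toy bigram model - nothing like ChatGPT's actual transformer architecture - and the corpus is invented for illustration. The point is that such a model can produce fluent-looking output while knowing nothing about what any word means.

```python
from collections import Counter, defaultdict
import random

# Toy corpus, invented for illustration.
corpus = "the apple is red the apple is sweet the sky is blue".split()

# Learn only the relationships between words: which word follows which.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(start, length=5):
    """Emit text purely from observed word-to-word statistics."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        # Choose the next word in proportion to how often it followed this one.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the apple is blue the sky" - fluent form, zero grounding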

Path to General

Fractal Intelligence

The first route may be a fractal intelligence. Humans have specialized subsystems within their brains, and in this way ChatGPT is already an improvement on previous models - it uses separate models for its speech processing and text generation. However, it does not address the problem of hierarchical knowledge. An approach that may work is a taxonomical view of the world. In a normal language model, all input is pumped through the same model. An alternative would be to create layers of increasingly specific classifiers, rather than letting one monolithic system handle everything. For example, say I passed in an image of a German Shepherd and asked the system to identify the animal. The first classifier model might distinguish between art, static lifeforms (like plants), natural objects (like rocks), manmade objects (like chairs), humans, and a few broad animal subclassifications, like mammals, birds, or fish. The second model would be more specific, narrowing mammals down to dogs. The third model would be more specific still, distinguishing between dog breeds. Finally, the information from the image models would be passed to a generative text engine, which would come back with a human-readable result about what exactly it was shown - a German Shepherd. A sketch of this routing follows.
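Here is a minimal sketch of that layered routing. The classify_* functions are hypothetical stand-ins; in a real system each level would be a separately trained image model.

```python
def classify_broad(image):
    """Level 1: art, plant, rock, manmade object, human, mammal, bird, or fish."""
    return "mammal"  # placeholder prediction

def classify_mammal(image):
    """Level 2: narrows a mammal down to a family, e.g. dog, cat, horse."""
    return "dog"

def classify_dog_breed(image):
    """Level 3: narrows a dog down to a breed."""
    return "German Shepherd"

# Each label that has a more specific classifier routes the input one level deeper.
NEXT_LEVEL = {
    "mammal": classify_mammal,
    "dog": classify_dog_breed,
}

def identify(image):
    label = classify_broad(image)
    while label in NEXT_LEVEL:
        label = NEXT_LEVEL[label](image)
    # Finally, a generative text component would phrase the result for a human.
    return f"This appears to be a {label}."

print(identify("german_shepherd.jpg"))  # -> "This appears to be a German Shepherd."
```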

Experiential Intelligence

Another route would be to provide experiential intelligence: equipping a machine with sensors such as a camera module, audio input and output, and other emulators of the human senses. The next component of this method would be almost like raising the machine as a child - introducing it to new experiences and teaching it about the world step by step. This, of course, raises a question: would this be ethical, for either the machine or the human looking after it? That question is well explored in media such as Ghost in the Shell and I, Robot.
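As a minimal sketch, assuming hypothetical sensor inputs and a simple episodic memory, the core loop might look something like this: a caretaker names what the machine perceives, and the machine grounds the label in raw sensory data rather than in other words.

```python
class ExperientialAgent:
    def __init__(self):
        self.experiences = []  # episodic memory of raw sensory moments

    def experience(self, camera_frame, audio_clip, caretaker_label=None):
        """Record a moment of perception, optionally named by a caretaker."""
        self.experiences.append(
            {"sight": camera_frame, "sound": audio_clip, "label": caretaker_label}
        )
        # A real system would also update its model weights here, grounding
        # the label in sensation rather than in relationships between words.

    def recall(self, label):
        """Retrieve every remembered moment associated with a concept."""
        return [e for e in self.experiences if e["label"] == label]

agent = ExperientialAgent()
agent.experience(camera_frame="frame_0001.png", audio_clip="crunch.wav",
                 caretaker_label="apple")
print(agent.recall("apple"))  # the machine's "memories" of apples
```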

Neurocomputing

The next possible route would be to use advanced neurocomputation. Currently, studies are being done into the viability of using real human and rat neurons in the context of learning. These have significantly exceeded the performance of artificial intelligence models, but are highly costly to maintain in their current state. This final method may be the best route to take - rather than using a simulated neural network, using real neurons to create an advanced computer network. But again, problems arise around the ethics of this, and they are best explored in media such as Psycho-Pass and Deus Ex: Human Revolution.

Hybrid

The final route would be some hybrid of the above solutions, or of solutions not included in this document.

Considerations

Whichever of the above methods were used, it may be difficult to define when an artificial intelligence reaches what we consider to be true intelligence. The Turing Test, first proposed by Alan Turing in 1950, may be one way to test a machine's intelligence. But even if a standard Turing Test were passed, the experiential component must be considered. Does the machine truly have perception like a human's, and if not, will the machine truly understand human reality? If the machine has a form of general intelligence but doesn't understand human reality, should it be allowed to exist alongside humans?

The Apple Analogy

A language model may understand how to explain an apple. It may correlate that apples are red or yellow or green, that they are crunchy, sweet, and juicy. It may understand that apples grow on trees in orchards. It may even understand that Apple is a company. But no matter how much it may correlate about that apple, it will have no memories of going apple picking with its father when it was ten years old. This is a memory I have, and one that contributes to my definition of what an apple is. This idea extends to all things in the world. Even if the experiential learning path were taken, the machine may never have a true understanding of human intelligence, and may never grow beyond correlating words and images.

Mass Data Collection

There is one more path. It entails collecting data from hundreds of thousands of real humans interacting with real devices that are collecting real data on them. From this, average personalities may be aggregated. These personality profiles could then be run through behavioral simulations to see what actions people are likely to take in the future based on what actions they have taken in the past.
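As a minimal sketch, with invented event logs and a deliberately crude behavioral model, the aggregate-then-simulate loop might look like this:

```python
from collections import Counter

# Hypothetical interaction logs: (user_id, action) pairs collected from devices.
events = [
    ("u1", "watch_tech_video"), ("u1", "watch_tech_video"), ("u1", "buy_gadget"),
    ("u2", "read_news"), ("u2", "read_news"), ("u2", "watch_tech_video"),
]

# Aggregate each user's history into a crude "personality profile":
# the frequency of each action they have taken.
profiles = {}
for user, action in events:
    profiles.setdefault(user, Counter())[action] += 1

def predict_next_action(user):
    """Simulate future behavior from past behavior: the most frequent action."""
    history = profiles[user]
    action, count = history.most_common(1)[0]
    return action, count / sum(history.values())  # action and estimated probability

print(predict_next_action("u1"))  # -> ('watch_tech_video', 0.666...)
```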

This is the basic concept of advertising data profiles. Every major tech company has a personality profile on you - more complete than you may be willing to accept - and is running simulations on it to determine which actions you are most likely to take in the future. The currently stated goal is targeted, personalized advertising.

Algorithmic Fatalism

It is here that I coin the term Algorithmic Fatalism. In a world with this mass data collection, and with our high attachment to digital devices, do we truly have free will? Is our fate predetermined by what people call "The Algorithm," the content feed that serves us whatever will occupy our attention the longest, while simultaneously selling us products and, more importantly, ideologies? Can one truly know the sanctity of their own mind if the media they consume is chock-full of subliminal memetics? Is there any difference between an algorithm that knows you better than you know yourself and controls your fate, and a God?