Friday, September 22, 2017

Fast Unsupervised Pattern Learning Using Spike Timing


In my previous article on the problem with backpropagation, I made the case for using timing as the critic for unsupervised learning. In this article, I define what a sensory spike is, explain the difference between pattern learning in the brain and in neural networks, and reveal a simple and superfast method for learning concurrent patterns. Please note that this is all part of an ongoing project. I will have a demo program ready at some point in the future. Still, these articles will provide enough information for someone with adequate programming skills to implement their own unsupervised spiking neural network.

Sensors and Spikes

A sensor is an elementary mechanism that emits a discrete signal (a spike or pulse) when it detects a phenomenon, i.e., a change or transition in the environment. A spike is a discrete temporal marker that alerts an intelligent system that something just happened. The precise timing of spikes is extremely important because the brain cannot learn without it. There are two types of spikes, one for the onset of stimuli and the other for the offset. This calls for two types of sensors, positive and negative. A positive sensor detects the onset of a phenomenon while a negative sensor detects the offset.
For example, a positive audio sensor might detect when the amplitude of a sound rises above a certain level, and a complementary negative sensor would detect when the amplitude falls below that level. The diagram above depicts an amplitude waveform plotted over time. The horizontal line represents an amplitude level. The red circle A represents the firing of a positive sensor and B that of a negative sensor. In this example, sensor A fires twice as we follow the amplitude from left to right. To properly sense a variable phenomenon such as the amplitude of an audio signal, the system must have many sensors to handle many amplitude levels. A complex intelligent system such as the human brain has millions of elementary sensors that respond to different amplitude levels and different types of phenomena. Sensors send their signals directly to pattern memory, where they are grouped into concurrent patterns. Every sensor can make multiple connections with neurons in pattern memory.
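The sensor pair described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the article's project; the function name and spike representation are my own assumptions.

```python
def sense(samples, level):
    """Emit (time, polarity) spikes when the amplitude crosses `level`.
    '+' marks an onset (rise above the level), '-' an offset (fall below),
    corresponding to the positive and negative sensors described above."""
    spikes = []
    for t in range(1, len(samples)):
        prev, curr = samples[t - 1], samples[t]
        if prev <= level < curr:
            spikes.append((t, '+'))   # positive sensor fires on the onset
        elif prev >= level > curr:
            spikes.append((t, '-'))   # negative sensor fires on the offset
    return spikes

# A toy waveform that rises above the 0.5 level twice, as in the diagram.
wave = [0.0, 0.3, 0.7, 0.9, 0.4, 0.2, 0.6, 0.1]
print(sense(wave, 0.5))  # [(2, '+'), (4, '-'), (6, '+'), (7, '-')]
```

Note that the spikes carry no amplitude values, only times and polarities; a full system would run one such sensor pair per amplitude level.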

Pattern Learning: Brain Versus Neural Networks

To a spiking neural net, such as the brain's sensory cortex, a pattern is a set of spikes that often arrive concurrently. To a deep neural net, a pattern is a set of data values. Unlike neural networks, the brain's pattern memory does not learn to detect very complex patterns, such as a face, a car, an animal or a tree. Strangely enough, in the brain, the detection of complex objects is not the job of pattern memory but of sequence memory. Pattern memory only learns to detect small elementary patterns (e.g., lines, dots and edges) which are the building blocks of all objects. The brain's sequence memory combines or pools many small pattern signals together in order to instantly detect complex objects, even objects that it has never encountered before.

Note: I will explain the architecture and working of sequence memory in an upcoming article.

Pattern Memory

Knowledge in the brain is organized hierarchically like a tree. In my view (which is, unfortunately, not shared by Jeff Hawkins' team at Numenta), an unsupervised perceptual learning system must have two memory hierarchies, one for pattern detection and the other for sequence detection. As seen in the diagram below, the pattern hierarchy consists of multiple levels arranged like a binary tree. I predict, based on my research, that the brain's pattern hierarchy resides in the thalamus (there is no other place for it to be) and that it has 10 levels. This means that pattern complexity in the brain ranges from a minimum of 2 inputs at the bottom level to a maximum of 1023 inputs at the top level. I have my reasons for this but they are beyond the scope of this article.

Sensors are connected to the bottom level (level 1) of the hierarchy. A pattern neuron (small red filled circles) can have only two inputs. But like a sensor, it can send output signals to an indefinite number of target neurons. Connections are made only between adjacent layers in the hierarchy. This is known as a binary tree arrangement. Every pattern neuron in the hierarchy also makes reciprocal connections to a sequence neuron (not shown) at the bottom level of sequence memory (more on this later). The hierarchical structure of pattern memory makes it possible to learn as many different pattern combinations as possible while using as few connections as possible.
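The binary-tree arrangement just described can be sketched as a data structure. The class and function names below are illustrative assumptions; the article does not prescribe an implementation.

```python
import random

class PatternNeuron:
    """A node in the pattern hierarchy: exactly two inputs, unbounded fan-out."""
    def __init__(self, inputs):
        assert len(inputs) == 2
        self.inputs = inputs  # sensors (level 1) or neurons one level below

def build_hierarchy(sensors, levels):
    """Randomly wire `levels` layers; each layer connects only to the layer
    immediately below it, as the binary-tree arrangement requires."""
    below = list(sensors)
    hierarchy = []
    for _ in range(levels):
        width = max(1, len(below) // 2)
        layer = [PatternNeuron(random.sample(below, 2)) for _ in range(width)]
        hierarchy.append(layer)
        below = layer
    return hierarchy

h = build_hierarchy(['s%d' % i for i in range(16)], levels=4)
print([len(layer) for layer in h])  # [8, 4, 2, 1]
```

Halving the width at each level keeps connections sparse while still allowing many different pattern combinations to form.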

Fast Unsupervised Pattern Learning

To repeat, the goal of pattern learning is to discover non-random elementary patterns in the sensory stream. Pattern learning is fully unsupervised in the brain, as it should be. That is to say, it is a bottom-up process dictated solely by the environment and the signals emitted by the sensors. Every learning system is based on trial and error, and as such, must have a critic to correct it in case of error. In the brain, the critic is in the precise temporal correlations between the sensory spikes. The actual pattern learning process is rather simple. It is based on the observation that non-random patterns occur frequently. It works as follows:
  • Start with a fixed number of unconnected pattern neurons at every level of the hierarchy.
  • Make random connections between the sensors and the neurons at the bottom level.
  • If the input connections of a neuron fire concurrently 10 times in a row, the neuron is promoted and the connections become permanent.
  • If a connection fails the test even once, it is immediately disconnected. Failed inputs are quickly resurrected and retried randomly.
As soon as a neuron gets promoted, it can make connections with the sequence hierarchy (not shown) and with the level immediately above its own, if any. The same concurrency test is applied at every level but perfect pattern detection is a must during learning. Excellent results can be obtained even if some inputs are never connected. Pattern learning is fast, efficient and can be scaled to suit different applications. Just use as many or as few sensors and neurons as is necessary for a given task. Connections are sparse, which means that bandwidth requirements are low.
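The concurrency test at the heart of the procedure above can be sketched as follows. The names (`CandidateNeuron`, `PROMOTION_STREAK`) are my own, and resetting the streak on failure is a simplification of the article's disconnect-and-retry step.

```python
PROMOTION_STREAK = 10

class CandidateNeuron:
    """A pattern neuron candidate with two trial input connections."""
    def __init__(self, input_a, input_b):
        self.inputs = (input_a, input_b)
        self.streak = 0
        self.promoted = False

    def observe(self, spikes):
        """`spikes` is the set of sources that fired in the current cycle."""
        if self.promoted:
            return
        a, b = self.inputs
        if a in spikes and b in spikes:
            self.streak += 1
            if self.streak >= PROMOTION_STREAK:
                self.promoted = True  # connections become permanent
        elif a in spikes or b in spikes:
            # One input fired without the other: the test fails. (The article
            # disconnects the input and retries randomly; resetting the
            # streak stands in for that here.)
            self.streak = 0

n = CandidateNeuron('a', 'b')
for _ in range(10):
    n.observe({'a', 'b'})
print(n.promoted)  # True
```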

Given that sensory signals are not always reliable and that only perfect pattern detections are used during learning, the process slows down as one goes up the hierarchy. This limits the number of levels in the hierarchy and the maximum complexity of learned patterns. This is why the number of levels in the pattern hierarchy is only 10. In a computer application, we can use fewer levels and get good overall results. The goal is to create enough elementary pattern detectors to enable object detection in the sequence hierarchy. Note that the system does not assume that the world is probabilistic. No probabilistic computations are required. The system assumes that the world is deterministic and perfect. Errors or missing information are attributed to accidents, and the system will try to correct them if possible.

But why require 10 consecutive firings in a row? Why not 2, 5 or 20? Keep in mind that this is a search for concurrent patterns that occur often enough to rise above mere random noise. The choice of 10 is a compromise: requiring fewer than 10 would risk learning useless noise, while requiring more than 10 would slow down the learning process.

Pattern Pruning

The pattern hierarchy must be pruned periodically in order to remove redundancies. A redundancy is the result of a closed loop in the hierarchy.

Looking at the diagram above, we see a closed loop formed by sensor D and the pattern neurons A, B and C. This is forbidden because signals emitted by sensor D arrive at B via two pathways, D-A-B and D-C-B. One or the other must be eliminated. It does not matter which. Note that eliminating a pathway is not enough to prevent the closed loop from forming again. In the diagram above, either pattern neuron A or C should be barred permanently. That is to say, an offending pattern neuron should not be destroyed but simply prevented from forming output connections. This prevents the learning process from repeating the same mistake. In the brain, pattern pruning is done during REM sleep because it would interfere with sensory perception during waking hours. In a computer program, it can be done instantly even during learning.
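The redundancy check described above can be sketched directly from the D-A-B / D-C-B example. The extra sensors E and F are my own additions so that every pattern neuron has its two required inputs; the article does not specify them.

```python
def sources(node, wiring):
    """All sensors reachable downward from `node`, counted with multiplicity.
    A node absent from `wiring` is a sensor and has no inputs of its own."""
    if node not in wiring:
        return [node]
    result = []
    for inp in wiring[node]:
        result.extend(sources(inp, wiring))
    return result

def has_closed_loop(node, wiring):
    """A duplicate source means some sensor reaches `node` via two pathways."""
    srcs = sources(node, wiring)
    return len(srcs) != len(set(srcs))

# Sensor D feeds both A and C, which both feed B: the forbidden closed loop.
wiring = {'A': ['D', 'E'], 'C': ['D', 'F'], 'B': ['A', 'C']}
print(has_closed_loop('B', wiring))  # True: D reaches B via D-A-B and D-C-B
print(has_closed_loop('A', wiring))  # False
```

A pruning pass would run this check on every neuron and permanently bar one of the offending neurons from making output connections, as described above.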

Pattern Detection

Intuitively, one would expect a pattern neuron to recognize a pattern if all of its input signals arrive concurrently. But, strangely enough, this is not the way it works in the brain. The reason is that patterns are rarely perfect due to occlusions, noise pollution and other accidents. Uncertainty is a major problem that has dogged mainstream AI for decades. The customary solution in mainstream AI is to perform probabilistic computations on sensory inputs. However, this is out of the question as far as the brain is concerned because its neurons are too slow. The brain uses a completely different and rather clever solution and so should we.

Pattern recognition is a cooperative process between pattern memory and sequence memory. During detection, all sensory signals travel rapidly up the pattern hierarchy and continue all the way up to the top sequence detectors of sequence memory where actual recognition decisions are made. If enough signals reach a top sequence detector in the sequence hierarchy, they trigger a recognition event. The sequence detector immediately fires a recognition signal that travels all the way back down to the source pattern neurons which, in turn, trigger their own recognition events. Thus a pattern neuron recognizes its pattern, not when its input signals arrive, but upon receiving a feedback signal from sequence memory. This way, a pattern neuron can recognize a sensory pattern even if the pattern is imperfect.
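The feedback scheme just described can be caricatured in a few lines. The voting scheme and the 0.6 threshold are my own simplifications standing in for the top-level sequence detector; the article does not give a formula.

```python
def recognize(pattern_neurons, active_sensors, threshold=0.6):
    """Return the pattern neurons confirmed by top-down feedback."""
    # Bottom-up pass: count how many sensory signals reached the top.
    votes = sum(1 for n in pattern_neurons for s in n if s in active_sensors)
    total = sum(len(n) for n in pattern_neurons)
    # Top-level decision: did enough signals arrive overall?
    if votes / total < threshold:
        return []  # no recognition event
    # Feedback pass: every contributing pattern neuron fires its own
    # recognition event, even if its pattern was imperfect.
    return [n for n in pattern_neurons if any(s in active_sensors for s in n)]

patterns = [('s1', 's2'), ('s3', 's4'), ('s5', 's6')]
# Sensor s4 is occluded, yet all three patterns are recognized via feedback.
print(recognize(patterns, {'s1', 's2', 's3', 's5', 's6'}))
```

The point of the sketch is that the second pattern is confirmed despite its missing input, because the decision is made at the top and fed back down.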

Coming Soon

In an upcoming article, I will explain how to do unsupervised learning in sequence memory. This is where the really fun stuff happens. Hang in there.

See Also:

Unsupervised Machine Learning: What Will Replace Backpropagation?
AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

Wednesday, September 20, 2017

Unsupervised Machine Learning: What Will Replace Backpropagation?

The Great Awakening?

At long last, the AI research community is showing signs of waking up from its decades-old, self-induced stupor. Deep learning pioneer Geoffrey Hinton has finally acknowledged something that many of us with an interest in the field have known for years: AI cannot move forward unless we discard backpropagation and start over. What took him so long? Certainly, the deep learning community can continue on its merry way, but there is no question that AI research must retrace its steps back to the beginning and choose a new path. In this article, I argue that the future of machine learning will be based on the precise timing of discrete sensory signals, aka spikes. Welcome to the new age of unsupervised spiking neural networks.

The Problem With Backpropagation

The problem with backpropagation, the learning mechanism used in deep neural nets, is that it is supervised. That is to say, the system must be told when it makes an error. Supervised neural nets do not learn to classify patterns on their own. A human or some other entity does the classification for them. The system only creates algorithmic links between given patterns and given classes or categories. This type of learning (if we can call it that) is a big problem because we must manually attach a label (class) to every single pattern the system must classify and every label can have hundreds if not thousands of possible patterns.

Of course, anybody with a lick of sense knows that this is not how the brain learns. We do not need labels to learn to recognize anything. Backpropagation would require a little homunculus inside the brain that tells it when it activates a wrong output. This is absurd, of course. Reinforcement (pain and pleasure) signals cannot be used as labels since they cannot possibly teach the brain about the myriad intricacies of the world. The deep learning community has no idea how the brain does it. Strangely enough, some of their most famous experts (e.g., Demis Hassabis) still believe that the brain uses backpropagation.

The World Is Its Own Model

Loud denials notwithstanding, supervised deep learning is just the latest incarnation of symbolic AI, aka GOFAI. It is a continuation of the persistent but deeply flawed idea that an intelligent system must somehow model the world by creating internal representations of things in the world. As the late philosopher Hubert Dreyfus was fond of saying, the world is its own model. Unlike a neural net which cannot detect a pattern unless it has been trained to recognize it (it already has a representation of it in memory), the adult human brain can instantly see and understand an object it has never seen before. How is that possible?

This is where we must grok the difference between a pattern recognizer and a pattern sensor. The brain does not learn to recognize complex patterns; it learns how to sense complex patterns in the world directly. To repeat, it can do so instantly even if it has never encountered them before. Unless a sensed pattern is sufficiently rehearsed, the brain will not remember it. And if it does remember it, the memory is fuzzy and inaccurate, something that is well-known to criminal lawyers: eyewitness accounts are notoriously unreliable. But how does the brain do it? One thing is certain: we will not solve the perceptual learning problem unless we get rid of our representationalist baggage. Only then will the scales fall off our eyes so that we may see the brain for what it really is: a sensory organ connected to a motor organ and controlled by a motivation organ.

The Critic Is In the Data

How does the brain learn to see the world? Every learning system is based on trial and error. The trial part consists of making guesses, and the error part is a mechanism that tells the system whether or not the guesses are correct. This error mechanism is what is known as a critic. Both supervised and unsupervised systems must have a critic. Since the critic cannot come from inside an unsupervised system (short of conjuring a homunculus), it can only come from the data itself. But where in the data? And what kind of data are we talking about? To answer these questions, we must rely on neurobiology.

How to Make Sense of the World: Timing

One of the amazing things about the cortex is that it does not process data in the programming sense. It does not receive numerical values from its sensors. The cortex only receives discrete signals or spikes. A spike is a discrete temporal marker that indicates that a change/event just occurred. It is not a binary value. It is a precisely timed signal. There is a difference. The brain must somehow find order in the spikes. Here is the clincher. The only order that can be found in multiple sensory streams of discrete signals is temporal order. And there can only be two kinds of temporal order: the signals can be either concurrent or sequential.

This here is the key to unsupervised learning. In order to make sense of the world, the brain must have the ability to time its sensory inputs. In this light, the brain should be seen as a vast timing mechanism. It uses timing for everything, from perceptual learning to motor behavior and motivation.

Coming Soon

In my next article, I will explain how sensors generate spikes and how the brain uses timing as the critic for fast and effective unsupervised learning. I will also explain how it creates a fixed set of small elementary concurrent pattern detectors/sensors as the building blocks of all perception. It uses the same elementary pattern sensors to sense everything. It also uses cortical feedback to handle uncertainty in the sensory data. Hang in there.

See Also:

Fast Unsupervised Pattern Learning Using Spike Timing
AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

Saturday, September 16, 2017

AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years

This Bothers Me

This is just a short post to point out how progress in science and technology can be held back by those who set themselves up as its leaders. Artificial intelligence pioneer Geoffrey Hinton now says that we should discard backpropagation, the learning technique used in deep neural nets, and start over. This bothers me because I and many others have been saying this for years. Some of us, including Jeff Hawkins, have known that this was not the way to go since the 1990s. Here is an article I wrote about this very topic back in 2015: Why Deep Learning Is a Hindrance to Progress Toward True AI.

Demis Hassabis, the Champion of Backpropagation

What is amazing about this is that Geoffrey Hinton is a famous Google employee (engineering fellow) and AI expert. He is now directly contradicting Demis Hassabis, another famous Google employee and co-founder of DeepMind, an AI company that has been acquired by Google. Hassabis and his team at DeepMind recently published a peer-reviewed paper in which they suggested that backpropagation is used by the brain and that their research may uncover biologically plausible models of backprop. I wrote an article about this recently: Why Google's DeepMind Is Clueless About How Best to Achieve AGI.

I find the whole thing rather annoying because these are people who are paid millions of dollars to know better. Oh, well.

See Also:

Unsupervised Machine Learning: What Will Replace Backpropagation?
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

Tuesday, August 29, 2017

Occult Physics Will Blow Your Mind


According to ancient occult physics, the electron is not elementary but consists of four subparticles. We exist in an immense 4-dimensional sea of energy arranged like a crystal lattice. This means unlimited clean energy, free for the taking once we learn how to tap into the lattice. The entire history of the universe is being recorded in the lattice. Ancient megalithic societies may have used this knowledge to transport huge quarried stones weighing 1000 tons or more. This is the first in a series of articles that I am writing on occult physics. I cannot promise that I will ever publish them all but, if or when I do, I can guarantee that they will blow everyone's mind.

Sacred Scientific Knowledge Hidden in Plain Sight

Many years ago, I stumbled on an amazing discovery. It occurred to me that a few ancient occult texts contained revolutionary scientific secrets about the fundamental principles of the physical universe. The secrets can be found in the books of Isaiah, Ezekiel and Revelation. They are written in an obscure metaphorical language that sounds nothing like science. However, once one understands the meaning of some of the metaphors, things begin to fall into place. At one point in my research, I became frightened and stopped thinking about it for a long time. I had concluded that the potential harm to humanity that this knowledge could unleash if it fell in the wrong hands was just too great.

Assyrian Lamassu or Human-Headed Winged Bull - Southern Iraq
Most ancient societies recorded their sacred wisdom in precisely chosen metaphors that only the initiates understood. The Sumerians, Babylonians, Assyrians and Egyptians thought that certain occult sciences were so powerful that they erected huge symbolic stone monuments to preserve them for posterity while keeping their true meaning hidden from the masses.
Two Human-Headed Winged Bulls - Iran
Although the Biblical symbols are not identical to the ones found in Mesopotamia, the many similarities are striking. Both use images of wings, discs (wheels), bulls, lions, eagles, hands, feet and faces to symbolize various aspects of the sacred knowledge.

Sumerian Anunnaki Winged God and Disc
For whatever reason, historians and archaeologists love to associate ancient occult symbology with mythology and religious superstition, but they could not be more wrong. It is almost as if some hidden power is hellbent on preventing mankind from learning about its glorious past. None other than Isaac Newton, the father of modern physics, was convinced that there was secret knowledge encrypted in the Bible and in other ancient mythological writings. (Sources: What Was Isaac Newton's Occult Research All About? and Top 10 Crazy Secrets of Isaac Newton).

In my opinion, the Biblical seraphim and cherubim are occult descriptions of fundamental particles of matter and their properties (Sir Isaac would have jumped for joy if he had known about this). I believe this knowledge was known to ancient megalithic societies in Mesopotamia, Egypt, South America and elsewhere because it was the basis of the technology that they used to lift and transport huge cut stones weighing 1000 tons or more. I believe that a mastery of this knowledge will unleash an era of free unlimited clean energy and super fast transportation.

Stone of the Pregnant Woman - Baalbek, Lebanon
What follows is a short summary of the strange "living creatures" mentioned in the Bible and my interpretations. Note: I will not go into what I believe to be potentially dangerous aspects of this research.

Seraphim - Photons

Seraphim (singular, seraph) is a plural Hebrew word that means the shining or burning ones. They are mentioned in the books of Revelation and Isaiah. They symbolize pure energetic particles and their properties. I have identified them as photons. There are 4 types of seraphim and each one has a different face property: man, lion, bull or eagle. One of the seraphim (the one with the bull's face) is responsible for electric phenomena and the other three for magnetic phenomena. The face of each seraph is associated with one of the 4 spatial dimensions (degrees of freedom) of the cosmos. Each face has 2 possible states or orientations, forward or backward. It is more or less equivalent to what quantum physicists call the "spin angular momentum" of a particle, except that there really is no spin.

In all, the seraphim can have 8 possible orientations or spin states, 2 for each face. Two of the orientations, the ones associated with the face of a bull, determine whether or not the particle is involved with a positive or negative electric field. The other 6 states are responsible for magnetic phenomena.

Every seraph has energy properties which are symbolized by 6 wings. Unlike cherubim (explanation below), seraphim have no bodies or mass. Two of the wings of a seraph are used for motion, two are associated with its face and two with its feet. Yes, all matter particles have a property called feet (bull or calf hooves) which allow them to move in one direction of the 4th dimension at the speed of light. Wings, feet and hands are powerful metaphors the meaning of which I cannot expand on at this time. I will explain them further in future articles.

The Sea of Crystal - Zero-Point Energy

The most amazing thing about seraphim is that they are the constituents of an enormous 4-dimensional "sea of crystal" or "sea of glass" in which the normal matter of the universe exists and moves. It is a sea of wall-to-wall energetic particles (photons), lots of it, arranged as a stationary 4-dimensional lattice. We are totally immersed in it like fish in water and nothing could move without it. In fact, the entire visible universe is continually moving in the lattice in one dimension (bull) at the speed of light. As matter moves in the lattice, it leaves traces in it. In other words, the entire history of the universe is continually being recorded in the lattice down to the minutest details. Ancient Hindu and Buddhist societies were aware of this recording medium which was called the Akasha. Modern theosophists call it the Akashic records.

The closest analog to the lattice in modern physics is the so-called zero-point energy field that physicists believe permeates space but have no idea what it is made of or what its purpose is. Physicist Richard Feynman is reported to have said that "there is enough energy inside the space in an empty cup to boil all the oceans of the world." Gravitational, electric and magnetic phenomena are caused by the motion of matter in the lattice. Again, in the not too distant future, society will learn how to tap into the lattice for unlimited clean energy production and super fast transportation. Current forms of transportation and energy production will become obsolete.

Cherubim - Quarter Electrons

Cherubim (singular, cherub) are symbolic winged creatures that modern theologians wrongly associate with angelic beings that fly around and do God's will. The Hebrew word cherubim is derived from the Assyrian term chiribu or kirubi which was the mystical name given to the representation of a winged bull or lion with a man's head. Various types of cherubim are mentioned in the Bible but my research is concerned strictly with the 4 cherubim (living creatures) in chapters 1 and 10 of the Book of Ezekiel. In chapter 10, verse 14, Ezekiel clearly equates the Hebrew word cherub with the face of a bull. He said nothing about angels.

Each living creature or cherub has 4 faces and 4 wings. Each also has a human body, 4 human hands and the feet of a bull. Having 4 faces means that a cherub has both electric and magnetic properties. All four cherubim move together in unison without turning, in any of the 4 dimensions.

My interpretation will come as a surprise. In my view, the cherubim are the 4 particles that comprise the electron or the positron. Yes, the electron is not an elementary particle as the Standard Model of particle physics would have us believe. Each cherub has 1/4 the charge of the electron. But this is not as surprising as it sounds. Physicists have known for some time that the electron is not truly elementary, but they are a conservative and highly political bunch. Rather than come out and acknowledge the composite nature of the electron, they have taken to calling its constituent particles quasiparticles instead. They also use the term quarter electron when they are feeling more liberal.

The 4 human hands of a cherub are special properties that confine them to stay and move together as one particle: they hold onto each other. The body of a cherub is a special kind of energy that physicists call mass. Each cherub also has a wheel or disc associated with it. The 4 wheels act as one wheel and move precisely with the 4 cherubim. In my interpretation, the wheel represents the electric field of the electron.

Coming Soon

In future blog articles, I will explain how particles move in the lattice and how the electric field of a charged particle works.

See Also:

Ezekiel 1: The Four Living Creatures, the Four Wheels and the Crystal Firmament
Ezekiel 10: The Four Cherubim and the Four Wheels
Isaiah 6: The Four Seraphim
Revelation 4: The Four Beasts and the Sea of Crystal
Physics: The Problem With Motion
There Is Only One Speed in the Universe, the Speed of Light. Nothing Can Move Faster or Slower

Tuesday, July 25, 2017

Why Google's DeepMind Is Clueless About How Best to Achieve AGI


In this article, I argue that DeepMind's stated goal of achieving artificial general intelligence (AGI) is hopelessly misguided. I further argue that the blame can be laid at the feet of its co-founder, Demis Hassabis.

Hammer and Nails

In a recent paper published in the neuroscience journal Neuron, Demis Hassabis and members of his team at Google's DeepMind argued that progress in AI will benefit from studying how the brain works. While there is nothing controversial about this, Hassabis et al strongly defend the hypothesis that backpropagation, the mechanism of learning in supervised deep neural networks, is also used by the brain. Here is a quote from the paper, emphasis added:
A different class of local learning rule has been shown to allow hierarchical supervised networks to generate high-level invariances characteristic of biological systems, including mirror-symmetric tuning to physically symmetric stimuli, such as faces (Leibo et al., 2017). Taken together, recent AI research offers the promise of discovering mechanisms by which the brain may implement algorithms with the functionality of backpropagation. Moreover, these developments illustrate the potential for synergistic interactions between AI and neuroscience: research aimed to develop biologically plausible forms of backpropagation have also been motivated by the search for alternative learning algorithms.
Hassabis believes that sensory learning in the brain is supervised. Why would a world-renowned AI expert believe in something so absurd? The answer is twofold. First, his knowledge of neuroscience is rather lacking, since most knowledgeable neuroscientists know that cortical learning is unsupervised. Second, supervised learning is the only effective type of learning that Hassabis is aware of. His entire perspective on AI is built on supervised learning driven by reinforcement signals. In other words, when all you have is a hammer, everything looks like a nail.

Cortical Feedback Is Not Backpropagation

The brain uses lots of feedback signals. There are feedback pathways from the top level of the sequence hierarchy in the cortex down to the first or entry level. But it does not stop there. The feedback pathways continue even further down into the thalamus, where the brain's sensory pattern hierarchy resides. Hassabis and his team are obviously confusing feedback pathways with backpropagation.

Cortical feedback is used only during the recognition process and has nothing to do with learning. It is not backpropagation. Backpropagation is something that is used in a deep neural network as a way to propagate an error signal from the output layer down to the first layer of the network during pattern learning. Backpropagation is an integral part of supervised learning. Unfortunately for Hassabis and deep learning experts, this is not the way the brain learns. Cortical learning is 100% unsupervised and is strictly based on signal (spike) timing.

Hassabis Is Clueless About Learning in the Brain and About How to Achieve AGI

Demis Hassabis pretends to know a thing or two about neuroscience but continues to insist that supervised sensory learning has a role to play in the brain and AGI. This is absurd. Obviously Hassabis has never studied the organization and operation of the human retina or the cochlea. If he had, he would know that the eye is nothing like a camera and that the ear is nothing like a microphone. More importantly, he would know that timing, not backpropagation, is the basis of learning in the brain.

Every learning mechanism is based on trial and error. As such, it must have a critic, i.e., a way to correct errors. DeepMind's roadmap to artificial general intelligence (AGI) consists of using reinforcement signals (pain and pleasure) as the only critic for learning. This is wrong in so many ways. Reinforcement signals cannot possibly teach the brain how to understand the intricacies of the world around it.

Note that, even without a background in neuroscience, anybody with a modicum of common sense can tell that humans learn almost anything about their environment without supervision. We don't need a label to tell us how to recognize anything. We can learn to recognize objects and sounds without reinforcement, directly from the sensory data. So how did Hassabis gain his fame as an AI pioneer while being so clueless? Answer: He did it by using the deep learning inventions of others in various narrow domain applications (mostly game playing) as a way to make a name for himself. Hassabis is clueless about how to achieve AGI. He is a charlatan. Soon he will be just a footnote in the history of AI.

The Danger of the Cult of Materialism

The reader may ask, why am I so harsh on Demis Hassabis? The answer is that Hassabis and almost everyone else in the AI community are materialists. That is, they believe and teach others to believe in all sorts of pseudoscientific dogmas that support their core doctrine that God does not exist. For example, they believe that matter is all there is, that the universe created itself, that life emerged out of dirt all by itself, that they can gain immortality by transferring the contents of their brains to a computer and that computers can achieve consciousness by some unexplainable magic called emergence.

In my opinion, materialists are not just crackpots and pseudoscientists. They are a formidable danger to humanity in this impending age of artificial general intelligence. Their ultimate goal is to eradicate traditional religions by force, if necessary. Materialism is now a full-blown machine-worshipping cult whose members preach that intelligent machines should be treated as sentient beings and be given legal rights similar to human rights. If a significant percentage of mankind begins to worship machines as conscious agents or saviors, we are doomed. What I am saying is that materialism is just as evil and dangerous as the other religions of the world, and possibly even more so.


My goal is to show that the materialist elite is not as knowledgeable or as intelligent as they think they are or as they want others to believe. In fact, in many respects, the level of their stupidity is mind-boggling. Demis Hassabis is a case in point. Nothing about AI research brings a bigger smile to my face than my conviction that materialists have as much chance of figuring out AGI as my dog. Of this, I am certain.

See Also:

The Missing Link of Artificial Intelligence
Why We Have a Supernatural Soul
Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI

Monday, July 24, 2017

LIGO Is a Billion-Dollar Scam Based on Bullshit Physics

Note: I will repost this article periodically because I believe that scamming the public to the tune of billions of dollars is unacceptable. The original article is still available.

Scientists Must Be Made Accountable

I made this argument elsewhere but I thought that it was so damaging to mainstream physics and so important to the integrity of science that it deserved its own post. We are being taken to the cleaners by a well-paid group of people whose job it is to come up with the best science that money can buy. Instead, they feed us lie after lie and they spend billions and billions of our money in the process. When they are caught in an outrageous lie, they create even more elaborate and expensive lies to cover it up. The billion-dollar LIGO project scam is a case in point. We must demand accountability from our scientists.

Relativist Pseudoscience

Relativity is a local theory. That is to say, it forbids action at a distance. While Newtonian gravity assumes that gravity acts instantaneously at a distance, General Relativists insist that gravity is propagated at the speed of light. The problem is that a finite speed of gravity would result in unstable orbits. It is a big problem indeed. Relativists claim that GR addresses our legitimate concern about the finite speed of gravity. They then go through an amazing exercise in pseudoscience, bad logic and superstition to explain how GR gets around the problem.

They argue that, by some unknown magic, the sun communicates information regarding its velocity relative to the earth and all other bodies in the universe. This information propagates at the speed of light. This way, the other bodies can somehow (more magic) read the information and more or less guess where the sun is even though they receive the information some time after it was sent. Earth receives the information about 8 minutes after emission. Of course, relativists decline to explain how this information is encoded, transmitted and how the other bodies detect it. They just write some equations and voila! That's the magic part. This part of the theory is strangely immune to falsification. Not one experiment is offered to determine the veracity of the hypothesis. They are essentially telling us with a straight face that they somehow know that gravity acts as if it were instantaneous (the Newtonian assumption) even though they know it isn't. This cannot be tested because the results of GR are the same as the Newtonian results. It is pure pseudoscience. But it gets worse, much worse.

Not Even Wrong

It is a laughably self-contradicting argument simply because there is no way that the sun can "know" about its velocity relative to any other body so as to transmit it to any of them. The problem has to do with the word ‘relative’. It is a problem with all observer-centric, relativity-based, local theories because the word ‘relative’ implies instantaneous knowledge between distant bodies even though such knowledge is forbidden by the local nature of the theory: nothing can move faster than the speed of light. So general relativists are breaking their own rule. On the one hand, they are saying that information must travel at or below the speed of light and this is why changes in gravity must travel at the speed of light. On the other hand, they are using instantaneous information to determine the relative velocity between distant bodies. This is not even wrong. And yet, this stinking pile of bullshit is what the ongoing LIGO project is based on. The public is being forced to pay for a scam but the scammers have found a way to remain immune to public scrutiny. This must stop. Someone needs to blow a loud whistle in order to unmask this bullshit.

Malevolent Alien Takeover?

The relativist argument for the finite speed of gravity is so painfully contrived and so wrong on the face of it that I am tempted to conclude that the physics community has been taken over by a malevolent alien entity hellbent on making humans look and act stupid. We fund scientific research with our money. It is ours. We own it. It is time to kick out the charlatans and bullshitters that have taken control of it.

See Also:

Does Gravity Travel at the Speed of Light? (in which relativist Steven Carlip admits that the GR hypothesis cannot be tested)
Why Einstein's Physics Is Crap
Why Steven Carlip Is Mistaken about the Speed of Gravity or Why LIGO Is Still a Scam
Why LIGO Is a Scam

Sunday, June 25, 2017

The Thalamus Uses a 10-Step Method to Learn Sensory Patterns. How Do I Know This?


I have a few extraordinary claims to make about the thalamus. I say "extraordinary" because they describe certain functional aspects of it that are unknown to neurobiologists. I know, for example, that it contains a 10-level sensory pattern hierarchy and that it uses a 10-step method to learn new patterns. I know that it needs timing signals from the hippocampus and that it needs to be pruned during sleep. I know this, not because I learned it from the scientific literature (most neurobiologists believe the thalamus is mostly a relay center for sensory signals on their way to the cerebral cortex), but because I found out about it from a couple of very old occult books known as Revelation and Zechariah. Years ago, I discovered that the books of Revelation and Zechariah contained revolutionary scientific knowledge about the brain and consciousness written in a metaphorical language intended to hide their true meaning. That is, until now.

The First and the Last: Sensory Signals

Chapters 2-3 of the book of Revelation contain seven metaphorical letters or messages to seven symbolic Churches in Asia. Each message symbolizes a different functional aspect of the brain. I have identified the message to the Church of Smyrna (Rev 2:8-11) as pertaining to sensory processing in the thalamus. The message begins as follows (emphasis added):
Rev 2:8. And to the angel of the church in Smyrna write, These things says the First and the Last, who was dead, and came to life:
The "first and the last" is a powerful metaphor. It means that only input signals that are emitted at the onset or offset of a stimulus are used by the thalamus. For example, the waveform in the illustration below represents a varying stimulus such as audio volume or light intensity. The horizontal line represents a given amplitude level. A and B are onset and offset sensors for that amplitude. Each fires a single pulse when the stimulus crosses its amplitude level in a specific direction: up or down.
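To make the idea concrete, here is a minimal sketch in Python of a pair of onset/offset sensors for a single amplitude level. The function name and representation are my own; the article only specifies the behavior: a positive sensor fires once on an upward crossing and a negative sensor fires once on a downward crossing.

```python
def sense(samples, level):
    """Return a list of (index, kind) spikes for one amplitude level.
    kind is 'onset' for an upward crossing (positive sensor) and
    'offset' for a downward crossing (negative sensor)."""
    spikes = []
    for i in range(1, len(samples)):
        prev, curr = samples[i - 1], samples[i]
        if prev < level <= curr:        # upward crossing: positive sensor fires
            spikes.append((i, 'onset'))
        elif prev >= level > curr:      # downward crossing: negative sensor fires
            spikes.append((i, 'offset'))
    return spikes

# A waveform that rises above the 0.5 level twice, as in the diagram:
waveform = [0.0, 0.4, 0.9, 0.6, 0.2, 0.7, 1.0, 0.3]
print(sense(waveform, 0.5))
# [(2, 'onset'), (4, 'offset'), (5, 'onset'), (7, 'offset')]
```

A full system would run one such sensor pair per amplitude level, which is how the onset sensor ends up firing twice in the example above.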

The text makes a distinction between "poor" sensors (onset and offset) and "rich" sensors. Rich sensors fire continually as long as the stimulus is above a certain level. The thalamus uses only poor signals as revealed symbolically in verse 9:
Rev 2:9. I know your works, tribulation, and poverty (but you are rich); and I know the blasphemy of those who say they are Jews and are not, but are a synagogue of Satan.
This is a different topic but suffice it to say that rich signals are sent to the cerebellum which is symbolized by those who say they are Jews and are not. The cerebellum is described in the message to the gentile church of Laodicea.

Ten Days of Tribulation, Death and Resurrection: Pattern Learning

Pattern learning in the thalamus is fully unsupervised. That is to say, unlike deep learning networks, it does not require that the patterns be labeled during learning. It discovers the patterns automatically from the sensory data. It consists of searching for discrete signals that frequently arrive concurrently. Let us take another look at verse 8:
Rev 2:8. And to the angel of the church in Smyrna write, These things says the First and the Last, who was dead, and came to life:
The phrase "who was dead, and came to life" is important because it explains an essential aspect of pattern learning: if a sensory input fails a test, it is immediately disconnected (it dies) but is quickly reconnected (resurrected) elsewhere. The actual test is explained in verse 10:
Rev 2:10. Do not fear any of those things which you are about to suffer. Indeed, the devil is about to throw some of you into prison, that you may be tested, and you will have tribulation ten days. Be faithful until death, and I will give you the crown of life.
The most important metaphor in this verse is the word "days". My understanding is that a day symbolizes the shortest working interval, which is approximately 10 milliseconds in the brain. The only test one can conduct in a single interval (or day) is a concurrency test. The phrase "you will have tribulation ten days" simply means that input sensory connections are tested 10 times in a row for concurrence. If they fail the test even once, they are disconnected. If they pass all 10 steps, the connections become permanent.
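The rule above can be sketched in a few lines of Python. The class and attribute names are my own invention; the article specifies only the rule itself: 10 consecutive successful concurrency trials make a candidate connection permanent, and a single failure disconnects it (so that it can be tried again elsewhere).

```python
REQUIRED_HITS = 10  # "ten days of tribulation"

class CandidateConnection:
    """A candidate sensory input connection undergoing the 10-step test."""

    def __init__(self):
        self.hits = 0
        self.permanent = False
        self.disconnected = False

    def test(self, concurrent):
        """Run one trial. `concurrent` is True when the input spike arrived
        in the same working interval (~10 ms) as the target's other inputs."""
        if self.permanent or self.disconnected:
            return
        if concurrent:
            self.hits += 1
            if self.hits >= REQUIRED_HITS:
                self.permanent = True   # passed all 10 trials in a row
        else:
            self.disconnected = True    # one failure: disconnect, retry elsewhere
```

For example, a connection that passes 10 trials in a row ends with `permanent == True`, while one that fails on any trial ends with `disconnected == True` regardless of earlier successes.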

The question that arises is this: why 10 steps? Why not 2 or 20? Keep in mind that this is a search for patterns that occur often enough to be considered above mere random noise. The choice of 10 steps is a compromise. Using fewer than 10 would risk learning useless noise, while using more than 10 would slow down the learning process.

Lands, Sprouts and Olive Trees: Pattern and Sequence Hierarchies

According to my interpretation of the book of Zechariah (chapters 3-6), there are two hierarchies (the two olive trees) in each hemisphere of the brain, one for patterns (the thalamus) and one for sequences (the cerebral cortex). The pattern hierarchy has 10 levels whereas the sequence hierarchy has 20. This is symbolized by the 10 by 20 cubits flying scroll metaphor in chapter 5. Every pattern detector at every level of the thalamic hierarchy sends its signals to the bottom or entry level of the sequence hierarchy. However, the pattern inputs to sequence memory are not connected willy-nilly. The entry level of the cerebral cortex is organized into different areas (lands) assigned to inputs from different sensory modalities and pattern levels.
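The structure described above can be sketched as a simple data model. This is my own illustrative representation, not the article's project code: a 10-level pattern hierarchy whose detectors all feed the entry level of a 20-level sequence hierarchy, with the entry level partitioned into areas by sensory modality and pattern level.

```python
PATTERN_LEVELS = 10   # thalamic pattern hierarchy
SEQUENCE_LEVELS = 20  # cortical sequence hierarchy

class SequenceEntryLevel:
    """Entry level of the sequence hierarchy, organized into areas
    ("lands") assigned to different modalities and pattern levels."""

    def __init__(self):
        self.areas = {}  # (modality, pattern_level) -> list of detectors

    def connect(self, modality, pattern_level, detector):
        # pattern detectors are not connected willy-nilly: each goes
        # to the area matching its modality and hierarchy level
        self.areas.setdefault((modality, pattern_level), []).append(detector)

entry = SequenceEntryLevel()
for level in range(PATTERN_LEVELS):
    entry.connect('audio', level, f'pattern-detector-{level}')
```

This yields one area per (modality, level) pair, so every level of the pattern hierarchy projects to its own region of the cortical entry level.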

One of my more surprising findings is that actual pattern recognition does not occur in the pattern hierarchy, as one would be inclined to believe, but in the sequence hierarchy (not shown). When a sequence detector recognizes a sequence, it immediately sends a recognition signal down the sequence hierarchy, all the way down to the pattern detectors in the thalamus. This is accomplished by a mechanism that the book of Zechariah metaphorically refers to as the "branch" or the "sprout", "sprout" being the actual literal meaning in Hebrew. Indeed, feedback pathways are observed in both the cerebral cortex and the thalamus.

The House of the Thief: Pattern Pruning

I could write an entire book on Zechariah alone. I'm forced to leave a lot of good stuff out because one or two blog articles could never do it justice. There is one aspect of pattern learning that I want to mention here. It has to do with pattern pruning. The pattern hierarchy must be pruned periodically to get rid of redundant connections. A redundancy is a closed loop in the hierarchy.

Looking at the diagram above, we see a closed loop formed by sensor D and the pattern neurons A, B and C. This is forbidden because signals emitted by sensor D arrive at B via two pathways, D-A-B and D-C-B. One or the other must be eliminated. It does not matter which. Note that eliminating a pathway is not enough to prevent the closed loop from forming again. In the diagram above, either pattern neuron A or C (whichever is younger) should be barred permanently. That is to say, an offending pattern neuron should not be destroyed but simply forbidden from forming output connections. This prevents the learning process from repeating the same mistake. In the brain, pattern pruning is done during REM sleep because it would interfere with sensory perception during waking hours. In a computer program, it can be done instantly even during learning.
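Detecting such a redundancy amounts to finding a node that is reachable from the same sensor by more than one pathway. Here is a minimal sketch in Python, using my own graph representation, with the D-A-B / D-C-B example from the diagram:

```python
from collections import defaultdict

def count_paths(graph, src, dst):
    """Count the distinct directed pathways from src to dst.
    A count greater than 1 means a redundant (closed) loop."""
    if src == dst:
        return 1
    return sum(count_paths(graph, nxt, dst) for nxt in graph[src])

# Sensor D feeds pattern neurons A and C, which both feed B:
graph = defaultdict(list)
graph['D'] = ['A', 'C']
graph['A'] = ['B']
graph['C'] = ['B']

redundant = count_paths(graph, 'D', 'B') > 1  # True: D-A-B and D-C-B
```

A pruning pass would then bar the younger of the two offending pattern neurons from forming output connections, as described above. (This naive path count assumes the hierarchy has no directed cycles, which holds for a feedforward pattern hierarchy.)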

The book of Zechariah uses the flying scroll metaphor (chapter 5) to describe pruning. In fact, it mentions two types of pruning: pattern pruning (thieves) and sequence pruning (liars). This is shown in verse 4:
Zech 5:4. I will make it go forth,” declares the Lord of hosts, “and it will enter the house of the thief and the house of the one who swears falsely by My name; and it will spend the night within that house and consume it with its timber and stones.”
Neither the thalamus nor the cerebral cortex has the necessary timing mechanisms to implement learning and pruning on their own. Zechariah's text suggests that the testing mechanisms reside elsewhere in the brain. The most likely place for them is the hippocampus which is known to generate all sorts of precisely timed spike trains.


I have been saying for years that true AI will arrive on the world scene suddenly and that it will come from an unexpected place, the one place that neither atheists nor believers would suspect. We are not there yet but the time is drawing close. Hang in there.

See Also:

Two Simple Rules Govern Goal-Oriented Motor Learning in the Brain. How Do I Know This? Part I
Contrary to Claims in the Scientific Literature, the Cerebellum Cannot Generate Speech. How Do I Know This?
Short-term Attention Span Lasts 12.6 s and it Takes 35 ms to Switch from one Subject to Another. How do I Know This?
200 Million Horsemen and the Corpus Callosum