Convolutional Neural Networks

Frank Doherty
5 min read · Feb 18, 2021

As you’re standing in the park, a beagle runs past you. In your head you think, “Oh, a dog.” How does your conscious brain recognize this animal as a dog rather than a deer, a plant, or a wolf? While it seems to be a purely biological process of understanding animals, this is also a computer science question: how is image classification distilled into a single coherent word, “dog”? This concept of categorization has puzzled computer scientists for decades. A simpler starting point, and a good way to build the fundamentals of machine learning, is to peer inside the cognitive process of numeral recognition. Looking at a piece of looseleaf paper, you see “3” etched onto the page. How does your brain determine this number to be a three rather than a seven, an eight, or a nine?

The answer is embedded deep within the inner workings of a relatively recent machine learning architecture: the convolutional neural network. Neural networks have given rise to a new realm of science deciphering what cognition is and how it operates. A neural network receives an input and transforms it through a series of hidden layers composed of “neurons”[1]. In a fully connected layer, each neuron is connected to every neuron in the previous layer. It can be confusing to hear “neurons” and picture the cells and axons of the brain; here, a neuron is simply a unit holding an activation, and its weighted connections determine how strongly the neurons in the next layer fire. The last fully connected layer is called the output layer, and it maps the input to its weighted output: “three”[2]. To be concise, the weights assign a probability to each possible output for a given input (probability = grayness in Fig. 1). Let’s return to our “3” recognition scenario and take numeral classification as an example. The etched “3” can be pixelated and fed into the network as a grid of rows and columns of pixel values, which the layers of neurons then decipher as the numeral “three”[3].

Fig 1. The probability of each output corresponding to the input
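The forward pass described above, pixels flowing through weighted, fully connected layers until the output layer assigns a probability to each digit, can be sketched in a few lines of plain Python. This is a minimal illustration with random, untrained weights and a made-up 4x4 "image"; a real digit classifier would learn its weights from data such as MNIST.

```python
import math
import random

random.seed(0)

def softmax(xs):
    # Turn raw output scores into probabilities that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dense(inputs, weights, biases):
    # One fully connected layer: every output neuron is a weighted
    # sum over every input neuron, plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A toy 4x4 "image", flattened into 16 pixel intensities.
image = [0.0, 1.0, 1.0, 0.0,
         0.0, 0.0, 1.0, 0.0,
         0.0, 1.0, 1.0, 0.0,
         0.0, 1.0, 1.0, 1.0]

# Random (untrained) weights: 16 inputs -> 8 hidden -> 10 outputs.
w1 = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(8)]
b1 = [0.0] * 8
w2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(10)]
b2 = [0.0] * 10

hidden = [max(0.0, h) for h in dense(image, w1, b1)]  # ReLU activation
probs = softmax(dense(hidden, w2, b2))

print(probs)                     # one probability per digit 0..9
print(probs.index(max(probs)))   # the digit this untrained net "fires" for
```

The ten numbers in `probs` are exactly the grayness values of Fig. 1: the network’s confidence in each possible digit.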

Machine learning has numerous implications for computer vision, for how objects and the in-depth overlay of our augmented world are deciphered. Computer vision uses the same design as convolutional neural networks: each pixelated unit is fed into the network as an input neuron (a circle in Fig. 1) and parsed through weighted neurons at each layer. The neurons are weighted so that outputs are accurate with respect to training data[4]. Building training data requires categorizing examples by labeling pictures as “deer”, “plant”, “wolf”, or “dog”. This allows the machine learning algorithm to align its output with the animal in the image. To go back to our beagle in the park, imagine the robot Chappie standing beside you. “Oh, a dog,” he says. “How’d you know that wasn’t a cat?” you respond. To understand the interwoven mechanics of Chappie’s mind, we need to peer into the convolutional network underpinning it. Have a look.
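The idea that labeled training data nudges the weights toward accurate outputs can be shown with the simplest possible trainable model: a single weighted neuron (logistic regression) learning to separate "dog" from "cat". The two features here, ear pointiness and snout length, are purely hypothetical stand-ins for whatever a real network would extract from pixels.

```python
import math

def sigmoid(z):
    # Squash a weighted sum into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical labeled training data: [ear_pointiness, snout_length],
# with label 1 = "dog" and 0 = "cat".
data = [([0.2, 0.9], 1), ([0.3, 0.8], 1),
        ([0.9, 0.2], 0), ([0.8, 0.3], 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.5

# Gradient descent: for each labeled example, nudge the weights so the
# neuron's output moves toward the correct label.
for _ in range(2000):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(predict([0.25, 0.85]))  # should be near 1 -> "dog"
print(predict([0.85, 0.25]))  # should be near 0 -> "cat"
```

A convolutional network does the same thing at vastly larger scale: millions of weights, all adjusted by the same label-driven error signal.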

Chappie’s mind

This may leave you wondering, “Am I merely a convolutional network?”, where pixelated images such as words on a page are inputted through the retina, and corresponding neurons embedded deep inside the brain’s plasticity fire to determine which syllables match the written phrases printed on the page in front of you. Don’t worry, you’re not. Maybe. Since I brought in Chappie, you may be wondering where convolutional neural networks are leading us in terms of AI. It’s frightening when you take the concept of Moore’s law and apply it to the realm of superconducting qubits scaling exponentially. However, let’s take a step forward and consider the benefits of these neural networks in a different field: biomedical engineering. Why create a race of insanely intelligent AIs when we can use this tech to design some of the 10²⁵⁰ undiscovered proteins[5]? Proteins that, when designed correctly, could read the genome of every cell within a biological being and detect cancer mutations. Proteins that could detect these mutations and then signal nano-bots that possess the capability to destroy each individual cell within a given tumor. The ability to operate superconducting qubits near 0 K gives us tremendous power to create biological proteins with nanotech capabilities. There are seven quantum computing companies using qubits, cooled in the superconducting case to roughly 0.0000001 K, to explore many computational possibilities at once[6]. That means that instead of computing on 0s and 1s alone, qubits compute on 0, 1, and a superposition of both states simultaneously.
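The claim that a qubit computes on “0, 1, and both states simultaneously” has a precise form: a qubit’s state is a pair of amplitudes whose squares give the measurement probabilities. A minimal sketch, simulating a single qubit and the standard Hadamard gate in plain Python (real quantum hardware or a library such as Qiskit would be used in practice):

```python
import math

# A qubit state is a pair of amplitudes (a, b): |a|^2 is the probability
# of measuring 0, |b|^2 the probability of measuring 1.
zero = (1.0, 0.0)  # the classical bit 0

def hadamard(state):
    # The Hadamard gate maps a basis state into an equal superposition
    # of 0 and 1 -- the "both states simultaneously" described above.
    a, b = state
    s = 1.0 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

plus = hadamard(zero)
p0, p1 = plus[0] ** 2, plus[1] ** 2
print(p0, p1)  # both ~0.5: equal chance of measuring 0 or 1
```

Applying Hadamard again returns the state to `zero`, which is the part classical probability cannot imitate: the amplitudes interfere rather than merely mix.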

These companies are Xanadu, Rigetti, IonQ, Google Quantum AI, IBM Q, D-Wave, and Zapata. From here forward, these tools could help build proteinic structures comprised of molecular machinery. Peer deep into genetics and the genome’s bases G, T, A, and C can be translated to the two-bit codes 00, 01, 10, and 11; and going back to the undiscovered proteins, there are 10²⁵⁰ possible molecular machines that could operate inside and between cells. That’s a 1 with 250 zeros behind it. A lot of genome editing. These proteins could be engineered through convolutional networks built on the same principle of weighted neurons. And even then, that’s not as far down as the rabbit hole goes. There’s still CRISPR-Cas9. CRISPR allows lab technicians to modify the genome of any cell. With this technology, we unlock the power to treat a plethora of diseases: kids battling osteogenesis imperfecta, men and women suffering through ALS, the degenerative disease of motor neurons, and the destruction of cells carrying a cancer mutation. The rabbit hole goes even further when discussing CRISPR for whole organisms. That might not sound like much, but with an organism such as algae, the genome can be modified in such a way that the algae can be converted into a biofuel. No more natural gas, no more oil; simply a biofuel derived from plants.
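The base-to-bit translation mentioned above (G, T, A, C as 00, 01, 10, 11) is easy to make concrete: each base packs into two bits, four bases per byte. A small sketch using that exact mapping:

```python
# The mapping from the text: G=00, T=01, A=10, C=11.
CODES = {"G": 0b00, "T": 0b01, "A": 0b10, "C": 0b11}
BASES = {v: k for k, v in CODES.items()}

def encode(seq):
    # Pack each base into 2 bits, most significant base first.
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODES[base]
    return bits

def decode(bits, length):
    # Unpack 2 bits per base, reading from the most significant end.
    out = []
    for i in range(length):
        out.append(BASES[(bits >> (2 * (length - 1 - i))) & 0b11])
    return "".join(out)

packed = encode("GATC")      # 00 10 01 11 -> 0b00100111 -> 39
print(packed)                # 39
print(decode(packed, 4))     # "GATC"
```

This 4x compression over one byte per character is why genomic file formats store sequences in packed two-bit form; the same encoding makes a genome a natural input for a neural network.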

[1] S. Albawi, T. A. Mohammed and S. Al-Zawi, “Understanding of a convolutional neural network,” 2017 International Conference on Engineering and Technology (ICET), 2017.

[2] N. Khean et al., “The Introspection of Deep Neural Networks - Towards Illuminating the Black Box: Training Architects Machine Learning via Grasshopper Definitions,” 2017.

[3] 3Blue1Brown, “But what is a Neural Network? | Deep learning, chapter 1,” 2017.

[4] M. Courbariaux, “Binarized Neural Networks: Training Neural Networks with Weights,” Mar. 2017.

[5] Filho, “H NMR spectra dataset and solid-state NMR data of cowpea (Vigna unguiculata),” Feb. 2017.

[6] G. Rose, “Geordie Rose of Kindred AI presents Super-intelligent qubits on Earth,” Jul. 2016.
