A memristor (a portmanteau of memory resistor) is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. It was described and named in 1971 by Leon Chua, completing a theoretical quartet of fundamental circuit elements that also comprises the resistor, capacitor, and inductor. Before Chua's work, only three fundamental passive elements (R, L, and C) were recognized. Resistance (R) arises from the relationship between voltage and current (R = V/I), capacitance (C) from the relationship between charge and voltage (Q = CV, so C = Q/V), and inductance (L) from the relationship between magnetic flux and current (L = flux/I). The element linking magnetic flux and charge is the memristor: M = flux/charge = (flux/time)/(charge/time) = V/I.
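For reference, the defining relations of the four elements described above can be written compactly in differential form (these are the standard textbook definitions, not device-specific expressions):

```latex
R = \frac{dV}{dI}, \qquad
C = \frac{dq}{dV}, \qquad
L = \frac{d\varphi}{dI}, \qquad
M = \frac{d\varphi}{dq} = \frac{d\varphi/dt}{dq/dt} = \frac{V}{I}
```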
Recently, the operation of RRAM devices has been linked to the memristor concept. From the mathematical relationships above, memristors are expected to behave as follows. The electrical resistance of a memristor is not constant; it depends on the history of the current that has previously flowed through the device. That is, the present resistance of a memristor depends on how much charge has flowed between its two terminals in the past, and in which direction. Because the element memorizes this history, it is non-volatile: when power is lost, the memristor retains its most recent resistance until power is restored.
A schematic diagram of filament-type and gradual-type switching, the two typical resistance-switching mechanisms of resistive memory devices that can be used as memristors.
Resistance-change memory stores information by exploiting resistance-switching phenomena that occur in various insulators. Its basic structure is a metal-insulator-metal (MIM) stack, with an insulator between the top and bottom electrodes.
There are several theoretical models that can explain the switching behavior of memristors, or resistance memory devices. Some of the most well-known models include:
1. The Space Charge Limited Current (SCLC) model: This model explains the switching behavior of memristors in terms of the movement of ions in the device, which creates a space charge that modulates the flow of current and changes the resistance of the memristor.
2. The Filamentary Switching model: In this model, the switching behavior of the memristor is explained in terms of the formation and movement of conductive filaments within the device.
3. The Drift-Diffusion model: This model explains the switching behavior of memristors in terms of the drift and diffusion of charged species within the device.
4. The Metal-Insulator-Metal (MIM) model: This model explains the switching behavior of memristors in terms of the formation of a thin metal-insulator-metal (MIM) structure within the device.
These models can provide different perspectives on the switching behavior of memristors and can be useful in understanding the underlying physical mechanisms that govern their behavior. However, it is important to note that the switching behavior of memristors is still not fully understood, and further research is needed to fully explain the rich and complex behavior of these devices.
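To make the idea of history-dependent resistance concrete, the following is a minimal sketch of the widely cited linear ion-drift memristor model, which, in the spirit of the drift-based models above, relates the drift of charged species to a changing resistance. All parameter values and function names here are illustrative assumptions, not measurements from any particular device:

```python
import numpy as np

# Minimal linear ion-drift memristor model (illustrative sketch only).
# Parameter values are assumed for demonstration, not measured device data.
R_ON, R_OFF = 100.0, 16e3   # low- and high-resistance limits (ohms)
D = 10e-9                   # insulator thickness (m)
MU_V = 1e-14                # ion mobility (m^2 V^-1 s^-1)

def simulate_memristor(voltage, dt, w0=0.1 * D):
    """Integrate the internal state w (doped-region width) under a voltage drive."""
    w = w0
    current = np.zeros_like(voltage)
    for k, v in enumerate(voltage):
        # Total resistance: doped (low-R) and undoped (high-R) regions in series.
        resistance = R_ON * (w / D) + R_OFF * (1.0 - w / D)
        i_mem = v / resistance
        # Linear ion drift: dw/dt = (mu_v * R_ON / D) * i(t)
        w += MU_V * R_ON / D * i_mem * dt
        w = min(max(w, 0.0), D)          # keep the state physically bounded
        current[k] = i_mem
    return current

# A sinusoidal drive traces out the pinched hysteresis loop characteristic of memristors.
t = np.linspace(0.0, 2.0, 2000)
v = np.sin(2 * np.pi * t)
i = simulate_memristor(v, dt=t[1] - t[0])
```

Plotting i against v for such a model yields the pinched hysteresis loop that is one hallmark of memristive behavior, and the loop shrinks as the drive frequency increases.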
In our research team, we study resistive memory that operates on two different principles: filamentary switching and gradual switching. Filamentary switching begins with a soft breakdown inside the insulator induced by an external electrical signal, a step called "electroforming". The resistance states that can be used as memory arise from the conductive filament (CF) formed by this forming process: the device enters a Low Resistance State (LRS) when the CF is extended by an externally applied electrical signal, and a High Resistance State (HRS) when the CF is ruptured. In contrast, resistive memory operating by gradual switching relies on the formation of oxygen vacancies through oxidation/reduction reactions of the material forming the conduction path, and on changes of the local metallic or reduced phase as these vacancies aggregate. Gradual-switching resistive memory exploits the change in resistance governed by the space-charge-limited current generated in the MIM structure and by the energy states created by interface traps.
It will be updated soon.
Gradual type memristor
The gradual-type memristor mainly uses a MISM (metal-insulator-semiconductor-metal) structure rather than an MIM structure, in order to actively exploit the oxidation/reduction reactions of the semiconductor. Our research team studies a memristor capable of gradual switching using a structure in which an oxide insulating layer is inserted between the bottom electrode and an oxide-semiconductor active layer. Aluminum is used for the bottom electrode because it readily forms an oxide film, and an oxygen plasma process applied between the aluminum and the active layer forms a very thin and stable insulating oxide layer. For the top electrode, the same aluminum as the bottom electrode is used, yielding a sandwich-structured memristor.
Unlike the aforementioned filament type, the gradual-type memristor does not require an electroforming process, which is a significant advantage when operating devices and building systems. For biasing, the bottom electrode is set to ground and a voltage is applied through the top electrode, so the current is measured as a function of this voltage. From the moment a voltage is applied to the top electrode, a small current starts to flow from top to bottom. Since the measured samples are very thin (10 to 30 nm), even a small applied voltage produces a measurable current. This current flow produces a space-charge current similar to that of electrons flowing in the depletion layer of a semiconductor transistor, and as the voltage continues to be applied, sub-gap states inside the active channel layer alter the flow. As the applied voltage increases, electrons fill trap sites inside the active layer, a trap-filled space-charge-limited current develops, and the current rises over a certain voltage range.
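For reference, in the simplest trap-free case the space-charge-limited current density follows the Mott-Gurney law; this textbook relation is given here only to illustrate the voltage and thickness dependence, and is not a fit to the devices described in this section:

```latex
J_{\mathrm{SCLC}} = \frac{9}{8}\,\varepsilon \mu \frac{V^{2}}{d^{3}}
```

where ε is the permittivity of the film, μ the carrier mobility, V the applied voltage, and d the film thickness. When traps are present, the current is suppressed until the traps are filled, after which it rises steeply toward this trap-free limit, which is the trap-filling behavior described above.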
With further increase of the voltage, the current again behaves like the initial space-charge current without any trap contribution, and continued voltage application eventually causes breakdown of the sample. If the voltage is instead slowly reduced to 0 V before breakdown occurs, the measured current is higher than the current measured during the initial voltage sweep, because electrons remain in the trap sites. This change in current, or resistance, produces a memory effect, and the stored resistance value is maintained until it is read again. The series of processes described above is called the "SET" process and is the step in which information is written.
If the voltage is applied in the reverse direction, the same trap-filled space-charge-limited current as in the SET process occurs. As the negative voltage is increased further, the trapped electrons are released at a specific voltage and the device returns to its initial state (the state before the SET process); this is called the "RESET" process. As the RESET process proceeds and the current level decreases, the resistance level that existed before the SET process is recovered.
The SET and RESET processes are caused by the trapping and de-trapping of electrons in sub-gap states, which originate either from oxygen deficiency in the oxide semiconductor or from changes in the coordination number of the oxygen atoms surrounding the cations.
Our research team fabricated a memristor capable of gradual switching with a MISM structure using InGaZnOx, which is widely used as an oxide semiconductor. Aluminum was used for both the bottom and top electrodes, and an oxide film was grown on top of the bottom electrode by oxygen plasma treatment. The SET process operated at 4.5 V and RESET at -2 V. As expected for gradual switching, the resistance on/off ratio was much lower than that of the filament type, while the HRS and LRS distributions were very narrow, confirming high reproducibility. After many repetitions of the SET/RESET process, however, the SET/RESET hysteresis window collapsed, and the cause is currently under study.
An artificial neural network (ANN) is a type of machine learning model that is inspired by the structure and function of the human brain. Like the brain, an ANN is comprised of a large number of simple processing units, or "neurons", which are connected to each other through a network of weighted connections. The network receives input data and processes it through a series of interconnected layers, with each layer applying a set of mathematical operations to transform the input into a form that is more useful for the next layer. The final layer of the network produces the output, which can be a prediction, a classification, or some other type of result depending on the specific problem being addressed. During the learning process, the weights of the connections between the neurons are adjusted through a training process that involves presenting the network with a set of labeled training data. The network compares its predicted outputs to the correct outputs and adjusts the weights to minimize the difference between the two. Through this training process, an ANN can learn to identify patterns and relationships in data, and can generalize these patterns to make predictions or classifications on new, unseen data. ANNs have been used in a wide range of applications, including image and speech recognition, natural language processing, and autonomous systems.
Multi-Layer Perceptron (MLP)
The perceptron is an early form of artificial neural network proposed by Frank Rosenblatt in 1957; it is an algorithm that produces a single output from multiple inputs. Its operation resembles that of the neurons that make up the actual brain: neurons receive signals through their dendrites and transmit a signal through the axon when those signals exceed a certain level.
The input and output signals of the nerve cell neurons correspond to the input and output values in the perceptron, respectively.
Here, x is an input value, w is a weight, and y is the output value. The circles in the figure correspond to artificial neurons. In the perceptron, the weights play the role of the axons that carry signals between real nerve cells: each input value x is passed to the destination artificial neuron together with its respective weight w.
Each input value has its own weight, and the larger the value of the weight, the more important the corresponding input value is.
Each input value is multiplied by its weight and sent to the artificial neuron; if the sum of these products exceeds a threshold, the destination artificial neuron outputs 1 as its output signal, and otherwise it outputs 0.
For this normalization, either a step function, which assigns the importance sharply as 0 or 1, or a sigmoid function, which grades the importance gradually between 0 and 1, is used.
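As an illustration of the description above, here is a minimal sketch of a single perceptron in Python; the weights, bias, and choice of a step activation are arbitrary assumptions chosen only to demonstrate the weighted-sum-and-threshold idea:

```python
import numpy as np

def step(z):
    """Step activation: output 1 if the weighted sum exceeds the threshold, else 0."""
    return 1 if z > 0 else 0

def sigmoid(z):
    """Sigmoid activation: grades the output smoothly between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    """Multiply each input by its weight, sum, add the bias, then threshold."""
    return step(np.dot(w, x) + b)

# Example: with these (assumed) weights and bias, the perceptron implements AND.
w = np.array([1.0, 1.0])
b = -1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", perceptron(np.array(x, dtype=float), w, b))
```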
A multi-layer perceptron (MLP) is a structure in which several layers of perceptrons are stacked in sequence. The MLP is also called a feed-forward deep neural network (FFDNN). In an MLP, perceptrons in adjacent layers are connected, but there are no connections between perceptrons within the same layer. There is also no feedback: once a signal has passed through a layer, it never returns to it. The layers other than the bottom input layer and the top output layer are called hidden layers, because they are hidden from the outside.
From the perceptron perspective, an MLP is built by stacking perceptrons in layers. The difference between a multi-layer perceptron and a single-layer perceptron is that a single-layer perceptron has only an input layer and an output layer, whereas a multi-layer perceptron has additional layers in between. A layer between the input layer and the output layer is called a hidden layer. In other words, the multi-layer perceptron differs from the single-layer perceptron in having one or more hidden layers in the middle; it is commonly abbreviated as MLP.
For a simple problem, a single hidden layer may be enough, but a multi-layer perceptron in general refers to a perceptron with one or more hidden layers. To solve more complex problems, a multi-layer perceptron can add many more hidden layers in the middle; the number of hidden layers can be two or dozens, and it is up to the user to choose. The structure shown in the figure is an MLP composed of a single hidden layer.
A neural network with two or more hidden layers is called a deep neural network (DNN). The term does not refer only to multi-layer perceptrons: various modified neural networks are also called deep neural networks when they have two or more hidden layers.
So far, we have discussed the structure and definition of the MLP. The machine must be made to find the weights by itself, which corresponds to the training or learning phase in machine learning. To do this, we use loss functions and optimizers, just as in linear and logistic regression. The algorithm most frequently used to train an MLP is backpropagation, which can be applied to learning in various forms. And when the artificial neural network being trained is a deep neural network, the process is called deep learning, since one is learning a deep neural network.
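The following is a minimal sketch of how such training can look in code: a one-hidden-layer MLP trained on the XOR problem with a mean-squared-error loss and plain gradient descent. The architecture, learning rate, and dataset are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR toy dataset (an illustrative choice; any small labeled dataset would do).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 units (assumed size), one output unit.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)            # mean-squared-error loss

    # Backward pass: propagate the error from the output back toward the weights.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print("final loss:", loss)
print("predictions:", out.round(3).ravel())
```

In practice one would rely on a framework's automatic differentiation instead of hand-written gradients, but the loop above makes the forward and backward structure explicit, which is the subject of the next section.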
Backpropagation
The backpropagation algorithm is, in effect, an algorithm that finds the derivative (slope) of the loss with respect to each variable. A more accurate name is backward error propagation: you can understand the algorithm if you understand its three parts, backward, error, and propagation. The opposite of backward is forward. Forward propagation computes values by following the edges of the computation graph in their forward direction, and backward propagation traverses the same edges in reverse to propagate the error back.
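As a concrete sketch of this forward/backward idea, consider the tiny, made-up computation y = (w*x + b)^2: the forward pass evaluates it edge by edge, and the backward pass walks the same edges in reverse, applying the chain rule to obtain the derivative of y with respect to each variable:

```python
# Forward pass: evaluate the graph edge by edge.
x, w, b = 2.0, 3.0, 1.0
u = w * x            # first node:  u = w * x
v = u + b            # second node: v = u + b
y = v ** 2           # output node: y = v^2

# Backward pass: propagate derivatives along the same edges in reverse (chain rule).
dy_dv = 2 * v        # d(v^2)/dv
dy_du = dy_dv * 1.0  # v = u + b  ->  dv/du = 1
dy_db = dy_dv * 1.0  # v = u + b  ->  dv/db = 1
dy_dw = dy_du * x    # u = w * x  ->  du/dw = x
dy_dx = dy_du * w    # u = w * x  ->  du/dx = w

print(y, dy_dw, dy_dx, dy_db)   # 49.0, 28.0, 42.0, 14.0
```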