Over the years, the "Star Wars" films have thrilled countless moviegoers: Jedi knights who fought the forces of evil in the face of danger, the courage and sacrifice of the Resistance under oppression, victories finally won through brilliant strategy... Beyond that, the dazzling lightsaber duels and the exploits of droids like R2-D2, C-3PO, and BB-8 are unforgettable. Without these droids, Star Wars might not have reached such a triumphant ending.
Robots and the metaverse were among the hottest topics at the 2022 International Consumer Electronics Show (CES). Today, non-humanoid machines that work for us are commonplace: delivery robots, self-driving cars, robot vacuums, and aerial drones. Given the momentum on display at CES, we may be on the verge of a new era in which every home has at least one robot straight out of a sci-fi scene like Star Wars.
Meanwhile, as contactless services have accelerated during the COVID-19 pandemic, metaverse services that blend the virtual and the real are gaining popularity, and demand for them is growing exponentially. Many people have started using augmented reality (AR) or virtual reality (VR) technology. Soon, AR and VR devices will be carried around like smartphones, ushering in a new era in which services are available anytime, anywhere: we will no longer need to visit banks or manufacturers, and products can be maintained without setting foot in a factory.

Figure 1: Ocado delivery robot
Eye of the Machine (Machine Vision)
Supported by remarkable advances in semiconductor processing and image signal processing (ISP) technology, along with falling prices and excellent high-resolution performance, CMOS image sensor (CIS) technology has become the "eye" of devices such as smartphones. Pixel count largely determines camera performance, and competition around it has pushed camera resolution to 600 megapixels, beyond that of the human eye.
But are high-resolution images necessarily suitable for machine vision? For the eyes of cutting-edge machines responsible for safety and security, even the sharpest two-dimensional (2D) image data is not enough to let them work in place of humans; such a machine could not carry out tactical missions the way R2-D2 does. Self-driving cars and drones must accurately identify the braking moment at high speed; facial-recognition devices must scan real faces rather than flat images; AR devices must scan large spaces in real time to render augmented reality. These machines require not only 2D image data but also three-dimensional (3D) sensing. A machine can obtain 3D data without a camera through complex computation, using aids such as ultrasound or laser equipment, but a machine burdened with so many extra components would be rejected by consumers on both design and price.

Figure 2: Necessary Features of the Eye of the Machine
Through the cooperation of the eyes and the brain, people see objects stereoscopically and perceive depth and distance. By a similar mechanism, machines can also identify objects in multiple dimensions and measure distances through triangulation: stereo vision, for example, uses two cameras and a processor to achieve this recognition. However, such mechanisms suffer from drawbacks, including computational complexity, inaccuracy when measuring distances to flat surfaces, and low accuracy in relatively dark places, which narrow their range of application. Recently, the Time-of-Flight (ToF) method has been put into practical use as an alternative that overcomes these shortcomings. ToF measures distance simply, by calculating the time it takes for light to bounce off an object and return. The method is easy and fast to run, and because it uses its own light source, it has the added advantage of measuring distance accurately regardless of the lighting environment.
ToF: distance is obtained by measuring the round-trip time of the emitted light
Stereo vision: two optical systems view the same object from two different points along a common baseline

Figure 3: Comparison of Stereo Vision and ToF Object Recognition Methods
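The two ranging principles compared in Figure 3 can be sketched numerically. Below is a minimal Python sketch; the baseline, focal length, disparity, and round-trip time are illustrative values chosen for the example, not figures from the article:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """ToF: light travels to the object and back, so halve the path."""
    return C * round_trip_time_s / 2

def stereo_distance(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Stereo vision: triangulate from the pixel disparity between two cameras."""
    return baseline_m * focal_px / disparity_px

# A pulse that returns after 20 ns has traveled ~6 m, so the object is ~3 m away.
print(round(tof_distance(20e-9), 3))  # ~2.998
# Two cameras 10 cm apart, 700 px focal length, 23.3 px disparity -> also ~3 m.
print(round(stereo_distance(0.10, 700, 23.3), 3))
```

Note that the stereo estimate degrades as disparity shrinks at long range, while the ToF estimate depends only on timing precision, which is one reason ToF holds up better across lighting conditions and distances.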
The Time-of-Flight Method
ToF can be divided into two categories: direct ToF (d-ToF), which measures the round-trip time of the light directly, and indirect ToF (i-ToF), which calculates the distance from the phase difference of the returned light. SK hynix has developed both ToF sensor technologies for use in a variety of products. Perhaps the robots of the future will have one eye that uses i-ToF to recognize objects at close range and another that uses d-ToF to explore distant objects.
This article focuses on SK hynix's i-ToF technology.

Figure 4: Comparative analysis of indirect ToF and direct ToF
The i-ToF method calculates the phase difference of the returned light from the ratio of the charges accumulated in two or more storage nodes within a pixel, and derives the distance from it. Compared to d-ToF, this mechanism is limited in measuring long distances, because light returning from far away has lower intensity and thus leaves less signal charge to separate. On the other hand, it offers higher resolution than d-ToF: its circuit is simple, the pixel separates the signal by itself, and the pixel is easy to shrink. To compensate for the limitations of i-ToF and maximize its advantages, a great deal of research is now under way to improve the signal-to-noise ratio (SNR), increase the quantum efficiency (QE) at infrared wavelengths, and remove background light (BGL).
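The charge-ratio calculation can be illustrated with the common four-phase demodulation scheme, a generic textbook sketch rather than SK hynix's exact implementation; the 20 MHz modulation frequency and 2 m target are assumed values:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(q0, q90, q180, q270, mod_freq_hz):
    """Estimate distance from charges accumulated in four phase windows.

    The phase of the returned light is recovered from charge ratios, then
    converted to distance. Range is ambiguous beyond c / (2 * mod_freq_hz).
    """
    phase = math.atan2(q90 - q270, q0 - q180)  # radians, may be negative
    phase %= 2 * math.pi                       # wrap into [0, 2*pi)
    return C * phase / (4 * math.pi * mod_freq_hz)

# Simulate a target at 2.0 m with a 20 MHz modulation source.
f = 20e6
true_phase = 4 * math.pi * f * 2.0 / C
q = [math.cos(true_phase - k * math.pi / 2) + 1.0 for k in range(4)]  # ideal charges
print(round(itof_distance(*q, f), 3))  # ~2.0
```

The ambiguity limit c / (2f) is about 7.5 m at 20 MHz, which illustrates the article's point: i-ToF trades maximum range for a simple, shrinkable pixel.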
Current i-ToF pixel structures can be roughly divided into gate-type and diffusion-type. The gate type generates a potential difference by applying a modulated voltage to the gate in order to collect nearby electrons. The diffusion type, a current-assisted photonic demodulator (CAPD), collects electrons using the current generated by applying a modulated voltage to the substrate. Compared to the former, the latter can quickly capture electrons generated in deeper regions, making transfer more efficient, but it dissipates more power because it relies on a majority-carrier current flowing through the substrate. Moreover, as pixels shrink and pixel counts grow with higher resolution, power consumption rises further.
In order to maximize the advantages of CAPD and reduce its limitations, SK hynix has developed 10um QVGA-class and 5um VGA-class pixel technologies using a new structure called the Vertical Field Modulator (VFM). Next, let's take a deep dive into VFM technology and its benefits.
Advantages of VFM Pixel Technology
There are various criteria for judging a good distance-measurement sensor, but above all it should detect distance accurately and reduce heating through lower power consumption. In other words, a good sensor must detect signals quickly, with high efficiency and low power consumption, while also accurately separating the signals according to their phase differences.
1. Combination with SK hynix's Back-Side Illumination (BSI) CIS Technology
As in CIS, back-side illumination brings a number of advantages to ToF sensor design and performance. The light source used to calculate time of flight is infrared (IR), both because it must be invisible to the human eye and because it allows accurate distance measurement even in low-light environments. Infrared has a longer wavelength than visible light, which means that unless the wafer is thicker than in a typical CIS, most of the light passes through the silicon without being absorbed, leaving extremely low signal levels in the pixels. But the thickness cannot grow indefinitely: electrons generated in deeper regions are hard to collect quickly, just as deep-sea fishing is harder than fishing at a familiar spot. With back-side illumination instead of front-side illumination (FSI), the signal can be detected quickly, because the light is absorbed closer to the collection nodes, and the electric field, which acts as the fishing line, is applied from the side opposite the light and so becomes stronger where the light arrives.
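The penetration problem can be made concrete with the Beer-Lambert law. The following is a rough Python sketch; the absorption coefficients are order-of-magnitude illustrative values for silicon, not figures from the article:

```python
import math

def absorbed_fraction(alpha_per_cm: float, thickness_um: float) -> float:
    """Fraction of incident light absorbed in a silicon layer (Beer-Lambert law)."""
    return 1.0 - math.exp(-alpha_per_cm * thickness_um * 1e-4)  # 1 um = 1e-4 cm

# Illustrative absorption coefficients: silicon absorbs visible green light
# (~550 nm) orders of magnitude more strongly than 940 nm infrared.
for label, alpha in [("550 nm (visible)", 7e3), ("940 nm (infrared)", 1e2)]:
    for t in (3, 10):  # photodiode thickness in micrometers
        print(f"{label}, {t} um: {absorbed_fraction(alpha, t):.0%} absorbed")
```

With these rough numbers, a few micrometers of silicon capture most visible light but only a few percent of 940 nm infrared, which is why a ToF pixel needs a deeper absorption region and a strong field to pull up the electrons generated there.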

Figure 5: Comparison of front-side and back-side illumination (transmittance and light collection versus thickness)
The performance of i-ToF depends on its ability to separate signals according to the rate of charge accumulation. In this regard, front-side illuminated sensors can introduce distance errors, because light entering through the pixel surface is more likely to strike the detection node directly, bypassing the phase separation, like a student answering a roll call meant for someone else. Front-side illumination also imposes many restrictions on metal wiring in order to preserve a high fill factor, whereas back-side illumination allows far more freedom in metal routing, just as drawing water from the ground is more efficient than collecting rainwater under a dense forest canopy (Figure 6).

Figure 6: i-ToF charge accumulation rates for different illumination methods (analogy: drawing groundwater versus collecting rainwater in a dense forest)
These advantages of back-side illumination are realized by combining the ToF pixel with SK hynix's CIS BSI technology, which can already produce pixels smaller than 1 micron.
2. Small Lens Array (SLA), Trench-Structured Optical Waveguide, and Quantum Efficiency
Because the i-ToF mechanism relies on the ratio of accumulated charges, the signal must be as strong as possible to obtain accurate distance data at longer ranges. High QE in the infrared wavelength range is therefore essential.
As mentioned above, because infrared penetrates silicon deeply, less of it is absorbed than visible light and the light must be collected from deep within the pixel. One way to deal with this is to deliberately build the microlenses (the small lenses arranged over each pixel, one per pixel, beneath the camera lens) taller for better light collection, but their height is limited by process constraints. SK hynix took a different approach to overcome this shortcoming: by placing several lenses smaller than the pixel over each pixel, the Small Lens Array increases the light-collecting depth and thus the total amount of light received.
In addition, SK hynix etches a special trench pattern into the back surface so that incident light strikes the structure and is reflected, lengthening the light path and focusing the light on the modulation region. This reduces light loss and improves collection efficiency at the same light intensity, killing two birds with one stone. In fact, this design was confirmed to more than double the QE under a 940nm light source, and the higher QE reduced the error between actual and measured distances by nearly 55 percent compared with previous methods.

Figure 7: SLA (left) and trench-structured optical waveguide (right)
3. Securing Low Power Consumption and High Performance
Excluding the power consumed by the light source, a ToF sensor consumes the most power in the circuit that modulates the signal during operation. The power of the modulation drive circuit is proportional to the current flowing through the substrate; in other words, we can reduce power consumption by reducing substrate current. At the same time, accurate and precise distance measurement requires short modulation periods and fast signal detection. The vehicle (the electron) must be accelerated by stepping on the gas to cross the same distance (the silicon thickness) quickly, which burns a lot of fuel (current). As another example, drawing water from a deep well takes great effort at the pulley; but what if the groundwater could be pumped up? Then all the water you need comes out with little effort, just by turning on the faucet.
The VFM method widens the depletion region by optimizing the pixel's ion-implantation conditions and structure, enabling it to act like that pump and strengthening the vertical electric field. The force of this field adds to the current in collecting electrons, so electrons can be gathered quickly even with a small current, improving power consumption. Extensive experiments have shown that the VFM pixel loses no performance even as the current is reduced, meaning the structure is well suited to low-power operation: current is no longer the decisive factor. In other words, the method improves pixel performance by designing for a strong vertical electric field and controlling the current so that it serves only as a guide. Compared with the 10um QVGA-class ToF sensor, the 5um VGA-class sensor has smaller pixels and higher resolution, yet the current per pixel is lower and the increase in power consumption is almost zero.
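The resolution-versus-power claim can be checked with simple arithmetic. The pixel counts below are the standard QVGA/VGA resolutions; the drive voltage and per-pixel currents are hypothetical values chosen only to illustrate the scaling:

```python
def modulation_power_w(n_pixels: int, current_per_pixel_a: float, drive_v: float) -> float:
    """Modulation-drive power, modeled as proportional to total substrate current."""
    return n_pixels * current_per_pixel_a * drive_v

QVGA = 320 * 240  # 76,800 pixels
VGA = 640 * 480   # 307,200 pixels, 4x QVGA

p_old = modulation_power_w(QVGA, 40e-9, 3.3)  # hypothetical 40 nA per pixel
p_new = modulation_power_w(VGA, 10e-9, 3.3)   # 4x the pixels at a quarter the current
print(p_old, p_new)  # equal: resolution quadruples while power stays flat
```

Moving from QVGA to VGA quadruples the pixel count, so keeping total power flat requires roughly a fourfold drop in current per pixel, which is exactly the kind of reduction the VFM structure's strong vertical field makes possible.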

Figure 8: The VFM structure gives the ToF sensor more efficient power consumption
Summary
While developing its ToF sensor technology, SK hynix provides sensors and close technical support that enable module manufacturers to enter a wide range of application markets, contributing to the creation of economic and social value.
In the future, we will be able to travel the world with AR/VR equipment, have drones deliver packages and home robots carry them to us, ask robot vacuums to clean for us, and even watch the news in a self-driving car unlocked by facial recognition. We expect these scenarios to be realized in the new world that SK hynix's depth-sensing technology is about to open up.




