Another HTC rival? Unveiling Da Peng's binocular VR laser positioning

As the benchmark for VR headset room-scale tracking, the HTC Vive has dominated with its Lighthouse laser positioning system. Oculus and PSVR may have their own advantages in user experience and content, but when it comes to room-scale positioning, the HTC Vive has had no real rival anywhere in the year since its launch. Its laser positioning solution is simply that strong.

First, a bit of background for readers who don't play VR (though if you don't play VR, why are you reading this?): "room-scale" means the user can walk around freely while in VR. All three of the major VR headsets (HTC Vive, Oculus Rift, PSVR) can deliver some degree of room-scale experience, but the camera-based approaches of Oculus and PSVR do not handle it well; in most cases those two are more comfortable played seated. The HTC Vive's laser scheme lets players genuinely walk around in VR, with stability and a tracking range far beyond the other two. Whatever else one might criticize about the HTC Vive, its positioning is beyond reproach.

At CES 2017, domestic hardware maker Da Peng showed off a laser positioning solution called Polaris, apparently aiming to challenge the HTC Vive with self-developed technology and hardware. At the end of March, Da Peng held a press conference and invited industry professionals to try it on site. After hands-on testing, Da Peng's binocular laser positioning scheme was well received. Still, we have to ask: what is the principle behind Da Peng's binocular laser positioning? How is it implemented? What advantages does it have over the HTC Vive? In the spirit of inquiry, this reporter interviewed Da Peng CEO Chen Chaoyang and, for the first time, unravels the technical mysteries of Da Peng's system.

First, let's see how the HTC Vive's solution works. To understand where Da Peng improves on the HTC Vive, we must first walk through the HTC Vive's positioning scheme in detail (readers already familiar with it can skip this section and jump straight to the Da Peng part).
The Vive solution uses two base stations, called Lighthouses, placed at diagonal corners to cover a play space, continuously sweeping laser light across it to scan the headset and controllers. So how does a base station work? Let's take one apart. Opening the base station reveals three main parts: an LED lamp array and two motors. The motors create two laser planes, one horizontal and one vertical. In the figure below, the red surface is the vertical laser plane, which sweeps horizontally; the blue surface is the horizontal laser plane, which sweeps vertically, up and down (the red shading lies below the blue surface).

Whenever a laser plane hits a sensor, the sweep angle at that instant is recorded. If the two planes are frozen as shown in the figure, the sensor must lie on the intersection line of the two planes, the thin red line in the figure. The position information obtained this way is incomplete: we know which line the point is on, but not where on the line it sits. It is like knowing a destination's street but not its house number. Therefore multiple sensors are needed.

As the clever reader may have guessed, the HTC Vive's sensors sit in the pits on the headset and controllers; the headset's surface is cratered like the moon, and each pit is a sensor. Since the sensors are rigidly fixed and cannot move relative to one another, the system does not actually need to determine each sensor's exact position; it only needs to determine the line each sensor lies on. It's like eating a roast chicken leg: you can grab the leg directly with your hands, or pin it in place with a few skewers, and either way its position is fixed. In other words, determining the headset's position does not require each sensor's exact point; a sufficient number of lines will do.
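The intersection step above can be sketched numerically: each frozen laser plane is described by its normal vector, and the line common to two planes runs along the cross product of the two normals. The axis conventions below (both planes passing through the base-station origin, z as the optical axis) are illustrative assumptions of mine, not Valve's published frame.

```python
import numpy as np

def line_from_two_planes(alpha, beta):
    """Direction of the line where two swept laser planes intersect.

    alpha: sweep angle of the vertical plane (rotating about the y axis)
    beta:  sweep angle of the horizontal plane (rotating about the x axis)
    Both planes are assumed to pass through the base-station origin,
    with z pointing along the optical axis. Angles are in radians.
    """
    n_vertical = np.array([np.cos(alpha), 0.0, -np.sin(alpha)])
    n_horizontal = np.array([0.0, np.cos(beta), -np.sin(beta)])
    # A line lying in both planes is perpendicular to both normals:
    d = np.cross(n_vertical, n_horizontal)
    return d / np.linalg.norm(d)
```

At `alpha = beta = 0` this returns (0, 0, 1): the sensor lies somewhere straight ahead of the base station, distance unknown, exactly the "street but no house number" situation described above.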
Knowing the headset's dimensions, its position can be determined as in the figure below. After the two laser planes have each completed one sweep, we have the equation of the line running from the base station through the scanned sensor; we know the sensor is on this line, but not how far along it. This is where the headset's known geometry comes in: given several such lines and the headset's dimensions, a dedicated algorithm works out the only position the headset can possibly occupy.

After a sensor has been hit by both laser planes, it is switched off so that it will not be scanned again. Once five sensors have been swept (per Valve's official figure), the headset can be positioned. All sensors are then switched back on and a new round of scanning begins, and so on iteratively.

(Figure: changes in the base station's LED array)

In theory, then, five sensors suffice. In practice, however, the light can be occluded during positioning, and the sensors must be kept apart to prevent crosstalk, so dozens of sensors end up being used. So don't blame the headset for being so ugly; every one of those pits is there for a reason.

The key question is how to implement the complete data flow, from the user's movement to the image in their headset. We must first pin down the key data and how to collect it. The two laser planes define a line (the line through the sensor point, as above). But how are the orientations of the horizontal and vertical laser planes computed? Only the sweep angle is needed, and the angle is the product of the motor's angular velocity and the elapsed time. The angular velocity is constant and therefore known, so the angle follows directly. The specific implementation steps are described below.
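The angle-from-time relation at the end of this section can be written out directly. The 50 revolutions per second figure below comes from the article's own numbers (one revolution per 20 ms); treat it as illustrative rather than a measured spec.

```python
import math

REVS_PER_SECOND = 50                     # one revolution every 20 ms, per the text
OMEGA = 2 * math.pi * REVS_PER_SECOND    # constant angular velocity, rad/s

def sweep_angle(elapsed_seconds):
    """Angle the laser plane has swept since the sync pulse:
    angle = angular velocity x elapsed time."""
    return OMEGA * elapsed_seconds
```

For example, a sensor hit 5 ms after the sync pulse corresponds to a quarter revolution, i.e. an angle of pi/2.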
For readers without a science background this may still seem a bit complicated; bear with it! The base station first synchronizes with the sensors using a flash of light, which tells each sensor's control chip to reset its timer (think of a stopwatch) because a laser plane will sweep past within the next 10 milliseconds. When the laser plane hits the sensor, the sensor fires a signal, the timer stops, the time is recorded, and the timer resets to zero. Immediately afterwards the other laser sweeps past, the time is recorded again, and the timer resets once more. The process may sound complicated, but in fact the two motors only need to spin stably at one revolution per 20 milliseconds (that is, 50 revolutions per second), with the laser emitters on the two motors taking turns.

The time values then travel from the sensor to a hub, on to a data-collection chip, and to the computing component via a Wi-Fi signal in some form of packet and protocol (e.g. UDP). There, the time values are converted into angles and the angles into the equations of the two planes; intersecting the planes yields the equation of the line each sensor lies on, as mentioned earlier. Several such line equations are then combined, and an algorithm computes the positions of the sensors on the headset, which is to say the headset is positioned. This positioning data is then passed on to the computer or console, which updates the camera position inside the 3D engine, re-renders the scene, and sends it to the head-mounted display, completing the display of one frame.

So how did Da Peng do it? With the HTC Vive covered, it is finally the turn of today's protagonist: Da Peng! Based on this reporter's interview with Da Peng CEO Chen Chaoyang, plus on-site observation and the HTC Vive Lighthouse principle, Da Peng's Polaris structure should look roughly like this (apologies for the crude drawing):
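The "combine several line equations" step is, in the real system, a rigid-body pose fit over all the headset's sensors. As a simpler illustration of the same idea, here is a textbook least-squares intersection of several rays at a single point (for instance, rays toward one sensor as seen from two base stations); this is a sketch of the general technique, not Valve's actual algorithm.

```python
import numpy as np

def least_squares_ray_intersection(origins, directions):
    """Closest point, in the least-squares sense, to a set of 3D rays
    p_i(t) = o_i + t * d_i.

    Minimizing the summed squared point-to-ray distances leads to the
    normal equations  sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)           # unit direction
        P = np.eye(3) - np.outer(d, d)      # projects off the ray direction
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

When the rays genuinely pass through one point, the residual is zero and the solver returns that point exactly; with noisy angles it returns the best compromise.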
Yes, you read that correctly: Da Peng's base station has three motor assemblies, one more than the Vive's. Two motors generate horizontal laser planes and one generates a vertical laser plane. As shown in the figure below, the three laser planes form three intersection lines; in fact any two of those lines (say, the red and the blue) suffice to determine their common intersection point, and that point is the position of the sensor. In other words, with three sweeping planes, the sensor's position is determined directly, rather than merely the line it lies on.

Clearly this is more efficient than Lighthouse's approach, which can only pin each sensor down to a line. Since three points are enough to nail down a rigid body's position, a minimum of three sensors on the headset would suffice (in practice there are more than three, to guarantee accuracy). In the actual design, however, the two horizontal planes must not sit too close together, or accuracy suffers (much as a person's depth perception depends on the distance between the two eyes), so the best arrangement is to place the two horizontal motors above and below the vertical motor. With this scheme the number of sensors can be cut roughly in half compared to Lighthouse.

One problem remains: if a laser plane cannot reach a sensor, nothing works. The simplest way to eliminate blind spots is to place a second base station on the opposite side for full coverage, and this dual-base-station design lets the sensor count be halved again. It probably looks like this:

The final number of sensors needed is about a quarter of Lighthouse's (which uses dozens). Indeed, Da Peng officially advertises that its headset needs fewer sensors than the Vive: only six. The headset can therefore be made lighter; in exchange, Da Peng's base stations are bigger than Lighthouse's.
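The direct fix from three planes can be sketched as a 3x3 linear solve: each sweep, frozen at its hit angle, defines a plane through its motor's position, and the sensor sits where the three planes meet. The normals and motor positions used in any concrete call would be made-up illustrative values, since Da Peng has not published its geometry.

```python
import numpy as np

def sensor_from_three_planes(normals, motor_positions):
    """Intersection point of three laser planes.

    Each frozen sweep defines a plane  n_i . (x - p_i) = 0, where p_i is
    that motor's position and n_i the plane normal at the recorded angle.
    Three independent planes (two horizontal sweeps at different heights
    plus one vertical sweep, as in the Polaris layout described above)
    give a unique solution for the sensor position x.
    """
    N = np.asarray(normals, dtype=float)
    c = np.array([np.dot(n, p) for n, p in zip(normals, motor_positions)])
    return np.linalg.solve(N, c)
```

The vertical separation between the two horizontal motors plays the role of the "distance between the eyes" mentioned above: the further apart they are, the better conditioned the matrix of normals, and the less angle noise blows up into position error.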
The actual working sequence would be as follows. First, the LED array emits a flash to synchronize once with each sensor's control chip, and timing begins. To achieve a latency of about 16 milliseconds, the total cycle must fit within roughly 15 milliseconds; assume 15 milliseconds here. In the first 5 milliseconds after the flash, the first horizontal laser plane sweeps, and each sensor records the time at which it fires its signal. In the second 5 milliseconds, the vertical laser plane sweeps and the time is recorded again. In the third 5 milliseconds, the second horizontal laser sweeps and the time is recorded once more. The timing data is then packaged and sent over Wi-Fi to the base station, which computes the positions and uploads them to the computer or console, completing the position update. This is in fact very similar to the HTC Vive, with just one extra horizontal sweep per cycle; what is solved for is not the line through each point but the point's coordinates.

After reading this reporter's analysis of the principle, Chen Chaoyang said: "The principle write-up is very good; there is nothing to add." But he also stressed: "95% of the difficulty is in the engineering; there are just too many details to get right." Guo Wei, CEO of domestic company ZVR, which focuses on large-space, multi-person tracking, likewise pointed out to this reporter that laser solutions are mainly an engineering problem: there are many pitfalls in motor tuning, mass production, and the supply chain.

To sum up, we have now interpreted Da Peng's positioning scheme. But sharp readers will surely still have doubts. Indeed, on matters of principle we raised a number of such questions about Da Peng's scheme (see the appendix at the end). When this reporter put them to Chen Chaoyang, his answer was understandable: "At present we are not yet ready to open up all of our principles."
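The three-slot cycle just described can be sketched as a small decoder: assign each sensor hit to its 5 ms slot, then convert the fraction of the slot elapsed into a sweep angle. The assumption that each motor covers half a revolution (pi radians) per slot is mine; Da Peng has not published its timings or angular ranges.

```python
import math

CYCLE_MS = 15.0      # full cycle after the sync flash, per the sequence above
SLOT_MS = 5.0        # one sweep per 5 ms slot
SWEEP_RAD = math.pi  # assumed angular range covered within one slot

def decode_hits(hit_times_ms):
    """Map sensor hit times (ms after the sync flash) to sweep angles.

    Returns [horizontal_1, vertical, horizontal_2] angles in radians,
    with None for any sweep that never reached the sensor (occlusion).
    """
    angles = [None, None, None]
    for t in hit_times_ms:
        if not 0.0 <= t < CYCLE_MS:
            continue                  # hit outside this cycle: ignore it
        slot = int(t // SLOT_MS)      # 0 = horizontal 1, 1 = vertical, 2 = horizontal 2
        fraction = (t - slot * SLOT_MS) / SLOT_MS
        angles[slot] = fraction * SWEEP_RAD
    return angles
```

A hit exactly halfway through each slot decodes to pi/2 for each sweep; a missing sweep simply leaves a `None`, which a real system would have to treat as an occluded sensor.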
This author hopes that, as one of the few domestic hardware makers pursuing self-developed technology, Da Peng's positioning solution can forge its own path, let the hands-on experience speak for itself, and in time give satisfying answers to all of these questions.