The illusion of movement
A story about the sense of vision, the perception of frames, refresh rates, motion blur and television screens.
Introduction
You may have heard of the term frames per second (FPS), and that 60 FPS is a good target for any animation. But most console games run at 30 FPS, and movies are usually shot at 24 FPS, so why should we aim for 60 FPS?
Frames ... per second?
The early days of filmmaking
The shooting of the 1950 Hollywood film Julius Caesar with Charlton Heston
When the first cinematographers began making films, many discoveries were made not by scientific method but by trial and error. The first cameras and projectors were cranked by hand, and film stock was very expensive, so expensive that filmmakers tried to shoot at the lowest frame rate that could still pass for motion, just to save film. That threshold usually fell between 16 and 24 FPS.
When sound arrived, the audio track was printed onto the physical film and played back in sync with the picture, and hand-cranked playback became a problem. It turned out that people tolerate a variable frame rate for video quite well, but not for sound (where tempo and pitch shift), so cinematographers had to pick a single constant speed for both. They chose 24 FPS, and almost a hundred years later it remains the standard in cinema. (In television, the frame rate had to be changed slightly because of how CRT TVs synchronized to the mains frequency.)
Frames and the human eye
But if 24 FPS is barely acceptable for a movie, what is the optimal frame rate? This is a trick question, because there is no optimal frame rate.
Motion perception is the process of inferring the speed and direction of elements in a scene based on visual, vestibular and proprioceptive input. Although this process appears straightforward to most observers, it has proven to be a difficult problem from a computational perspective, and extremely difficult to explain in terms of neural processing. - Wikipedia
The eye is not a camera. It does not perceive motion as a series of frames; it perceives a continuous stream of information rather than a set of individual pictures. Why, then, do frames work at all?
Two important phenomena explain why we see movement when looking at rapidly changing pictures: persistence of vision and the phi phenomenon (the stroboscopic illusion of continuous motion).
Most filmmakers think the only reason is persistence of vision, but this is not quite true. Persistence of vision, a phenomenon observed but never rigorously proven, holds that an afterimage probably remains on the retina for about 40 milliseconds. This explains why we do not see dark flicker in theaters or (usually) on a CRT.
The phi phenomenon in action. Notice the movement in the picture, even though nothing in it actually moves?
On the other hand, many consider the phi phenomenon to be the true reason we see motion in a sequence of still images. It is an optical illusion of perceiving continuous movement between separate objects shown rapidly one after another. But even the phi phenomenon is disputed, and scientists have not reached a consensus.
Our brain is very good at helping to fake motion: not perfectly, but well enough. A series of still frames simulating movement creates different perceptual artifacts in the brain depending on the frame rate. So no frame rate will ever be optimal, but we can get close to the ideal.
Standard frame rates, from bad to ideal
To get a sense of the absolute scale of frame rate quality, take a look at the overview table. But remember that the eye is a complex system that does not recognize individual frames, so this is not an exact science, just observations collected from different people over the years.
Demo: what does 24 FPS look like compared with 60 FPS?
60vs24fps.mp4
I thank my friend Mark Tönsing for creating this fantastic comparison.
HFR: rewiring the brain with "The Hobbit"
"The Hobbit" was a popular movie shot at twice the usual frame rate, 48 FPS, known as HFR (high frame rate). Unfortunately, not everyone liked the new look. There were several reasons for this, the main one being the so-called "soap opera effect".
Most people's brains are trained to perceive 24 full frames per second as quality cinema, while 50-60 half-frames (interlaced TV signals) remind us of broadcast television and destroy the "film look". A similar effect appears if you enable motion interpolation on your TV for 24p (progressive scan) material. Many people dislike it, even though modern algorithms are quite good at rendering smooth motion without artifacts, which is the main reason critics reject the feature.
Although HFR significantly improves the picture (it makes motion less choppy and fights blur on moving objects), improving how it is perceived is not so simple: it requires retraining the brain. Some viewers stop noticing any problems after ten minutes of watching "The Hobbit", while others absolutely cannot stand HFR.
Cameras and CGI: the story of motion blur
But if 24 FPS is a barely tolerable frame rate, why have you never complained about choppy video when leaving the cinema? It turns out that film cameras have a built-in feature, or a bug, if you like, that is missing in CGI (including CSS animations!): motion blur, the blurring of a moving object.
Once you have seen motion blur, its absence in video games and software becomes painfully obvious.
Motion blur, as defined by Wikipedia, is
... the apparent streaking of rapidly moving objects in a still image or a sequence of images, such as a movie or animation. It occurs when the recorded image changes during the recording of a single frame, either due to rapid movement or long exposure.
In this case, a picture is worth a thousand words.
Without motion blur
With motion blur
Images from Evans & Sutherland Computer Corporation, Salt Lake City, Utah. Used with permission. All rights reserved.
Motion blur cheats by packing a lot of motion into a single frame, sacrificing detail. That is why a movie at 24 FPS looks relatively acceptable compared to a video game at 24 FPS.
But where does motion blur come from in the first place? Here is the explanation from E&S, which pioneered 60 FPS for its mega-dome screens:
When you shoot a movie at 24 FPS, the camera sees and records only part of the motion in front of the lens, and the shutter closes after each exposure to advance the film to the next frame. This means the shutter is closed for as long as it is open. With fast movement and action in front of the camera, the frame rate is not high enough to keep up, and the images are blurred within each frame (due to the exposure time).
Here are some diagrams that simplify the process.
Images by Hugo Elias. Used with permission.
Classic movie cameras use a rotary disc shutter to capture motion blur. As the disc rotates, the shutter stays open for a controlled period of time determined by the disc's open angle, and that angle sets the exposure time. With a short exposure, less motion is recorded on film and the motion blur is weaker; with a long exposure, more motion is recorded and the effect is stronger.
The rotary shutter in action. Via Wikipedia
If motion blur is such a useful thing, why do filmmakers sometimes try to get rid of it? Well, adding motion blur loses detail, and removing it loses smooth motion. So when directors want to shoot a scene with lots of detail, like an explosion with many flying particles or a complex action sequence, they often choose a short exposure, which reduces blur and creates a distinctive stop-motion-like look.
Visualization of Motion Blur capture. Via Wikipedia
So why not just add it?
Motion blur greatly improves animation in games and on websites even at low frame rates. Unfortunately, it is too expensive to implement. To create perfect motion blur, you would need to render roughly four times as many frames of the moving object and then perform temporal filtering, or anti-aliasing (here is an excellent explanation from Hugo Elias). If you have to render at 96 FPS just to produce acceptable 24 FPS output, you might as well simply raise the frame rate, so this is often not an option for content rendered in real time. The exceptions are video games where object trajectories are known in advance, so approximate motion blur can be computed, as well as declarative animation systems like CSS Animations and, of course, CGI movies like Pixar's.
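The idea of temporal supersampling can be sketched in a few lines. This is only an illustrative toy, not a real renderer: the 1-D "scene" (a single bright dot moving across 16 pixels), the frame width and the velocity are all made up. Real engines accumulate full images, but the principle of averaging several subframes within one output frame interval is the same.

```python
# Sketch: motion blur via temporal supersampling, on a made-up 1-D "scene"
# (a single bright dot moving across 16 pixels). Real renderers accumulate
# full images, but the principle is the same: render several subframes
# within one output frame interval and average them.

def render_subframe(t, width=16):
    """Render the hypothetical scene at time t: one bright pixel."""
    frame = [0.0] * width
    frame[int(t) % width] = 1.0
    return frame

def motion_blurred_frame(start, velocity, subframes, width=16):
    """Average `subframes` renders spread across one frame interval."""
    acc = [0.0] * width
    for i in range(subframes):
        sub = render_subframe(start + velocity * i / subframes, width)
        acc = [a + s / subframes for a, s in zip(acc, sub)]
    return acc

sharp = motion_blurred_frame(0, 4, subframes=1)    # all energy in one pixel
blurred = motion_blurred_frame(0, 4, subframes=4)  # energy smeared along path
```

Note how the blurred frame spreads the same total brightness over the motion path: that is exactly the detail-for-smoothness trade described above.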
60 Hz ≠ 60 FPS: refresh rate and why it matters
Note: hertz (Hz) is normally used when talking about refresh rates, while frames per second (FPS) is the established term for frame-by-frame animation. To avoid confusing the two, we will use Hz for refresh rate and FPS for frame rate.
If you have ever wondered why Blu-ray discs look so ugly on your laptop, the reason is often that the frame rate does not divide evenly into the screen's refresh rate (DVDs, by contrast, are converted before distribution). Yes, refresh rate and frame rate are not the same thing. According to Wikipedia, "[...] the refresh rate includes the repeated drawing of identical frames, while the frame rate measures how often a video source can feed an entire frame of new data to a display." So the frame rate counts the number of distinct frames shown, while the refresh rate counts how many times per second the image on screen is updated or redrawn.
Ideally, the refresh rate and frame rate are fully synchronized, but in certain situations there are reasons to use a refresh rate two or three times higher than the frame rate, depending on the projection system used.
A new problem for every display
Movie projectors
Many people think that a film projector simply scrolls the film past the light source. But then we would see a continuous blurry smear. Instead, a shutter is used to separate the frames from one another, just as in movie cameras. After a frame is displayed, the shutter closes and blocks the light until it opens again for the next frame, and the process repeats.
The shutter of a film projector in action. From Wikipedia.
However, this is not the full story. Played back this way, you would indeed see the same movie, but the flicker caused by the screen staying dark 50% of the time would drive you crazy. Those dark gaps between frames would destroy the illusion. To compensate, projectors actually close the shutter two or three times per frame.
This seems counterintuitive: why does adding extra flicker make us perceive less of it? The trick is to shorten the blackout period, which has a disproportionate effect on the visual system. The flicker fusion threshold (closely related to persistence of vision) describes the effect of these dark phases. At around ~45 Hz, the dark periods must be shorter than ~60% of the frame display time, which is why the double-shutter technique works in cinemas. Above 60 Hz, the dark periods can exceed 90% of the frame display time (required for displays such as CRTs). The full picture is a bit more complicated, but in practice you can avoid flicker like this:
Use a different type of display with no blackout between frames at all, one that displays each frame continuously.
Apply constant, unchanging dark phases lasting less than 16 ms each.
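The arithmetic behind the multi-blade shutter mentioned above is simple enough to sketch. The function below is purely illustrative (it is not taken from any projector specification): the film still advances at 24 FPS, but each extra shutter blade multiplies the flicker frequency.

```python
# Rough arithmetic: why projectors blank the light two or three times per
# frame. The frame rate stays at 24 FPS, but each additional shutter blade
# raises the flicker frequency toward (and past) the fusion threshold.

def flicker_hz(frame_rate, blades):
    """Effective flicker frequency for a multi-blade rotary shutter."""
    return frame_rate * blades

print(flicker_hz(24, 1))  # 24 -> well below the fusion threshold, visible
print(flicker_hz(24, 2))  # 48 -> classic double-shutter cinema projection
print(flicker_hz(24, 3))  # 72 -> triple shutter, smoother still
```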
CRT monitors and televisions work by firing electrons at a fluorescent screen coated with a phosphor of very short persistence. How short? So short that you never actually see a full picture! Instead, as the electron beam scans, the phosphor lights up and fades back to darkness in under 50 microseconds, that is 0.05 milliseconds! For comparison, a full frame on your smartphone is displayed for 16.67 ms.
A screen refresh captured at a shutter speed of 1/3000 s. From Wikipedia.
So the only reason a CRT works at all is persistence of vision. Because of the long dark gaps between flashes, CRTs often appear to flicker, especially in the PAL system, which runs at 50 Hz, unlike NTSC at 60 Hz, where the flicker fusion threshold already kicks in.
To complicate things further, the eye does not perceive flicker equally across the whole screen. Peripheral vision, although it delivers a blurrier image to the brain, is more sensitive to brightness and has a much shorter response time. That was probably very useful in ancient times for spotting a wild animal leaping at you from the side, but it is an inconvenience when watching movies on a CRT up close or at an odd angle.
Motion blur on LCD displays
Liquid crystal displays (LCDs), classified as sample-and-hold devices, are actually rather amazing, because they have no blackout between frames at all. The current image is displayed continuously until a new one arrives.
Let's repeat: LCD screens have no refresh-induced flicker, regardless of the refresh rate.
But now you're thinking: "Wait, I was shopping for a TV recently, and every manufacturer was advertising, damn it, a higher refresh rate!" And although that is mostly pure marketing, LCDs with higher refresh rates do solve a problem, just not the one you are thinking of.
Visual motion blur
LCD manufacturers keep raising refresh rates because of display motion blur, also called visual motion blur. And indeed: not only can a camera record motion blur, your eyes can add it too! Before explaining how this happens, here are two related demos that will help you experience the effect (click on the image).
In the first experiment, fix your gaze on the stationary alien at the top and you will clearly see the white lines. But if you focus on the moving alien, the white lines magically disappear. From the Blur Busters website:
"Because of your eye movement, the vertical lines are blurred into thicker lines with each frame refresh, filling the black gaps. Short-persistence displays (such as CRTs or LightBoost) eliminate this kind of motion blur, so this test looks different on such displays."
In fact, the effect of the eye tracking different objects can never be prevented entirely, and it is often such a big problem in filmmaking that there are people whose sole job is to predict exactly what the viewer's gaze will track in a frame and to make sure nothing else interferes with it.
In the second experiment, the folks at Blur Busters try to recreate the look of an LCD display next to a short-persistence display, simply by inserting black frames between the refreshes. It sounds surprising, but it works.
As shown earlier, motion blur can be a blessing or a curse: it sacrifices sharpness for smoothness, and the blur added by your eyes is always unwanted. So why is motion blur such a big problem on LCDs compared with CRTs, where these questions never arise? Here is an explanation of what happens when a momentary sample (captured over a brief instant) lingers on screen longer than intended.
The following quotation is from Dave Marsh's excellent MSDN article on temporal oversampling. It is surprisingly accurate and relevant for an article that is 15 years old:
When a pixel is addressed, it is loaded with a value and holds that light output value until it is addressed again. From an image-drawing standpoint, that is not correct. A given sample of the original scene is valid only at one particular instant. After that instant, the scene's objects should have moved on to other positions. It is not correct to hold images of the objects in fixed positions until the next sample arrives; otherwise the object appears to jump abruptly to a completely different place.
And his conclusion:
Your eye tries to smoothly track the moving object of interest, while the display holds it stationary for the whole frame. The inevitable result is a blurred image of the moving object.
There you have it! It turns out that what we really want is to flash the image onto the retina and then let the eye and brain interpolate the motion themselves.
Optional: how much interpolation does our brain actually perform?
No one knows for sure, but there are definitely many situations in which the brain helps construct the final image of what it is shown. Take, for example, this blind spot test: there is a blind spot where the optic nerve joins the retina. In theory that spot should appear black, but in reality the brain fills it in with an image interpolated from the surrounding area.
Frames and screen refreshes do not mix and match!
As mentioned earlier, problems arise when the frame rate and refresh rate are not synchronized, that is, when the refresh rate is not evenly divisible by the frame rate.
Problem: screen tearing
What happens when your game or application starts drawing a new frame to the screen while the display is in the middle of a refresh cycle? It literally tears the frame apart:
Here is what happens behind the scenes. Your CPU/GPU performs calculations to compose the frame, then hands it to a buffer, which waits for the monitor to request an update through the driver stack. The monitor then reads the frame and begins displaying it (double buffering is needed here, so that one image is always being scanned out while the next is being composed). Tearing occurs when the buffer currently being scanned out to the screen, top to bottom, is swapped for the next frame coming from the video card. The result is that the top part of your screen comes from one frame and the bottom part from another.
Note: to be precise, screen tearing can occur even when the refresh rate and frame rate match! They must match in both phase and frequency.
Screen tearing in action. From Wikipedia
This is clearly not what we need. Fortunately, there is a solution!
Solution: Vsync
Screen tearing can be eliminated with Vsync, short for "vertical synchronization". It is a hardware or software feature that guarantees tearing does not occur, by letting your software draw a new frame only after the previous screen refresh has completed. Vsync changes when frames are taken from the buffer in the pipeline described above, so that the image never changes in the middle of a refresh.
So if a new frame is not ready by the next screen refresh, the screen simply takes the previous frame and draws it again. Unfortunately, that leads to the next problem.
New problem: jitter
Although our frames no longer tear, playback is still far from smooth. This time the problem is serious enough that every industry has its own names for it: judder, jitter, stutter, jank, hitching. Let's go with the term "jitter".
Jitter occurs when an animation is played back at a different frame rate from the one at which it was recorded (or intended to be played back). In practice this usually means the playback cadence is unstable or variable rather than fixed (since most content is recorded at a fixed rate). Unfortunately, this is exactly what happens when you try to show, say, 24 FPS content on a screen that refreshes 60 times per second. Since 60 is not evenly divisible by 24, some frames end up held on screen longer than others (unless more advanced conversions are used), which ruins smooth effects such as camera pans.
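The uneven cadence is easy to see in a few lines of code. This is only a sketch: the mapping below (floor of refresh index times FPS over Hz) is one simple way a display pipeline could pick which source frame to show on each refresh, not a description of any particular driver.

```python
from collections import Counter

# Sketch: which source frame a 60 Hz display shows on each refresh when the
# content runs at 24 FPS. Mapping each refresh to floor(refresh * fps / hz)
# produces the uneven repeat cadence that causes jitter on camera pans.

def cadence(content_fps, refresh_hz, refreshes):
    """Source frame index shown on each of `refreshes` screen refreshes."""
    return [refresh * content_fps // refresh_hz for refresh in range(refreshes)]

frames = cadence(24, 60, 10)
print(frames)            # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]

hold = Counter(frames)   # how many refreshes each source frame stays on screen
print(hold[0], hold[1])  # 3 2 -> alternating 3- and 2-refresh holds
```

With 30 FPS content the same mapping yields a perfectly even hold of two refreshes per frame, which is exactly why 30 FPS is a comfortable cap on a 60 Hz screen.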
In games and on animation-heavy websites it is even more noticeable. Many cannot play their animations at a constant rate that divides the refresh rate evenly. Instead, the frame rate varies widely for all sorts of reasons, such as individual graphics layers running independently, user input handling, and so on. It may shock you, but an animation capped at 30 FPS looks much, much better than the same animation running at a rate that fluctuates between 40 and 50 FPS.
You don't have to take my word for it; see for yourself. Here is a striking demonstration of microstutter.
When converting: telecine
Telecine is the method of converting film images into a video signal. Expensive professional converters, like those used in television, do this mainly through a process called motion vector steering, which can synthesize very convincing new frames to fill the gaps. That said, two other methods are still in wide use.
When converting 24 FPS to a 25 FPS PAL signal (for TV or video in the UK, for example), common practice is simply to speed the original video up by about 4%, playing the 24 FPS material at 25 FPS. So if you ever wondered why "Ghostbusters" is a couple of minutes shorter in Europe, there's your answer. The method works surprisingly well for the picture, but it has a terrible effect on the sound. You might ask: how much worse can audio sped up by 4% sound without additional pitch correction? Almost a semitone worse.
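As a back-of-the-envelope check: the 25/24 speed ratio comes straight from the two frame rates, and the semitone formula is standard equal-temperament math, where a semitone is a frequency ratio of 2**(1/12).

```python
import math

# Back-of-the-envelope: how much does the PAL 24 -> 25 FPS speed-up raise
# the pitch? The speed ratio is 25/24 (about 4.2%); in equal temperament a
# semitone corresponds to a frequency ratio of 2**(1/12).

def pitch_shift_semitones(speed_ratio):
    return 12 * math.log2(speed_ratio)

shift = pitch_shift_semitones(25 / 24)
print(f"{shift:.2f} semitones")  # 0.71 semitones: most of a semitone sharper
```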
Here is a real example of a major failure. When Warner released the extended Blu-ray collection of "The Lord of the Rings" in Germany, for the German dub they used the already-adjusted PAL version of the soundtrack, which had been sped up by 4% and then pitch-shifted down to correct for the change. But since Blu-ray runs at 24 FPS, they had to reverse the conversion, so they slowed the audio back down. Performing such a double, lossy conversion was a bad idea from the start, but worse, after slowing the audio down to match the Blu-ray frame rate, they forgot to shift the soundtrack's pitch back up, so all the actors in the movie suddenly sounded super depressed, speaking a semitone lower. Yes, this is a true story, and yes, it was very upsetting to the fans: there were many tears, many bad copies, and a lot of money lost in a big disc recall.
The moral of the story: changing speed is not the best idea.
Converting film material to NTSC, the American television standard, cannot be done by simple speed-up, because going from 24 FPS to 29.97 FPS would mean a 24.875% acceleration. Unless you really like chipmunks, this is not a good option.
Instead, a process called 3:2 pulldown (among others) has become the most popular conversion method. In this process, 4 original frames are converted into 10 interlaced fields, or 5 full frames. Here is an illustration of the process.
3:2 pulldown in action. From Wikipedia.
On an interlaced display (such as a CRT), the video fields in the middle row are displayed one after another, each as an interlaced version consisting of every other row of pixels. The original frame A is split into two fields, both of which are shown on screen. The next frame B is also split, but its odd field is displayed twice, so that frame is spread across three fields. In total, we get 10 distributed video fields from 4 original full frames.
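The cadence just described can be sketched in code. This is an illustrative toy with made-up field labels ('o' for a frame's odd field, 'e' for its even field), following the pattern in the text: frames in even positions contribute two fields, frames in odd positions contribute three, with the odd field shown twice.

```python
# Toy sketch of the 3:2 pulldown cadence: 4 film frames become 10 interlaced
# fields. Labels are invented for illustration: 'o' = odd field, 'e' = even.

def pulldown_32(frames):
    fields = []
    for i, frame in enumerate(frames):
        count = 2 if i % 2 == 0 else 3  # alternate 2-field and 3-field groups
        parity = ['o', 'e', 'o']        # a third field repeats the odd field
        fields.extend(frame + parity[j] for j in range(count))
    return fields

fields = pulldown_32(['A', 'B', 'C', 'D'])
print(fields)  # ['Ao', 'Ae', 'Bo', 'Be', 'Bo', 'Co', 'Ce', 'Do', 'De', 'Do']
```

Weaving consecutive pairs of these fields back into full frames is what produces the mixed-source "Frankenframes" on progressive displays.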
This works well when displayed on an interlaced screen (such as a CRT TV) at about 60 fields per second, since the fields are never shown together. But the same signal looks horrible on displays that do not support fields and must weave them into 30 full frames, as in the rightmost column of the illustration above. The reason is that every third and fourth frame is stitched together from two different original frames, producing what I call a "Frankenframe". It is especially ugly during fast motion, when neighboring frames differ significantly.
So pulldown looks elegant, but it is not a universal solution either. What then? Is there no ideal option? As it turns out, there is, and the solution is deceptively simple!
When displaying: G-Sync, FreeSync and capping the frame rate
Instead of fighting a fixed refresh rate, it is of course much better to use a variable refresh rate that always stays in sync with the frame rate. That is exactly what Nvidia G-Sync (http://www.geforce.com/hardware/technology/g-sync) and AMD FreeSync do. G-Sync is a module built into monitors that lets them synchronize to the GPU's output instead of forcing the GPU to synchronize to the monitor, while FreeSync achieves the same goal without a module. These are genuinely revolutionary technologies that eliminate the need for telecine, and all variable-frame-rate content, such as games and web animations, looks much smoother.
Unfortunately, both G-Sync and FreeSync are relatively new technologies and not yet widespread, so if you are a web developer making animations for websites or applications and cannot afford a full 60 FPS, it is best to cap the frame rate at a value that divides the refresh rate evenly; in almost all cases the best cap is 30 FPS.
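Picking such a cap is simple divisor arithmetic. The helper below is just a sketch for finding frame rates that divide a given refresh rate without remainder, so every frame is held for the same whole number of refreshes.

```python
# Sketch: frame-rate caps that divide a refresh rate evenly, so every frame
# is held for the same whole number of refreshes and no jitter appears.

def even_caps(refresh_hz):
    """All frame rates that divide `refresh_hz` without remainder."""
    return [refresh_hz // n for n in range(1, refresh_hz + 1)
            if refresh_hz % n == 0]

print(even_caps(60))  # [60, 30, 20, 15, 12, 10, 6, 5, 4, 3, 2, 1]
```

Note that 24 FPS is not on the list for a 60 Hz display, which is exactly why 24 FPS film needs pulldown or interpolation there.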
Conclusion and next steps
So how do we strike a decent balance between all the desired properties (minimal motion blur, minimal flicker, a constant frame rate, good motion rendition and good compatibility with all displays) without overburdening the GPU and the display? Yes, ultra-high frame rates can reduce motion blur, but at a high cost. The answer is clear, and after reading this article you should know it: 60 FPS.
Now that you know better, do your best to run all animated content at 60 frames per second.
a) If you are a web developer
Go to jankfree.org, where the Chrome developers collect the best resources on making all your applications and animations flawlessly smooth. If you only have time for one article, read Paul Lewis's excellent The Runtime Performance Checklist.
b) If you are an Android developer
Check out "Best Practices for Performance" in the official Android Training section, where we have collected a list of the most important factors, bottlenecks and optimization tricks.
c) If you work in the movie industry
Record all content at 60 FPS or, even better, at 120 FPS, so it can be downsampled to 60 FPS, 30 FPS and 24 FPS when needed (unfortunately, to also support 50 FPS and 25 FPS (PAL), you would have to raise the capture rate to 600 FPS). Play all content at 60 FPS and make no apologies for the "soap opera effect". The revolution will take time, but it will happen.
d) For everyone else
Demand 60 FPS for any moving pictures on a screen, and if anyone asks why, send them this article.