LCD TV Input Lag Comparison Essay


Introduction - What is Input Lag?


Input lag describes the delay between the output from a graphics card and the image which is displayed on the screen you are using. For LCD screens this should not be confused with pixel response time, which describes the speed at which a pixel can change from one orientation to another. Pixel response times impact aspects such as motion blur and ghosting in moving images. Input lag, on the other hand, is a delay between what is sent to the monitor and what you actually see. This matters particularly in gaming: if the screen is lagging at all it can have adverse effects in first person shooters and similar games where every millisecond counts.

 

The level of lag really depends on the TFT display and is controlled by many signal processing factors including, but not limited to, the internal electronics and scaling chips. Some manufacturers even take measures to help reduce this, providing modes which bypass scaler chips and options which reduce the input lag. These are often reserved for gamer-orientated screens but the results can be quite noticeable in some cases.

 

We are making some improvements to the way in which we record input lag here, prompted by reader feedback and by improvements that have been made in recent years to testing methods. We're always keen to improve our reviews and testing methodology and so I hope this comes as positive news to our readers. We have also spoken at length to Thomas Thiemann, who has carried out extensive studies of input lag, including a well-known and regularly referenced article over at Prad.de. He is also the man behind the SMTT tool which some readers may be familiar with. We will discuss all of this throughout this article.

 

 


Input Lag Measurement Techniques

 


 

We firstly wanted to give an overview of some of the different techniques commonly used by end users and review sites to test input lag. These methods have changed somewhat in recent years so here is a summary of them all.

 


The Stopwatch Program

 


Above: an example stopwatch program, produced by Flatpanelshd.com

 

Traditionally input lag was widely measured by hooking up a CRT screen to the same graphics card and PC as the TFT display. By cloning the graphics card output, the user could carry out a comparative test of the output of the CRT vs. the output of the TFT. The test assumes that a CRT shows no lag on top of the output from the graphics card, which is vital for those wanting to play fast games where reaction times are key. This is also what many users are used to, having come from older CRT displays. Many high end gamers still use CRTs for their high refresh rates and frame rates, so the move to a TFT can be worrying, especially when you start throwing in a conversation about lag of the output image.

 

By running the screens side by side in clone mode like this, you can often see that the TFT lags behind the CRT. This is sometimes noticeable in practice even if you clone a game or move windows around your screen, but stopwatch programs have been used for many years to display a synchronised, rapidly changing output on both screens so that the difference can be recorded and quantified as a figure in milliseconds. High shutter speed photographs can then be taken to show just how much the TFT lags compared with the CRT.

 


Above: another example stopwatch program, Xnote stopwatch

 

This stopwatch method has been used for many years by many review websites and end users. It's easy to set up, doesn't cost anything and allows a reasonable comparative view of a CRT output vs. a TFT output. It can also be useful for providing a comparison between different models over time.

 

The method is not 100% accurate however, and there are areas of inaccuracy inherent to it. Some stopwatch programs are based on Flash, which can introduce issues with frame rate and update rate support, especially when viewed from an internet source in a browser. All basic stopwatch programs can introduce a degree of error if vsync is active and because of the 60Hz native refresh rate used for the 2D desktop. There has never been a defined standard for measuring input lag, so this approach has been used for a long time and widely accepted as a decent enough representation of what a user may experience.

 

I will say that this method has been used for many years by many sources and, although there is likely a varying degree of error introduced, it can still allow a reasonable comparison between displays. Classification of the lag into low, medium and high, for instance, is possible and the method can help give you an idea of the relative output of a TFT compared with a CRT. It's an indication though, as opposed to a precise measurement. More advanced and reliable methods are of course preferred where possible. We will talk about the limitations of these kinds of programs a little later on in this article, as well as in the interview with Thomas Thiemann.
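To make the evaluation step concrete, here is a minimal sketch (in Python, with made-up readings) of how a lag figure is typically derived from such photographs: the CRT is assumed to be lag-free, so the difference between the two displayed stopwatch values is taken as the TFT's lag at that instant. As discussed above, averaging many photos cannot remove the systematic errors inherent to this method.

    # Minimal sketch of how stopwatch photos are evaluated (hypothetical readings).
    # The CRT is assumed to be lag-free, so the difference between the two values
    # visible in one photo is taken as the TFT's apparent lag at that moment.

    def lag_from_photo(crt_ms, tft_ms):
        # Return the apparent lag in milliseconds for one photograph.
        return crt_ms - tft_ms

    # Example readings from a series of high shutter speed photos (made-up values).
    photos = [(5234, 5218), (6120, 6101), (7045, 7031)]
    lags = [lag_from_photo(crt, tft) for crt, tft in photos]
    print("per-photo lag (ms):", lags)
    print("average lag (ms): %.1f" % (sum(lags) / len(lags)))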

 

 


SMTT - An Improved Stopwatch Process

 

 

SMTT stands for 'Small Monitor Test Tool' and is a program which has been produced by Thomas Thiemann. SMTT has been around for some time, but Thomas has just completed a full refresh of the tool with many improvements in the new SMTT v2.0 which we will talk about in a moment.

 

Thomas has carried out a lot of research into input lag testing methodology and readers may well have read his excellent in-depth article over at Prad.de, in which various methods were tested and compared, including stopwatch programs, SMTT v1 and more advanced oscilloscope + photosensor methods.

 


Above: a screen shot of the timer test from SMTT v1.0

 

The SMTT tool redefines and improves the stopwatch method and helps to overcome some of the potential issues explained above. It runs without vsync active and is capable of very high frame rates of more than 1000 fps. The program displays multiple stopwatches on the screen at once, so the refresh behaviour of each screen can be recognised and accounted for, which removes a large part of the error associated with a single stopwatch.

 

There were some drawbacks to this version of SMTT however, including hard to read output in some cases, which could lead to interpretation errors, and compatibility issues with DirectX and Windows platforms. All of these have been addressed in SMTT v2.0, which we will look at now.

 

 

 


SMTT v2.0 - Updates and Improvements

 

 

SMTT 2.0 has been redeveloped from scratch. With the knowledge acquired from his own input lag studies and from the previous development of SMTT v1, including all its strengths and weaknesses, Thomas Thiemann replaced the old DirectX 9 based code with a modular and future-oriented DirectX 11 based approach that requires only DirectX 10.0 level hardware and provides a solid basis for future updates.

Thanks to these optimisations, SMTT 2.0 does not just offer insubstantial, marketing-friendly, forward-looking features you can print on a box. It genuinely improves display performance and accuracy at the same time, providing some real benefits to the user right now.

 

The key feature of SMTT 2.0 is still the input lag measurement, which you will find in the software as the “High Precision Counter” test. This test has not just been updated by the already mentioned transition to DirectX 11 but also by a new display technique called “I.R.O.N.”, which stands for Improved Readability Of Numbers. This new display technique adds regular column switches to the on-screen output to avoid any overlapping of the displayed values, which reduces the time needed in the evaluation process and increases the accuracy of the readings.

 

 

In addition to the input lag testing element of the tool, SMTT also provides other display tests which are useful.

  • The “Deep Color Test” pattern has been redesigned to assist the human eye in perceiving the very fine contrasts that are displayed on screen.

  • The Deep Color Map, a 10 bit per colour capable multi-colour transition, shows flawless transitions in all possible directions, whereas earlier versions of SMTT suffered from a diagonal restriction in some cases. As with the Deep Color Test, the output is automatically limited to 8 bits per colour if your system is unable to display 10 bit colour depth.

  • Customizable colour ramps can be defined in the Gradients section of SMTT. Instead of offering just a few presets, it is up to the user to select the start and end colours of a transition as well as the number of steps within it. So if your monitor has a display issue with certain colour shades you can focus the displayed colour ramp on the corresponding shades.

  • Scaling tests that focus on typical HD resolutions are new to SMTT 2.0. The included test patterns should help to determine whether Full HD (1080p) or 720p signals are displayed with their 16:9 aspect ratio and without any cut-offs, even when zoomed in to fill the whole area of a monitor that offers a higher resolution than the signal being displayed.

Last but not least, the whole user interface has been redesigned to offer the best ease of use possible. Intuitive user controls, non-blocking tool-tips and live previews supersede the manual most of the time. SMTT 2.0 is designed to support Windows Vista, Windows 7, Windows Server 2008 and Windows Server 2008 R2. Because it uses shaders that require DirectX 10.0, SMTT 2.0 does not support DirectX 9-only graphics cards or Windows XP. SMTT v2 is a user-oriented, modern test software with lots of technical improvements that ease its use and enhance its accuracy.

 

We will look at the improvements made in v2.0, along with more detail about why it is superior to traditional stopwatch programs, later on in this article as well as in the interview with Thomas Thiemann.

 

 

 


Other Methods - Oscilloscopes

 

Some websites take this whole area one step further and even use an oscilloscope and photosensor to measure the input lag of a display. This is of course an even more precise measurement and can help you show the true image lag along with the typical response times of a pixel transition. This is then used to give you both the overall experienced 'lag' of the image and the lag specifically between the electronics and the pixel change instruction (the pure signal processing time). It is the total of the signal processing lag and the pixel response time which gives you a measurement of the perceived and experienced delay of a TFT compared with a CRT. We do not have access to such a method and the costs are extremely high for such devices. If you are particularly bothered about input lag then I would encourage you to compare results between sources and refer to other review sites as well where methods like this are used.

 


Above: Tektronix DSA71254 oscilloscope used in Thomas' input lag studies

 

Keep in mind that even with high-end measurement equipment there is no standardised measurement method that exactly defines what has to be measured to be called 'input lag' or 'display lag'. There is no rule that forces testers to include or to separate the response time from the raw processing time. Don't fool yourself by taking measurements that show the raw processing time without response time and believing that this is the time after which you see a new picture in practice. Be careful when taking these measurements that the definition of the 'input lag' figure quoted is clear. Results using these methods may also differ depending on the measurement equipment used, especially the optical sensors.

 

 


In Summary - A New Input Lag Testing Method

 

The SMTT stopwatch method will give a more accurate indication of a screen's input lag than older stopwatch methods. It offers many advantages to overcome the errors associated with older methods, and v2.0 has made some great improvements compared with earlier versions. The results from our tests will measure the delay between the image content visibly displayed on each screen and should give you a good indication of the perceived and experienced lag between a TFT and a CRT display.

 


Above: input lag testing methods compared by Thomas Thiemann

 

As part of Thomas Thiemann's studies he tested input lag measurements across various different techniques, and the results are shown above as an example from the NEC LCD2690WUXi monitor. This provides the results recorded using an oscilloscope method, SMTT v2, SMTT v1 and then traditional stopwatch methods both with V-sync disabled and enabled. As you can see, SMTT 2.0 provides a very good level of accuracy and has made some noticeable improvements over v1.0 as well. We don't want to over-praise SMTT though, as a well executed oscilloscope measurement with high end equipment is superior to SMTT and any other photo-based test. However, SMTT 2.0 comes very close to the oscilloscope results if you add the response time measurement to the signal processing time. It is much better than any plain stopwatch test or a "time code" (which is nothing more than a pre-recorded single stopwatch), as these comparisons show and for the reasons discussed in this article.

 

 

For our reviews we have also introduced a broader classification system for these results to help account for any remaining error in the method and classify each screen into one of the following levels, illustrated with a short sketch after the list:

  • Class 1) A lag of less than 16ms / less than 1 frame - should be fine for gamers, even at high levels

  • Class 2) A lag of 16 - 32ms / one to two frames - moderate lag but should be fine for many gamers

  • Class 3) A lag of more than 32ms / more than 2 frames - some noticeable lag in daily usage, not suitable for high end gaming
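For illustration, the small sketch below (Python, with a hypothetical helper and example values only) shows how a measured lag figure maps onto these classes, assuming a 60Hz signal where one frame lasts roughly 16.67ms:

    # Hypothetical helper mapping a measured lag (in ms) to the classes listed above,
    # assuming a 60 Hz signal where one frame lasts roughly 16.67 ms.

    def lag_class(lag_ms):
        if lag_ms < 16:
            return "Class 1: under 1 frame - fine even for high-end gaming"
        elif lag_ms <= 32:
            return "Class 2: 1-2 frames - moderate lag, fine for many gamers"
        else:
            return "Class 3: over 2 frames - noticeable, not suited to high-end gaming"

    for measured in (9.5, 24.0, 41.2):   # example measurements, not real results
        print(measured, "->", lag_class(measured))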

 

 


Interview with the Author of SMTT

 

TFT Central: What is SMTT designed to do?

 

T. Thiemann: SMTT is a multipurpose monitor test tool. The combination of included test screens and interactive tests should help end users and editorial writers to rate the display quality of a monitor, including input lag measurements, scaling, colour differentiation and colour resolution for 8 bit and 10 bit outputs.

 

The HPC (High Precision Counter) test to determine the input lag, also known as display lag, outperforms any other plain stopwatch application available and produces almost oscilloscope-grade results while keeping the costs at a reasonable level. Therefore it does not only help reviewers to rate a monitor; gamers can use it as well to rate theirs. All that is needed is a CRT and a camera with an adjustable fast shutter speed.

 

The Gradients help to rate the basic colour reproduction and differentiation in all colour shades. If your monitor is prone to clipping colours at the upper or lower end of a gradient or if banding within linear colour ramps is an issue you will find it with ease. You may also set the part of the ramp to be displayed by selecting the starting and end colour of the gradient on screen to magnify the area of interest.

 

The Deep Color Test is, as far as I know, still unique. I am one of the happy people that have access to NVIDIA Quadro graphics cards which are able to reproduce 10 bits per colour if you have a proper display connected. But there is no test pattern available, neither in the driver nor in the software deployed with the monitor. Displaying a plain image is always risky, as the software may be limited to 8 bit colour processing, or the colour management of the software or operating system may cause strange colour shifts and other effects like colour clipping and banding which may reduce the number of colours that are finally displayed on screen. All of these problems are circumvented by accessing the output directly. The deep colour test pattern is easy to evaluate. You won’t need more than five to ten seconds to find out whether your display shows an 8 bit output or a 10 bit output.

 

And there are the new scaling test screens that offer some nifty little details to assist reviewers of multimedia monitors and TV sets in rating the scaling performance. When I had to test scaling performance myself I wished for a clear indication of the cut off pixels. Typical test screens offered, for example, white arrows without a clear mark at their tip. Displayed at the outer pixels of the screen it was often hard to say whether the first pixel at the arrowhead was still displayed or already cut off. The new scaling test patterns do not suffer from this limitation.

 

 

TFT Central: Why is SMTT 2.0 better than other stopwatch programs?

 

T. Thiemann: There are several reasons. First of all, every other approach that measures input lag with one stopwatch or time code on screen assumes that two displays connected to the same graphics card and set up to run in clone mode are totally synchronized. That means they assume that the vertical sync signal which introduces every new frame is sent at the same time to both monitors and that both monitors run at the very same frequency.

 

During my studies I tested this assumption as one of the first things and realized that it is absolutely wrong. Connecting a TFT via digital DVI-D and a CRT via analogue D-Sub cabling to a graphics card caused the monitors to have unsynchronized vertical sync pulses and slightly different frequencies, depending on the graphics card used.

 

Using a high end oscilloscope, like the Tektronix DSA71254 that I used, offers the ability to trigger on the unique bit stream inside the digital TMDS (Transition-Minimized Differential Signaling) 8b/10b coded signal that represents the v-sync pulse. If both signals were totally synchronized, both v-sync pulses should be transferred at the same time or with negligible delay. But that’s not the case. If you use the very same hardware and just unplug the CRT and reconnect it to the computer, the vertical sync pulse is set to a new random point in time. This may coincide with the v-sync within the digital signal or it may differ by any amount within a range of zero to 16.6 ms.

 

 


Figure 1 (left): Analogue v-sync 5.2 ms ahead of digital v-sync. Figure 2 (right): Analogue v-sync 3.1 ms delayed after reconnection. Click for larger versions.

 

As long as the frequencies are identical you can randomize the v-sync lag by reconnecting the monitor. Depending on the drivers it may even be enough to disable the secondary display and enable it again. Taking this into account for display lag measurements, you have the first reason why measurements with single stopwatches may differ by up to 16.6 ms: it just depends on the time at which the monitor was activated.

 

But the frequencies may be slightly off as well. You set up both monitors to run at 60 Hz as usual, but the frequencies are only close to 60 Hz. In these cases the relative positions of the analogue v-sync and digital v-sync differ slightly from frame to frame. Depending on the actual difference it may take several seconds or even minutes until the v-sync signals pass each other for the next time. In the example the difference is 0.02 Hz. With every frame the temporal difference between the v-sync pulses grows, as the faster signal advances by 1/3001 of a single frame. It takes about 50 s until the higher frequency signal is so far ahead that it reaches the next frame. Of course the signal is not travelling to the future – it has to display the same frame twice, causing the advance to be reduced by 16.6 ms.
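As an aside, the arithmetic in this example can be reproduced in a few lines of Python, using the frame rates shown in figures 3 and 4 below (this is purely illustrative and not part of SMTT):

    # Reproducing the drift arithmetic from the example above.
    digital_hz = 60.02    # frame rate measured in the digital (DVI) signal
    analogue_hz = 60.04   # frame rate measured in the analogue (D-Sub) signal

    diff_hz = analogue_hz - digital_hz            # 0.02 Hz
    advance_per_frame = diff_hz / digital_hz      # fraction of a frame gained per frame
    beat_period_s = 1.0 / diff_hz                 # time until the v-sync pulses realign

    print("advance per frame: 1/%.0f of a frame" % (1 / advance_per_frame))   # ~1/3001
    print("time to drift by one full frame: %.0f s" % beat_period_s)          # ~50 s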

 


 

Figure 3 (left): Frame rate within the digital signal: 60.02 Hz. Figure 4 (right): Frame rate within the analogue signal: 60.04 Hz. Click for larger versions.


 

The point of this is that the delay shifts over time. If you take your photos at the beginning there will be almost no delay, if you wait 20 s there will be a moderate delay and if you take your photo at the end of such a period you will encounter the maximum delay. But this has nothing to do with the display lag. It’s caused by the graphics card, not by the internal signal processing of the monitor. The signals presented before were both taken from the input side of the monitors.

 

So the most important assumption behind a plain single stopwatch based measurement has been disproved. It does not matter where you place the stopwatch on screen: at the top, in the middle or at the bottom. It does not matter what time source you use: a stopwatch application, a time code from a video or whatever. When the first monitor starts updating its content the other monitor will most likely be updating some other part of the screen. As screen updates are done pixel by pixel, line by line from the upper left hand corner to the lower right, one of the attached monitors may be updating the first few lines of the screen while the other is updating some lines in the lower half.

 

This sets up to three different update areas on one screen against the same number of areas on the second screen. Depending on the position of the single stopwatch and the exact point in time the photo is taken, you will measure one of the three possible results. The outcome is almost totally random, but affected by the speed of your camera and the graphics card used.

 

Figure 5: Opposing update areas with v-sync activated.

 

This example is taken with the old version of SMTT just for display purposes. Consider each line to represent the possible position of a single stopwatch on screen.

 

In the area marked with a green “A” you compare the updated area of the CRT with the updated area of the TFT. The values are identical, as vertical sync forces one screen buffer to be kept as long as needed until the screens are completely updated. In this case it does not mean that the TFT has no input lag; it just means that the input lag is not “several frames” long.

 

In the area marked with the red “B” you compare the old screen content of the CRT with the already updated new content of the TFT. In this case the TFT seems to be ahead of the CRT! I can assure you that the CRT has been tested for input lag and does not have any appreciable input lag that could cause this delay. It is simply the temporal difference of the screen refresh resulting from the asynchronous v-sync pulses.

 

Area “C” is almost like area “A”, except that it compares two old frames. The added thick yellow bars represent the actual position of the screen update. These positions change over time; they are of course not fixed. The figure is just an example for a specific point in time.

 

Now consider placing your plain stopwatch application at the top of the monitor: it will show no lag at the point in time the example deals with. But what if your stopwatch is in the middle of the screen, in area “B”? You will see 16 ms of “negative lag”. That is of course impossible. If this TFT had 16 ms of true input lag it would show 0 ms in the middle of the screen. This would make the tester feel much more comfortable but would still be wrong.

 

In fact it does not matter where you place your stopwatch. The results will not represent the input lag of the monitor but signal delay + input lag + response time + x. That’s one of the reasons why SMTT displays many counters on screen (without v-sync): No matter at which position the monitor is updating its content it will always present the most up-to-date value that was available at that position of the update process. So the missing synchronization is a non-issue using SMTT.

 

Typical stopwatches run as plain applications on the desktop or even in the browser, using Flash for their output. Flash has a very harsh limitation in terms of the speed at which the code is processed: it is run once per frame of the output. The problem is that the maximum frame rate of Flash is limited. The highest frame rate that I could reproduce with Flash was 120 frames per second. Typical Flash based programs run at much lower frequencies; some browsers limit Flash to 60 Hz, which is reasonable for normal Flash applications and a typical 60 Hz screen refresh rate. So the maximum theoretical precision of a Flash based stopwatch is limited to 1/120 s. Displaying three digits after full seconds implies 1/1000 s, or single milliseconds. Offering an accuracy that is 8 times worse than what is displayed is somewhat strange, but no one seems to care.
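The precision argument can be checked with a couple of lines of Python (illustrative only, using the 120 fps ceiling mentioned above):

    # A Flash stopwatch updating at most 120 times per second cannot resolve single
    # milliseconds, even though it prints three digits after the seconds.
    flash_max_fps = 120
    update_interval_ms = 1000 / flash_max_fps   # smallest step the code can actually take
    displayed_resolution_ms = 1                 # what the on-screen digits imply

    print("real update interval: %.2f ms" % update_interval_ms)          # ~8.33 ms
    print("displayed resolution is %.0fx finer than the update interval"
          % (update_interval_ms / displayed_resolution_ms))              # ~8x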

 

Even if you do not use Flash, most stopwatches are bound to the 2D refresh rate of the desktop. Well, to be more precise: all are bound to the 60 Hz of your monitor for their output, but many of them may also have their precision bound to the same update rate. It does not matter how great your programming skills are: one value on screen will be updated only once during every screen update, which takes place every 16.67 ms if you are running a monitor at 60 Hz. You are already lucky if this particular value is up to date. During my tests I realized that the Windows desktop seems to use some sort of v-sync that can’t be disabled. Remember that the V-Sync setting in the driver of your graphics card is meant for 3D only.

 

I was once told to use a stopwatch that used a font which looked like an old fashioned seven segment display. Even little things like a bad font may cause problems. At that time I tested some monitors that had such a relaxed response time that two subsequent frames overlapped. I noticed that the values I wrote down clustered around 8 or 9 as the last digit.

Figure 6: Accumulation of values using improper fonts.

 

 

The given example shows the digit “4” overlapping with all other possible values. As you can see there is just one overlap (the underscore) that does not look like a valid digit. This is an optical effect. The CRT won’t show the overlapping but the TFT will. Using such digits on a single stopwatch with a fixed position will add some randomness to the lag figures through wrong readings.

 

And there are more traps to fall into. Did you know that the timer inside every operating system which counts the elapsed milliseconds is not updated every millisecond? Using Microsoft's Sysinternals ClockRes you can check the timer resolution of your system. Usually the resolution of this timer is somewhere between 10 ms and 16 ms.

 

Figure 7: System clock resolution displayed by Sysinternals ClockRes

 


So using the so-called “system ticks” as the time reference for a stopwatch is fine as long as you do not need a temporal resolution of milliseconds. SMTT uses high precision timers that offer much higher precision than needed and are updated often enough to resolve milliseconds.
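This difference between coarse system ticks and a high precision counter is easy to see for yourself. The rough Python sketch below is not SMTT code, just an illustration of the same idea; the results depend on the operating system and hardware:

    # Smallest observable step of a coarse clock vs a high precision counter.
    import time

    def observed_step(clock, samples=200_000):
        # Smallest non-zero increment seen between successive reads of 'clock'.
        smallest = float("inf")
        last = clock()
        for _ in range(samples):
            now = clock()
            if now != last:
                smallest = min(smallest, now - last)
                last = now
        return smallest

    print("time.time() smallest step:  %.6f s" % observed_step(time.time))
    print("perf_counter smallest step: %.9f s" % observed_step(time.perf_counter))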

 

So with a plain stopwatch you get a single counter somewhere on screen whose output is updated every 16 milliseconds during the screen refresh, showing content that has been stored in the screen buffer for up to 16 milliseconds and that may have been calculated with a temporal precision of another 16 milliseconds depending on the code used, maybe based on system ticks with a resolution of about 16 ms. That is a lot of potential errors which can sum up to a total error you can’t get rid of by taking the average of your measurements. SMTT updates its high precision counters with every frame while running at several thousand frames per second. It is of course impossible to display more than 60 frames per second on a screen that runs at 60 Hz, but each picture displayed on your monitor as a result of the SMTT output is a patchwork that consists of approximately 32 pieces from different frames, assuming SMTT to run at 2000 fps. Such high frame rates are quite usual for SMTT 2.0.

 

SMTT does not need to take control of the picture processing time inside the graphics card. All that is needed is a regular, high-speed update of the output buffer that is accessed by both monitors. While the content of the buffer is being transferred to the monitors it is still being updated. What would cause so-called tearing in a game is here used to display many values line by line during one update process. All you have to do is disable v-sync for 3D output. High precision counters, optimized high speed code and several thousand updates per second aim to guarantee true millisecond precision on screen.

 

On the other hand you have to consider that Microsoft Windows is not a “real-time operating system”. There is no guarantee that a given task will be performed within a fixed period of time. So SMTT may suffer from slight fluctuations if there are other tasks that interrupt the processing. But if you take a look at a picture that has been taken from the SMTT HPC test you will recognize that the values are updated every single millisecond almost every time. So all you have to do is find and compare the most up-to-date values on both screens. These are most likely not at the same position on the screen, but that is intentional. You compare the last-displayed values, which represent the lag difference between the monitors, instead of additional lags that are caused by asynchronous graphics outputs, inaccurate programming or vertical sync offsets.
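To illustrate the evaluation step being described, the short Python sketch below uses made-up counter readings: from a photo of such a patchwork you pick the newest readable value on each screen, and the difference between them is the lag figure for that photo:

    # Sketch of the evaluation step (hypothetical counter readings in milliseconds).
    crt_values = [8123, 8124, 8125, 8126, 8127]   # counters readable on the CRT photo
    tft_values = [8109, 8110, 8111, 8112, 8113]   # counters readable on the TFT photo

    lag_ms = max(crt_values) - max(tft_values)    # compare the newest value on each screen
    print("lag for this photo: %d ms" % lag_ms)   # 14 ms in this made-up example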

 

 

TFT Central: What’s been improved in v2 from v1?

 

T. Thiemann: Well, I had to start from scratch as the old DirectX 9 based code could not be updated to DirectX 10 or DirectX 11. The last versions of SMTT v1 featured an HPC test that was strictly DirectX 9.0 based and used sprites for text rendering, while the old Deep Colour tests used DirectX 10, as colour depths of 10 bits per component are not supported by DirectX 9. That was hard to maintain and prone to cause errors in the long run. A simple switch to DirectX 10 was impossible because sprites are not supported in DirectX 10. I was stuck between two features that could not be run from the same code basis. Now the code is written for DirectX 11 but uses DirectX 10.0 features only. The sprites have been replaced with my own bitmap font, which improves the display performance.

 

SMTT 2.0 no longer has any known memory leaks, so you can start the tests as often as you like without any crashes. The graphical user interface has been redesigned to offer the highest possible ease of use and the precision needed for adjustments, without being too complex or looking overcrowded. There is a setup process that checks for software requirements and tries to resolve them if needed. The High Precision Counters are optimized to run at even higher frame rates than they used to and the accuracy has been improved with I.R.O.N. And there are the completely new scaling tests and the easier to use deep colour tests. Even a display issue with the deep colour map has been resolved.

 

Well, almost everything has been improved without sacrificing the key features. The software is still easy to handle and small, it does not “call home” and it does not collect your data. It is still a small monitor test tool.

 

 

TFT Central: Tell us more about I.R.O.N.?

 

T. Thiemann: I.R.O.N. stands for Improved Readability Of Numbers. This new display technique adds column switches every 16 ms to the output of the counters on screen to avoid any overlapping of the displayed values, which reduces the time needed in the evaluation process and increases the accuracy of the readings. SMTT v1 displayed two columns with identical values on both sides of the screen. It was meant to be symmetrical so that the reviewer could place his CRT on the left or the right side of the TFT. You could zoom in to the area where the monitor frames touched one another and still have all counters visible. Well, that’s the theory. The problem was the latency of the TFT displays. Several values overlapped, causing complicated readings that were prone to errors, reviewers “guessing” values or wasting a lot of time finding an up-to-date value.

 

I.R.O.N. avoids these overlapping values. Depending on the intensity of the text you can easily determine which values are new or old, and it is easy to find the most recent value on screen, the value that is needed to determine the input lag of the monitor. The result is improved speed for each evaluation and higher accuracy through improved readability.

 

 

TFT Central: What is coming up next?

 

T. Thiemann: There is a 32 bit version planned for Vista and Windows 7. The most time consuming work for the 32 bit version will be the setup process. The SMTT code is licensed by an American start-up with great experience in game development, which may incorporate SMTT core functionality and know-how into future game development platforms to reduce the overall lag you experience in computer games. SMTT is programmed on a modular basis and is designed to be extended in the future, so additional features are possible in upcoming versions of SMTT.

 



Conclusion

 

Hopefully this article has given you a good insight into how input lag can be measured and the various pros and cons of each method. I'd like to thank Thomas Thiemann for his input and thoughts on the matter and would recommend you also read his other studies over at Prad.de, which have already been linked to in this article. As the studies have shown, there are a lot of possible issues with traditional simple stopwatch methods which can result in a varying degree of error. This is even more apparent if you then try to compare results between different sources and different methods. SMTT v1.0 made some good improvements to the stopwatch method, although it had some limitations in use, mainly around readability and some possible interpretation errors. SMTT v2.0 is a great improvement, offering a host of updates to make measurement of input lag easier and more accurate, and also providing some new useful tools for testing your display. An oscilloscope can offer a higher level of accuracy if the method is well defined and the equipment is of a high standard. Be wary though of input lag figures which crop up without explanation of how they were obtained, as there is no defined standard for what input lag is when using these kinds of tools. Overall, as long as the signal processing time and pixel response time are accounted for, an oscilloscope should provide a very high level of accuracy and a good measurement of the lag experienced by the end user of the display.

 

Through his tests and the production of SMTT 2.0, Thomas has managed to create a very useful tool for measuring input lag without the need for very expensive equipment, while still providing a high level of accuracy even when compared with such methods. It is acknowledged that this tool provides the ideal photo-based method for measuring input lag, and so we will be using it in our tests from now on to help improve accuracy, in the absence of very expensive oscilloscope equipment.

 




Further Reading

 

SMTT - Thomas Thiemann, http://smtt.thomasthiemann.com
Input Lag Tests and Studies - Prad.de (Thomas Thiemann, translated version)
Input Lag Tests and Studies - Prad.de (Thomas Thiemann, original language version)

 

 

Introduction

If you ask a user what they look for when buying a monitor they may respond by saying “a good resolution”, “good colours” or “good image quality”. Some users, particularly gamers, are also quite particular about the motion performance of a monitor and will be looking for something that is responsive. As with other factors affecting the overall image quality, there are a lot of different aspects to consider here. Unfortunately for the consumer, a true picture of ‘responsiveness’ is never painted by manufacturer specifications. In this article we break through the confusion, taking a detailed look at the key factors affecting responsiveness.

Input lag

When considering the responsiveness of a monitor you must consider what the user feels when trying to interact with the monitor as well as what they see with their eyes. Input lag is all about the delay between the graphics card sending a frame to the monitor and the monitor displaying that frame. The basic component of input lag which affects the feel is referred to as the signal delay and is commonly measured in milliseconds. There are of course other sources of latency beyond simply this signal delay and not all of it comes from the display itself. This is covered in this excellent article by AnandTech, but we shall be focusing on just the monitor here. A lower input lag is advantageous because it leads to a snappier feeling when you interact with the display using your mouse or other controller.

Monitors will process the image in various ways before outputting it – some models do this more extensively than others. It is not too uncommon for higher end screens in particular to use internal scalers to handle non-native resolutions, which can add significant input lag. Sometimes the signal must pass through the scaler even if scaling is not required (i.e. running the monitor at its native resolution). Manufacturers will sometimes give PC monitors a dedicated mode which will bypass much of the signal processing; sometimes a dedicated ‘game preset’ or an ‘instant’ or ‘thru’ mode that can be activated through the OSD (On Screen Display). To measure the signal delay accurately requires specialist equipment such as an oscilloscope and photo diode. This will allow you to specifically determine the signal delay rather than the overall latency. Often when websites or users measure input lag they will be using a camera to capture apparent differences between a display of known input lag and their display of choice. This will be done using a stop clock or special software such as SMTT (Small Monitor Test Tool).

Such methods can give a reasonable rough representation of input lag, particularly if you have a good range of reference screens (known input lags) to work with. Because they rely on a visual interpretation of a display’s output, though, they are influenced by the pixel transitions themselves (response time) and not just the pure signal delay. It is important to differentiate between these two as the signal delay has a significant effect on how responsive a monitor feels whereas response times primarily affect how the monitor looks, as we focus on later. The pixel response affects what we like to call ‘visual latency’ rather than ‘felt latency’ but is often included in what some websites and users will refer to as input lag. Many people have become obsessed with comparing input lag values, even without appreciating the inherent inaccuracies of many of the figures they’re seeing. Sometimes users will sweat over a few milliseconds difference, the sort of difference that could be accounted for purely by the measurement method’s margin of error. However the figures have been derived, it’s important to appreciate that different people have different tolerances to input lag. Some would much prefer a screen with next to no input lag, whereas others are far more tolerant.

Refresh rate

Fixed refresh rate

The next factor worth considering affects how responsive the monitor feels and looks to the user: refresh rate. The vast majority of LCD monitors will run at a refresh rate of 60Hz under their native resolution. This means that up to 60 discrete frames of information can be displayed every second with a 16.67ms ‘gap’ between frames. This value can be altered to a degree, but the value must be pre-selected; a normal monitor can’t adjust its refresh rate on the fly so to speak. There are a select few LCD screens which run in their native resolutions at a refresh rate of 120Hz or sometimes higher (e.g. 144Hz). A 120Hz refresh rate allows the monitor to display twice as much information every second, outputting up to 120 discrete frames of information with an 8.33ms ‘gap’ between frames. The diagram below gives a visual demonstration of these differences.
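As a quick illustration of these figures, the short sketch below (Python, purely illustrative) computes the frame-to-frame ‘gap’ for the refresh rates mentioned:

    # Frame-to-frame 'gap' for the refresh rates discussed above.
    for refresh_hz in (60, 120, 144):
        print("%3d Hz -> a new frame every %.2f ms" % (refresh_hz, 1000 / refresh_hz))
    # 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 144 Hz -> ~6.94 ms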

In this diagram the 60Hz monitor shows a progression of a single frame, between Frame 1 (red blob) and Frame 2 (yellow blob). Over this same 16.67ms time period the 120Hz monitor has progressed two frames, displaying ‘Frame 2’ after 8.33ms and moving on to ‘Frame 3’ (green blob) by the end of the 16.67ms period. What this means in practice is that the 120Hz monitor is able to output content at up to 120 frames per second. At this frame rate each discrete frame is displayed for half the length of time compared to a 60Hz monitor running at 60fps. This reduces the level of blur and increases the visual fluidity of scenes, as we’ll come onto later. The monitor also responds twice as frequently to user input updates, such as mouse movements, which when combined with relatively low input lag gives the user superior feedback and a much more ‘connected’ feel. For these reasons such models are popular amongst gamers and also facilitate smoother active 3D playback. Models with even higher refresh rates should become increasingly widespread in the future, too – particularly when alternative display technologies such as OLED become mainstream.

Variable refresh rate (Nvidia G-SYNC and AMD FreeSync)

As you can hopefully appreciate from the above there is a very close relationship between refresh rate and frame rate – or the monitor and the rest of the system. It’s all well and good having a monitor with a high refresh rate, but to gain maximum benefit from it the frame rate needs to keep up with the refresh rate. Some users like to use ‘VSync’ to prevent the frame rate exceeding the refresh rate and to ensure that the GPU only sends new frames to a monitor when it’s ready to move onto its next refresh cycle. With the GPU holding frames in this way there is an inherent delay which adds to the overall input lag. This ‘penalty’ becomes less severe as the refresh rate increases but it still exists. With VSync enabled you also get a degree of stuttering on occasions where the frame rate falls below the monitor refresh rate. This leads to a situation where the screen has finished drawing a frame and should be moving onto the next frame, but the GPU isn’t ready to send it. The GPU therefore sends the first frame to the monitor again instead of sending a new one – so the monitor redraws the frame. To minimise latency and stuttering as much as possible you can disable VSync instead, which many gamers do. But when the frame rate doesn’t match the refresh rate, which is often the case, things are left to go out of sync. The monitor ends up displaying new frames in the middle of its refresh cycle. Because monitors typically refresh from top to bottom you end up with a new frame being displayed only on the top of the monitor, whereas the bottom of the monitor is still displaying the old frame. This gives a distinct and potentially distracting ‘tearing’ which is exactly what leads some users to turn to VSync. But there is an alternative solution created by Nvidia: G-SYNC.


Essentially what the little chip above does is dynamically adjust the refresh rate of the monitor to match, in real time, the frame rate of a game or other content. That way you gain the traditional benefits of having Vsync enabled with the benefits of Vsync disabled all at the same time. The chip can only be used on certain monitors, most of which will have it pre-installed in the factory. If you’d like to know more about the experience, check out our dedicated G-SYNC article which tells you more about the technology and the benefits it can bring alongside links to any relevant news pieces or reviews of G-SYNC models we’ve tested. AMD also has a variable refresh rate technology dubbed ‘FreeSync’ which doesn’t require the same sort of specialist hardware inside the monitor itself. The technology was initially demonstrated on a laptop using the native capabilities of its eDP (embedded DisplayPort). For desktop monitors the DisplayPort 1.2a and more recent DP specifications support variable refresh rates (referred to by VESA as ‘Adaptive-Sync’), via an optional extension. This has been pivotal to making AMD FreeSync a reality for the consumer. Newer iterations of the technology also work over HDMI on specific monitors. Adaptive-Sync can technically be used by other graphics processor manufacturers, but whether they choose to adopt it is another matter entirely. There are now a growing number of FreeSync monitors on the market, with many models listed on this page (click the ‘Monitors’ tab near the bottom). Our own site also has ever-expanding news coverage on the topic. Nvidia do not currently support this open standard on their GPUs and given their investment in G-SYNC it seems unlikely that they will support this for the time being.

Response time

The refresh rate clearly has a significant bearing on how responsive a display looks and feels to the user but it certainly isn’t the end of the story. In the previous ‘blob diagram’ you will recall there were ‘gaps’ between frames. On a CRT monitor these gaps are literally blank spaces where nothing is displayed on the screen – at 60Hz a CRT will simply flick from one frame to the next like clockwork every 16.66ms, displaying each frame very briefly. This is why, particularly at lower refresh rates, the user may notice a flickering as the monitor alternates between displaying the gap and displaying a frame of information. The vast majority of LCDs (and some other non-CRT technologies) use a technique called ‘sample and hold’ to display their images. This means that a frame (sample) is displayed to the user (held) for the entire duration of the ‘gap’, after which the next frame is sampled and then held.

Drawing the next frame isn’t instantaneous on an LCD, either. It is influenced by the pixel response time; the time taken to transition a pixel from one colour (state) to another. It doesn’t actually depend on the specific colour (e.g. red to blue vs. green to blue) but on the lightness or intensity of the shade. This is known as the greyscale or ‘grey’ value, running from the darkest shade (0% grey = black) to the lightest shade (100% grey = white). A transition from black to white, for example, will typically take a different length of time to a transition from 25% grey to white. Remember that in this instance grey simply refers to the intensity of the shade – it could in fact represent a colour such as dark blue (25% grey) or light blue (75% grey). It’s also worth noting that although the response time isn’t influenced by anything other than the grey values, some colour transitions may give more obvious trailing due to how receptive people are to particular colours.

Pixel response times are commonly quoted by manufacturers as ‘grey to grey’ values with figures such as 2ms or 5ms. Unfortunately there is no common measurement standard for this and, as explained above, not every pixel transition will occur at the same speed. Often the manufacturers will cherry pick their values so that they represent one of the most rapid pixel transitions a PC monitor will perform. Whilst some transitions may occur at the quoted speed, others might not happen anywhere near as quickly. The diagram below illustrates the difference between an 8ms pixel transition and one occurring twice as quickly (4ms). For clarity a standard sample and hold LCD with 60Hz refresh rate is used in this example.

The top row in the diagram shows a transition occurring between a red blob (frame 1) and a yellow blob (frame 2) at a response time of 8ms grey to grey. After 8ms the completed yellow blob is displayed for the remaining duration of that frame (an extra 8.67ms). The bottom row shows this same transition but at a response time of 4ms grey to grey. The completed yellow blob (frame 2) is shown after only 4ms and then held for the remaining 12.67ms of the frame. The shorter time spent in the transitional phase between red and yellow leads to what is essentially less trailing or ghosting.

In this example the transition occurs between one state (red) and another (yellow) and would go no further until a new transition is called for in the next frame. In reality rapid response times such as this are typically achieved on LCDs by using a pixel overdrive circuit external to the panel itself. This overdrive process is also known as RTC (Response Time Compensation) or grey to grey acceleration. Voltage surges are applied to ‘push’ the pixels into the desired state more rapidly – something that is very common on LCDs of all panel types. If the disparity between the native speed of a transition and the speed of the accelerated transition is great then it sometimes requires an aggressive voltage surge to achieve it. This will invariably lead to a situation where the transition won’t just stop at the desired endpoint but will actually ‘overshoot’. The consequences include visible artifacts (RTC errors) such as inverse ghosting and bright trails, shown below, which can actually be more distracting than regular trailing.

On modern TN monitors most grey to grey transitions occur at around 4-10ms without overdrive but can be pushed to as low as 2-3ms using moderate overdrive. Without overdrive IPS and PLS panels are more sluggish, giving response times during grey to grey transitions typically around 8-16ms. With moderate overdrive some grey to grey response times on IPS and PLS can fall to around 4-6ms which can significantly reduce trailing. Other transitions on IPS/PLS will remain closer to 10ms unless overdrive is extremely strong with accompanying RTC errors, however. On most VA panels grey to grey transitions are very sluggish and usually occur between 14ms and 30ms without overdrive. With moderate overdrive you can bring some of these transitions down to around 4ms whilst others will stubbornly remain well above 10ms. For all of these panel types balance is key. It is possible to drive down response times even further using more aggressive overdrive, but often the consequences of this (visible artifacts) outweigh the benefits. The image below gives an extreme example of the visual joys that can accompany the use of very aggressive pixel overdrive.



Sampling method

CRTs vs LCDs

Any long-term CRT users, particularly gamers, will recall that there was a distinctly different feel to gaming on a CRT. Modern LCDs have very rapid response times and high refresh rates which certainly help reduce trailing and perceived blur. But still there is something missing. Objects that remained sharp during brisk movements on the CRTs may seem relatively blurry on even the fastest LCDs. As mentioned in the opening paragraph of the previous section, there is a distinct difference between how CRTs and most non-CRT monitors (such as LCDs and hypothetical OLEDs) sample images. It is this that is the missing piece of the jigsaw and the major factor in perceived motion blur. LCDs typically adopt a ‘sample and hold’ (or ‘follow and hold’) approach to displaying an image whereby a frame (sample) is displayed to the user until the next frame needs to be drawn (hold). In contrast, CRTs use an ‘impulse-type’ approach whereby each frame is flicked on momentarily and then nothing is shown on the screen until the next frame is required. Let’s summarise these differences with another infamous ‘blob diagram’, this one being a sort of conglomeration of the two previous diagrams.

The typical LCD is always displaying information, whereas the CRT only displays information for very short periods of time. The sample and hold approach used by your typical LCD monitor has consequences for the perceived clarity of motion. When your eyes track movement on such a display they are fed a continuous stream of information and are continuously moving. Your eyes are at various different positions throughout the screen refresh. This results in perceived motion blur – a blur that would persist even if the pixels themselves were transitioning extremely quickly. Some studies suggest that response time actually only accounts for around 30% of perceived blur on a 60Hz monitor with 16ms response time (Pan et al. 2005). On models with faster pixel response times it accounts for even less of the perceived blur. Refresh rate also plays an interesting role in all of this. At a refresh rate of 60Hz the pixel response times are only really a limiting factor in terms of overall perceived blur if they are above about 8ms, which is half of the frame refresh cycle. On a 120Hz PC monitor you need pixel response times to fall below about 4ms for optimal performance, which is again half of the refresh cycle. This is why it is typically TN rather than IPS-type or VA matrices that make the best candidates for high refresh rates in LCD form and why OLEDs will give a lot of headroom in this area. It is also why you can see a difference on 120Hz+ TN models with adjustable overdrive settings and will want to take the acceleration as high as you can without introducing too many RTC artifacts.

Pixel responses aside, the significance of eye movement and refresh rate really cannot be overstated when it comes to motion blur. You will recall from earlier in the article that an increased refresh rate on an LCD improves the smoothness of motion because visual information is being fed to the user more rapidly. Trailing appears greatly reduced despite the pixel response behaviour typically remaining similar to when the monitor is running at 60Hz using the same overdrive settings. This increased smoothness is actually largely down to a decrease in perceived motion blur. Frames are being held for a much shorter duration and your eyes are being fed a greater number of distinct frames – as a result, your eye movements are reduced. But there is still a greater degree of eye movement and hence blur than on a CRT. On a CRT the information is flashed at you extremely briefly followed by no information (a blank screen). As a result, your eyes aren’t spending much time at all tracking motion and the perceived blur is significantly decreased.
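To put some rough numbers on this, a commonly used approximation (not taken from this article) is that the width of tracking-induced blur is roughly the tracking speed multiplied by how long each frame is held on screen. The short Python sketch below uses an assumed tracking speed of 960 pixels per second and assumes perfect pixel response, so it is only a back-of-the-envelope illustration:

    # Rough estimate of tracking-induced blur on a sample-and-hold display:
    # blur width ~= tracking speed x frame persistence (perfect pixel response assumed).
    speed_px_per_s = 960    # hypothetical object/eye tracking speed

    for label, hold_s in (("60 Hz sample-and-hold", 1 / 60),
                          ("120 Hz sample-and-hold", 1 / 120),
                          ("2 ms strobed backlight", 0.002)):
        blur_px = speed_px_per_s * hold_s
        print("%-24s ~%.1f px of perceived blur" % (label, blur_px))

This lines up with the qualitative point above: shortening the time each frame is held, whether by raising the refresh rate or by strobing, is what reduces the perceived blur.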

A simple demonstration

You can perform a quick demonstration of how eye movement influences perceived blur by looking at your mouse pointer on a sample and hold display. Stare at a fixed point on the screen while moving the mouse pointer across this point, moving the mouse at a moderate pace from side to side. Any blank area of the screen will give you a normal mouse cursor and an area to focus on. You should see a set of distinct mouse pointers as you move your mouse from side to side. Now allow your eyes to follow the mouse pointer and notice that you see a slightly blurry pointer rather than several distinct pointers. Don’t move the pointer too rapidly or do this too many times as you will probably make yourself dizzy. There are also some useful demonstrations of how influential eye movement is on this page. This is part of a broader collection of tests called ‘UFO Motion Tests’ which are designed to help users analyse the motion performance of their display.


PWM (Pulse Width Modulation) usage

PWM (Pulse Width Modulation) is a method used to modulate backlight brightness on some sample and hold LCDs and backlight or pixel brightness on some sample and hold OLEDs. There is an excellent article dedicated to this on TFT Central, well worth a read if you’re interested in learning more about it. Rather than using a varying direct current to modulate brightness, the PWM-controlled light source is rapidly ‘flicked’ on and off to achieve a given brightness. Some people are sensitive to this rapid flickering effect and can suffer from visual discomfort. The flickering also has repercussions for how ‘blur’ on moving objects is perceived on a monitor. Because the image essentially disappears very briefly when the PWM-regulated light source flicks off there can be visible fragmentation in the blur we perceive when viewing moving images. The fragmented blur is termed a PWM artifact. The video below gives a rough idea of this effect. It is obviously constrained by the limitations of the camera recording the video and the video output (particularly the low frame rate), but the PWM artifacts manifest themselves in a similar way in practice.

LightBoost and strobe backlights

Introducing strobe backlights

‘Normal’ LCDs and CRTs use a completely different sampling method, as explored previously. However, it is possible to modulate the backlight of an LCD monitor in such a way that it samples frames like a CRT and therefore provides a reduction in perceived motion blur. Scanning or strobe backlights are those that pulse ‘on’ and ‘off’ in much the same way as a CRT, allowing an LCD to display information to a user for only a fraction of a second every frame and display nothing at all for the remaining time. This impulse-type behaviour not only reduces motion blur by reducing the amount of time the eye spends moving, it also hides much of the pixel transition process – including overdrive artifacts that may be generated by aggressive grey to grey acceleration. One of the most popular systems used on LCD TVs is Sony ‘Motionflow’. Basic ‘Motionflow’ involves the use of MCFI (Motion-Compensated Frame Interpolation) technology whereby intermediate frames are created and inserted between real frames to increase refresh rate. ‘Motionflow XR’ combines this MCFI with a strobe backlight. ‘Motionflow Impulse’ uses a strobe backlight exclusively, without any sort of interpolation. Samsung uses an alternative called ‘Clear Motion Rate’ (CMR) that combines a strobe backlight on LCDs and strobe pixels on OLEDs with other motion enhancements. Panasonic also uses a strobe method they refer to as ‘Backlight Scanning’ (BLS) on some of their TVs.

Nvidia LightBoost – for PC users

Strobe backlight technologies such as these are readily used by LCD TV manufacturers, but until recently these have too much input latency for gaming. For PC monitor users there hasn’t been the same sort of adoption of such technologies, but there are some interesting developments along these lines. LightBoost is a low-latency strobe backlight technology developed by the well-known visual computing company called Nvidia. It is designed to complement Nvidia’s 3D Vision 2 stereoscopic system. The shutter glasses, which are an integral part of 3D Vision and any ‘active 3D’ system, have the left and right lenses alternately open and close so that each eye sees a different frame and a 3D picture emerges. LightBoost-compatible monitors are able to shut off their LED backlights in between frames and momentarily pulse them on at very high brightness to display each frame. The ‘on phase’ (or pulse) may last a couple of milliseconds, if that, and peaks at a brightness exceeding the monitor’s usual static 100% brightness. The off phase lasts for the remainder of the frame duration; until a new frame needs to be shown to the user and the next momentary brightness pulse occurs. Using this technology, where the backlight itself is acting as a shutter, allows the shutter glasses themselves to remain open for longer and let more light in – hence the primary purpose of the technology and the name itself.
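As a rough numerical illustration of these timings, the sketch below works through a 120Hz strobe. The 1.9ms pulse length is a hypothetical figure within the ‘couple of milliseconds’ range described above, not a measured LightBoost value:

```python
# Illustrative strobe timing at 120Hz. The pulse length is a hypothetical figure,
# chosen to sit within the "couple of milliseconds, if that" range described above.

refresh_hz = 120
frame_ms = 1000.0 / refresh_hz      # ~8.33 ms between refreshes at 120Hz
pulse_ms = 1.9                      # hypothetical 'on phase' (backlight pulse)
off_ms = frame_ms - pulse_ms        # the backlight stays dark for the remainder

print(f"Frame duration: {frame_ms:.2f} ms")
print(f"Backlight on:   {pulse_ms:.2f} ms ({100 * pulse_ms / frame_ms:.0f}% of the frame)")
print(f"Backlight off:  {off_ms:.2f} ms")
```

The point to take away is that the display spends the large majority of each frame showing nothing at all, which is precisely what gives the CRT-like reduction in perceived blur discussed below.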

Another advantage of this system is that the monitor is no longer ‘sampling’ like a regular LCD but rather like a CRT. Your eye movements are reduced due to the very short ‘on phase’. During its normal intended operation, where 3D content is being viewed with 3D Vision 2 glasses, crosstalk is reduced. But where things get really interesting is when you take away the shutter glasses entirely and start viewing 2D content. Because LightBoost is specifically designed for 3D viewing, using it for smoother (‘CRT-like’) 2D viewing can’t be done ‘officially’. However, it can be done very easily without any risk to your monitor or the rest of your system. An individual who goes by the pseudonym ‘ToastyX’ has developed a utility called ‘StrobeLight’ which allows a user to very simply toggle LightBoost on and off on compatible monitors without needing a 3D Vision 2 set or even an Nvidia graphics card.

As explained in this forum thread there are drawbacks to enabling LightBoost. Whilst the fluidity benefits are great if you’re running at a frame rate matching the refresh rate, there is rapid degradation in smoothness as the frame rate dips even slightly below this. In particular stuttering becomes much more pronounced as it isn’t masked by motion blur. The relative drop in apparent smoothness is far greater than for the equivalent frame rate drop with LightBoost disabled and some users will actually find disabling LightBoost preferable. In other words if you can’t maintain high frame rates ideally equal to the refresh rate then you can end up with an inferior rather than superior motion experience. Another key issue is that OSD control of the image is shut off and image quality is adversely affected – it is, after all, designed for a 3D viewing environment with active shutter glasses rather than direct 2D viewing. The image appears dimmer, more muted and colour balance is affected to varying degrees (depending on monitor model). There is also a mild flickering similar to what you would observe on a 120Hz CRT. There are individuals who are adversely affected by the PWM (Pulse Width Modulation) used on many LCD monitors but find this CRT-like flickering perfectly tolerable. There are other users who find the added fluidity of the image actually helps with eyestrain and related problems, whilst others dislike the CRT-like flickering which is of a different nature to PWM flickering. Some of the image imbalances mentioned can be partially addressed by adjusting values such as gamma and colour channels in the Nvidia Control Panel or Catalyst Control Centre. It also helps to set contrast as high as it will go without lighter shades becoming badly crushed or bleached, using an appropriate reference that will show when light shades start blending into bright white too readily. Image quality aside, the motion blur reduction that can be achieved by LightBoost during normal 2D viewing (at high frame rates) is reason enough for some people to use it. We’ll explore just how much difference it can make to motion blur shortly, but first we’ll take a quick look at some alternative computer-based technologies that follow similar principles.

Other PC monitor strobe backlight technologies

Enabling LightBoost makes a significant difference to the level of motion blur a user experiences. The activation process may seem a bit ‘hack-like’, but that’s because LightBoost is only officially endorsed for use as a 3D feature and is being exploited for its 2D blur-reduction benefits. Samsung’s now discontinued SA750 and SA950 series 120Hz monitors had similar functionality integrated into them in the form of their ‘Frame Sequential’ 3D mode. This set the backlight into a strobe mode that was intended for 3D viewing but could also be used for viewing in 2D with reduced motion blur. Some manufacturers have adopted strobe modes designed specifically for blur reduction during 2D viewing. This is advantageous as the image can be optimised for 2D viewing and activating the strobe function doesn’t have quite the same negative effect on colours as LightBoost does. As with LightBoost there is a reduction in perceived brightness and the monitor will ‘flicker’ like a high refresh rate CRT as the backlight strobes. You also get rapid degradation in the smoothness of motion and the appearance of quite noticeable stuttering if the frame rate drops much below the refresh rate of the monitor; the strobe must be closely synchronised with the monitor’s refresh rate to be effective in reducing motion blur.

EIZO, a manufacturer focused primarily on high-end monitors, has adopted strobe backlight technology designed for 2D viewing. The EIZO FDF2405W and FG2421 are two such monitors using this technology. These models employ stroboscopic backlights, using a process dubbed ‘Turbo 240’ on the gaming model, to help reduce motion blur and overcome some of the inherent responsiveness limitations of their VA LCD panels. Here some of the inherently slow VA pixel transitions can be partially hidden during the extended ‘backlight off’ periods. BenQ have also adopted a strobe backlight mode named simply ‘Motion Blur Reduction’, first seen on the XL2720Z, XL2411Z and XL2420Z 144Hz gaming monitors. This can be used in conjunction with a range of refresh rates which will help users who are unable to maintain a frame rate matching the refresh rate of the monitor (for example 144fps on a 144Hz model).


You may recall that we touched upon Nvidia’s G-SYNC earlier in the article as a variable refresh-rate technology designed to reduce latency, eliminate stutter and prevent tearing. If you have read our article on the topic you may have noticed that a ‘low-persistence mode’ was explicitly mentioned, which now has the official title ‘Ultra Low Motion Blur’ (ULMB). This is something that can be activated on all G-SYNC monitors. At the time of writing the technology is under development so we can’t confirm anything about its inner workings or the end result. It seems that Nvidia and the monitor manufacturers have noted the phenomenal interest in LightBoost from a 2D perspective, particularly amongst gamers, and are willing to implement officially endorsed strobe backlight modes. As with the manufacturer-specific strobe modes this will be specifically optimised for 2D rather than 3D viewing. A range of refresh rates will be supported, which is useful for users who can’t match the refresh rate with their frame rate. A 144Hz monitor with this ‘low-persistence mode’ could be run at 85Hz if a user can only maintain around 85fps, for example. At this stage it appears to be a feature that can be enabled on G-SYNC capable monitors instead of G-SYNC itself, rather than at the same time. As mentioned previously the backlight’s strobe frequency is closely linked to the refresh rate, and once you introduce a dynamic refresh rate into the equation things become a bit complicated. We’re sure that, as these technologies evolve, a combination of strobe backlight and variable refresh rate will be implemented.

Measuring motion blur – strobe vs. sample and hold

The static photography approach

If you are familiar with some of our earlier reviews you will also be familiar with a small tool called PixPerAn (Pixel Persistence Analyser) that can be used to analyse pixel responsiveness. It is also useful to help reinforce some of the earlier points about what a strobe or impulse-type display does and how this is different to a traditional sample and hold display. The image below shows a photograph taken from PixPerAn on the Samsung S27A750D (120Hz LCD) using its ‘Faster’ response time setting. Because the backlight is constantly illuminated, this is fairly representative of the pixel response behaviour at any given time when running this test.

You can see a ‘woven’ trail behind the original image, indicating the presence of mild overdrive artifacts but a decent overall pixel responsiveness for these transitions. The three images below show the sequence of events that can be captured with the S27A750D set up in exactly the same way apart from having its ‘Frame Sequential’ strobe backlight mode activated.

The first photo shows the dark phase, which is the state the monitor is in most of the time when its backlight is set to strobe. The backlight is off and no image can be seen. The second image shows the backlight during its bright phase, with the backlight very briefly pulsing to a brightness that exceeds anything possible with ‘Frame Sequential’ disabled. You can also observe that the trailing is very faint indeed and it is essentially hidden along with any overdrive artifacts.

As noted in this article and in some of our more recent reviews, though, the movement of our own eyes is a significant cause of motion blur. This isn’t something that is reflected by analysis using PixPerAn, traditional static photography or videos. But what if you capture a photo using a moving camera rather than a stationary one, creating motion blur in the image that is similar to that created by eye movement? That is something we will come onto shortly.

The numbers approach

The UFO Motion tests aren’t just good for providing visual demonstrations of some of the concepts explored in this article; they’re also useful if you want to try to quantify the differences. You can use this test to calculate a value known as MPRT (Moving Picture Response Time). Put simply, the MPRT reflects the overall level of perceived motion blur on a monitor, primarily taking eye movement into account, with lower values indicating less motion blur. The test allows you to employ a range of pixel transitions, ranging from black (grey 0%) to white (grey 100%) with 25%, 50% and 75% grey steps in between. Because MPRT is designed to reflect the ‘overall visual responsiveness’, the refresh rate and sampling behaviour of the monitor are the primary factors. Particularly slow pixel responses can increase MPRT values slightly as well; trailing may be visible in such cases that goes beyond the scope of perceived blur due to eye movement. This is why you should assess as many different pixel transitions as possible to gain a representative MPRT. Another thing you tend to find on modern monitors is that certain pixel transitions may be particularly affected by overdrive artifacts where aggressive grey to grey acceleration is used, as discussed previously. Such artifacts can certainly affect the perceived quality of motion but don’t generally have a significant effect on the MPRT and should therefore be considered separately.
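For those who like to see the arithmetic, a much-simplified model (an approximation, not the full measurement procedure) treats the MPRT of a sample and hold display as roughly one frame duration, while a strobed display’s MPRT approaches its backlight pulse length. The perceived blur width is then persistence multiplied by motion speed, a common rule of thumb. The sketch below works through this with the 960 pixels per second test speed used later in this article:

```python
# Simplified MPRT model: sample and hold persistence ~ one frame duration;
# strobed persistence ~ backlight pulse length. Blur width ~ persistence x speed.

def mprt_sample_and_hold(refresh_hz):
    return 1000.0 / refresh_hz                      # ms per frame

def blur_width_px(mprt_ms, speed_px_per_sec):
    return mprt_ms / 1000.0 * speed_px_per_sec      # approximate blur in pixels

for hz in (60, 120, 144, 240):
    mprt = mprt_sample_and_hold(hz)
    print(f"{hz}Hz sample and hold: MPRT ~{mprt:.2f} ms, "
          f"~{blur_width_px(mprt, 960):.0f} px of blur at 960 px/s")

# A strobed backlight with a ~2 ms pulse gives ~2 ms persistence regardless of refresh rate:
print(f"~2 ms strobe: ~{blur_width_px(2.0, 960):.0f} px of blur at 960 px/s")
```

This simplified model ignores slow pixel transitions and overdrive artifacts, which is exactly why the measured figures discussed below deviate slightly from it in places.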

The graph below shows the Moving Picture Response Times (MPRTs) for a range of PC monitors. All screens are set up as detailed in their respective reviews, but note that the S27A750D in frame sequential mode has its brightness set to ‘100’. The MPRTs given for each display are an average including transitions between every grey level available in the test (0, 25, 50, 75 and 100). The transitions are done both ways, for example white to black is tested as well as black to white. The monitors include a range of different panel types (TN, VA, IPS and PLS) as well as refresh rates (60Hz, 72Hz, 120Hz, 144Hz and 240Hz). Models that are regular ‘sample and hold’ are given blue bars whereas those with strobe backlights have green bars.

The graph above shows that 60Hz sample and hold displays all have MPRT values of around 16.67ms. The 120Hz Samsung has an MPRT of 8.33ms, which is half that of the 60Hz displays, whereas the VG248QE and PG278Q have MPRTs of 6.94ms at 144Hz. The XG2530, meanwhile, has an MPRT of 4.16ms at 240Hz. Some of these numbers may sound rather familiar, especially if you have carefully read the ‘Refresh rate’ section of this article. The figures mirror the delay between frames and again stress the importance of refresh rate as the predominant limiting factor in perceived fluidity on a modern sample and hold display. The displays with IPS panels (such as the Dell P2414H and AOC q2963Pm) have typical pixel response times of around 6-8ms. Nonetheless they are not outperformed by the ASUS VG248QE here when it is set to the same 60Hz refresh rate, despite the ASUS having very snappy pixel response times of typically around 2ms. Also interesting is that when the AOC q2963Pm is overclocked to 72Hz, the MPRT decreases despite the monitor making no adjustments whatsoever to its pixel response times. All of this indicates that refresh rate is the main influence on the level of motion blur for these displays and hence the MPRT, mirroring the previously explored theory.

The models with VA panels have slightly higher MPRT values than you might expect from their 60Hz refresh rates. The reason for this is that some of their pixel transitions are slow enough to create noticeable trailing that isn’t ‘hidden’ by the perceived blur from eye movement. Because the MPRTs given here are averages across all transitions, these slow transitions increase the final figure. At the other end of the spectrum we have the monitors where strobe backlights are employed, giving MPRT values of between 1.30ms and 2.33ms. These values are far lower than can be achieved at their respective refresh rates using sample and hold, indicating the significant impact that a strobe backlight has on perceived blur and MPRT. Another important point to raise here is that with LightBoost set to 10% on the VG248QE, the MPRT is significantly lower than with LightBoost set to 100%/Max (1.39ms vs. 2.33ms). A similar pattern is observed with the ULMB (Ultra Low Motion Blur) ‘Pulse Width’ setting at ‘10’ instead of ‘100’ on compatible models. That is because the strobe length (i.e. the length of time the backlight is illuminated) is decreased when LightBoost or Pulse Width is set to a lower value, which decreases motion blur but also decreases brightness. Even with LightBoost or Pulse Width set to ‘100’, though, the improvement in motion fluidity at high frame rates is significant.
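To put approximate numbers to this trade-off, the sketch below uses pulse lengths close to the MPRT figures measured above and assumes a 120Hz strobe (the strobe frequency and exact pulse lengths are assumptions on our part). A shorter pulse lowers persistence, and therefore blur, but it also lowers the proportion of each frame during which any light is emitted:

```python
# Rough illustration of the pulse width trade-off: a shorter strobe means less
# persistence (less blur) but a lower duty cycle (less light), unless peak
# brightness is raised to compensate. Pulse lengths here are assumed figures.

refresh_hz = 120
frame_ms = 1000.0 / refresh_hz

for setting, pulse_ms in (("Pulse Width / LightBoost '100'", 2.3),
                          ("Pulse Width / LightBoost '10'", 1.4)):
    duty = pulse_ms / frame_ms
    print(f"{setting}: ~{pulse_ms:.2f} ms persistence, "
          f"light emitted for ~{duty:.0%} of each frame")
```

This is why lowering the Pulse Width or LightBoost setting reduces both the MPRT and the perceived brightness of the image.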

If you’re interested in further study of MPRT there is a wide and ever-expanding range of literature available on Google Scholar. Another way of looking at this data is to consider a value known as MMCR (Measured Motion Clarity Ratio). As with MPRT this measurement gives a quantitative representation of the level of perceived blur when viewing moving images on a monitor and can be calculated for a display using the UFO Motion tests. We saw above that the MPRT value on a sample and hold display closely mirrors the delay between frames on that display, with lower values representing lower levels of perceived blur. In order for much lower MPRTs to be recorded, an impulse-type sampling method must be used. This is achieved by a strobe backlight, for example. Let’s take a look at what the MMCR values look like for the displays we tested.

You can see in the graph above that for sample and hold displays the MMCR closely mirrors the refresh rate. To reinforce some points raised earlier: at a higher refresh rate the delay between frames is shorter and each frame is ‘held’ for a shorter length of time. The eyes are given more unique positional information on the screen and spend less time moving, reducing blur from eye movement. This strongly influences the MMCR, where this time a higher value indicates a lower level of perceived blur. You may also notice, looking at the figures, that the VA models have MMCRs that are lower than the other panel types at the same refresh rate. This is because, as explored in the MPRT analysis, there are some pixel transitions that are slow enough to create additional motion blur on top of that linked to eye movement. The models with strobe backlights are again able to break free from the constraints of refresh rate and cause the eyes to move a lot less, greatly reducing blur and providing significantly higher MMCR values as a result.
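The exact formula behind the MMCR isn’t given here, so the short sketch below simply assumes it is inversely proportional to the measured MPRT (a higher value indicating less perceived blur). That assumption is ours, but under it a sample and hold display’s MMCR does land close to its refresh rate, consistent with the pattern described above:

```python
# Hedged sketch only: assumes MMCR is inversely proportional to the measured MPRT,
# so that a sample and hold display scores close to its refresh rate.

def mmcr_assumed(mprt_ms):
    return 1000.0 / mprt_ms

print(f"60Hz sample and hold  (MPRT ~16.67 ms): MMCR ~{mmcr_assumed(16.67):.0f}")
print(f"120Hz sample and hold (MPRT ~8.33 ms):  MMCR ~{mmcr_assumed(8.33):.0f}")
print(f"120Hz strobed         (MPRT ~2.0 ms):   MMCR ~{mmcr_assumed(2.0):.0f}")
```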

The pursuit photography approach

Earlier sections of this article have focussed on why we can’t just rely on pixel responsiveness or static photography to show how motion on a monitor will appear to a user. The movement of our own eyes is the most significant contributor to motion blur on typical modern (sample and hold) monitors, rather than pixel responsiveness. It is possible to give an accurate representation of what the eye sees, in terms of both motion blur and pixel responsiveness imperfections, by using a technique called ‘pursuit photography’. By moving the camera at a steady speed that matches the pace of action on the screen, it is possible to capture an image very close to what the eye sees when it observes movement on the monitor. To capture the images below we followed a similar methodology to that explained by Blur Busters. The technique has also been covered in a peer-reviewed research paper, which is an interesting read if you want solid scientific background on the technique and what exactly it shows. The images below were captured with the UFO Motion Test for ghosting running at 960 pixels per second, with the UFOs moving from left to right as always. The middle row of the test (medium cyan background) was used. This is a good practical speed for such photography and allows accurate analysis of both perceived blur and pixel response behaviour.
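As a quick sketch of the arithmetic involved, the example below estimates the physical tracking speed the camera needs and how long a multi-frame exposure lasts. The pixel pitch, refresh rate and exposure length here are hypothetical illustrative values rather than figures taken from our methodology; only the 960 pixels per second test speed comes from the test described above:

```python
# Sketch of the arithmetic behind pursuit photography. Pixel pitch, refresh rate and
# exposure length are hypothetical illustrative values; 960 px/s is the test speed above.

speed_px_per_sec = 960
pixel_pitch_mm = 0.311          # hypothetical pitch, roughly typical of a 27" Full HD panel
refresh_hz = 120
exposure_frames = 4             # camera exposure spanning several refreshes

track_speed_mm_per_sec = speed_px_per_sec * pixel_pitch_mm
exposure_ms = exposure_frames * 1000.0 / refresh_hz

print(f"Camera must pan at ~{track_speed_mm_per_sec:.0f} mm/s "
      f"to track {speed_px_per_sec} px/s on screen")
print(f"A {exposure_frames}-frame exposure at {refresh_hz}Hz lasts ~{exposure_ms:.1f} ms")
```

Keeping the pan speed steady and matched to the on-screen motion is the critical part; any mismatch introduces blur that the eye would not actually perceive.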

Four monitors are used in this analysis. The ‘reference’ monitor is the Samsung S27A750D, capable of a 120Hz refresh rate and able to use three main backlight operating modes. The very first picture shows this monitor set to 60Hz using its ‘Faster’ response time setting and ‘100’ brightness so that the backlight is DC regulated. Here the pixel responsiveness is fast enough for optimal 60Hz performance (i.e. comfortably within the 16.67ms frame duration).

Moving onto the second row, the first image there shows the S27A750D at 120Hz with response time set to ‘Fastest’ and brightness set to ‘100’. This allows the monitor to perform the pixel transitions shown in this test fast enough for optimal 120Hz performance (i.e. comfortably within the 8.33ms frame duration).

The final row is there to demonstrate the effect of further increasing the refresh rate, or introducing a strobe backlight into the equation instead. The first image shows the ViewSonic XG2530, set to 240Hz and using its default ‘Faster’ pixel overdrive setting. You can see clearer details and sharper focus of the UFO, reflecting a decrease in perceived blur from the increased refresh rate. The second image in this final row has the S27A750D set to its ‘Frame Sequential’ setting, causing the backlight to strobe at 120Hz. The final image shows the XL2730Z using its ‘High’ AMA setting and ‘Blur Reduction’ enabled with the refresh rate at 144Hz. This forces the backlight to strobe at 144Hz. In both cases you can see the main object (UFO) is significantly more distinct than on any of the other images, showing sharp detail for both the alien and its spacecraft. This reflects the massive reduction in perceived blur that accompanies the strobe backlight solutions and reinforces the sorts of MPRT and MMCR figures explored in the previous subsection. You can also see some distinct trails accompanying the UFO, but these are far fainter than the object itself and don’t cause it to lose clarity. The 120Hz example shows a bit of conventional trailing from pixel transitions not quite keeping up with the demands of the backlight strobe, whereas the 144Hz example shows some overshoot from the strong pixel overdrive solution used. All images here give a very accurate impression of how the eye actually perceives movement on a monitor at any moment in time, and we have therefore adopted this photography method in our reviews.

Conclusion

When it comes to judging a monitor’s responsiveness, manufacturers give us very little to go by. One of the key figures quoted in the specifications is ‘grey to grey response time’. In this article we’ve explored why response times are significant, but also why you must look beyond the single value specified by the manufacturer. We’ve also looked at why refresh rate is important and how response times and refresh rate intertwine to form a key part of how well a monitor will handle motion. Another layer to consider is ‘input lag’, which primarily affects how a monitor feels in response to a user’s input but, depending on what is being measured, may also affect how responsive the monitor looks. This isn’t specified by manufacturers and is a concept that confuses a lot of users. The term is tossed around all too freely without a clear understanding of what exactly ‘input lag’ is referring to – pure signal delay vs. taking pixel response time into account as well. It must be stressed that, as with all elements of responsiveness, subjectivity is extremely important. Not every user has the same level of sensitivity to input lag or general motion performance and tolerances do differ.

Another key piece of the jigsaw which users aren’t typically aware of is the role of sampling method and how the monitor’s illumination behaviour affects perceived motion blur. If the monitor is constantly displaying information (sample and hold) then the movement of our own eyes is the primary cause of motion blur in most cases. Increased refresh rate combined with increased frame rate can improve things here, but there is a more efficient method for improving the situation. If the monitor’s light source is pulsing on and off (impulse-type display) then our eyes spend less time moving and we perceive less blur. On modern LCDs strobe backlights can be used to great blur-reducing effect – and it seems that manufacturers are really starting to push this sort of technology through to PC monitors and not just TVs, where it’s used more broadly. As things move away from LCD towards OLED technology we can expect massive improvements in pixel response times and greater breathing space for higher refresh rates. But we can’t necessarily rely on the required super-high frame rates to accompany this, as detail levels and effects in games will not remain static. Because of this, sampling method will remain just as important as it is with LCDs; for optimum motion performance manufacturers will need to adopt a strobed light source.
