Frequently asked questions

 

How do we perceive three-dimensionally?

In real life, our eyes see the world from slightly different angles. The brain analyzes the disparity between the two images and produces a three-dimensional sensation. The muscular effort of eye vergence is also taken into account, as well as several monocular cues, such as perspective, occlusion of objects, their relative displacement, shadows or reflections.

In the cinema, we can see movies like Avatar in 3D because the projector alternates different images, some created for the left eye and others for the right eye. The special stereo glasses we wear prevent each eye from seeing the images intended for the other eye. The brain thus perceives two distinct images and rebuilds a three-dimensional sensation, even though the screen on which the images are projected is flat.

When using Alioscopy 3D displays, it is no longer necessary to wear special glasses. The lenticular array covering the display shows a different image depending on the angle from which it is viewed. Since the two eyes are not in the same place, each sees a different image, which enables the brain to recreate a three-dimensional sensation, all the more natural because no accessories are required.

What are the main advantages of Alioscopy 3D displays compared to glasses-based 3D screens?

Enjoying three-dimensionality without wearing glasses is the first obvious advantage of Alioscopy displays. They can be used whenever handling 3D glasses is not practical (digital signage, events, etc.). Glasses are cumbersome and cut viewers off from their environment. They are ill-suited for collaborative work and make communication between viewers more difficult.

There are other significant advantages. Alioscopy 3D displays preserve brightness, color and image contrast, whereas, regardless of the technology, 3D glasses absorb 50% to 80% of the original display brightness and alter colors.

Alioscopy 3D displays offer the additional advantage of delivering a dynamic 3D sensation. There are 8 multiplexed points of view in a single Alioscopy image, combining into 7 different stereo pairs. When moving laterally over a distance of 45.5 cm, viewers actually see a different stereo pair every 6.5 cm. They can enjoy the 3D scene from slightly different angles, as if they had moved sideways in front of a window at the same distance as the display. By contrast, viewers wearing 3D glasses who move in front of a standard 3D TV set will see the 3D scene distort. This sensation is produced by the brain: since the same volume appears unchanged despite the movement, the brain concludes that it must have distorted. The problem does not occur when watching a 2D screen, since the brain does not expect to see a flat image from different angles when shifting position.

Why are 8 points of view required, instead of the 2 used in 3D cinemas?

Two separate points of view are sufficient to sense 3D. In fact, Alioscopy displays only let each eye see one point of view out of eight at a time, but the perceived stereo pair changes when moving. The greater the number of views multiplexed in a single image, the wider the area where 3D can be enjoyed continuously. This area is known as the "sweet spot". On Alioscopy displays, the sweet spot is 45.5 cm wide (7 intervals of 6.5 cm, the average distance between our eyes).

While this quality makes the image exceptionally rich, the width of the sweet spot is unfortunately limited. When viewers reach the edge of a sweet spot, they eventually see a transition stereo pair combining images 8 and 1. This is a 6.5 cm wide transition zone separating two sweet spots, where depth appears inverted: due to the modulo effect, the left eye sees image 8 while the right eye sees image 1. Obviously, 3D cannot be properly enjoyed when standing in a transition zone, but all it takes is to move slightly to the left to find stereo pair [7-8] or to the right to find stereo pair [1-2].

If autostereoscopic displays were designed to show only two points of view, every second position would be a transition zone and the odds of being ill-positioned would be 1 to 1. With 8 points of view, the odds of standing in a sweet spot are 7 to 1, making Alioscopy 3D displays natural and intuitive to use. Moving a couple of centimeters to the side is enough to step out of a transition zone, as the sketch below illustrates.
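The view geometry described above can be approximated in a few lines of code. The following is a minimal sketch, assuming an idealized flat layout with a 6.5 cm view pitch; it only illustrates the modulo effect and is not a model of the actual optics.

```python
EYE_SPACING_CM = 6.5   # average interocular distance, also the assumed view pitch
NUM_VIEWS = 8

def view_index(lateral_offset_cm):
    """View number (1..8) visible from a given lateral position."""
    return int(lateral_offset_cm // EYE_SPACING_CM) % NUM_VIEWS + 1

def stereo_pair(left_eye_offset_cm):
    """Views seen by the left and right eyes; (8, 1) marks a transition zone."""
    left = view_index(left_eye_offset_cm)
    right = view_index(left_eye_offset_cm + EYE_SPACING_CM)
    return left, right

# Sweep one sweet spot (7 intervals of 6.5 cm = 45.5 cm) plus the transition zone.
for k in range(9):
    pos = k * EYE_SPACING_CM
    print(f"{pos:5.1f} cm -> stereo pair {stereo_pair(pos)}")
```

Running the sweep shows stereo pairs [1-2] through [7-8] across the 45.5 cm sweet spot, then the inverted pair [8-1] in the 6.5 cm transition zone before the pattern repeats.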

Under such conditions, one might well ask: why just 8 points of view and not more? This choice is a trade-off between the perceived resolution of each point of view and the nominal resolution of the display available to encode all this information. Alioscopy 3D displays take full advantage of visual physiology in order to deliver a high-definition 3D sensation. Displaying more than 8 viewpoints on an HD display would impair the sensation of sharpness perceived in the image. When higher-definition displays become more widely available, Alioscopy will produce 3D displays with 16 points of view or else increase the resolution available for each point of view.

How is content created for Alioscopy 3D displays?

Alioscopy 3D displays require that 8 slightly offset views of a scene be mixed into a single image to be displayed in 3D. The original images can be computer generated, videos or photographs, as well as real-time 3D, provided that either 8 points of view or a depth map can be output.

The 8 required views can be generated quite simply in computer graphics. Alioscopy provides its customers and partners with scripts for the main 3D animation packages on the market. These additional modules generate 8 cameras in a 3D scene with all the required settings (focal length, stereo base and distance to the display plane). Computer artists must be familiar with the specific 3D grammar and constraints involved, but these tools considerably ease their work. A simplified illustration of such a camera layout follows.
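As a rough illustration of what such a script computes, here is a minimal Python sketch of how 8 camera positions could be laid out; the parameter names and values are hypothetical and do not reflect Alioscopy's actual modules.

```python
NUM_VIEWS = 8

def camera_offsets(stereo_base, num_views=NUM_VIEWS):
    """Lateral offsets of the cameras, centered on the scene axis.

    stereo_base: distance between two adjacent cameras, in scene units.
    """
    center = (num_views - 1) / 2.0
    return [(i - center) * stereo_base for i in range(num_views)]

# Example: 8 cameras spaced 0.65 scene units apart; in a real setup each camera
# would use an off-axis frustum converging on the chosen display plane.
print(camera_offsets(stereo_base=0.65))
# offsets run symmetrically from -2.275 to +2.275 in steps of 0.65
```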

Shooting 8-view video films is still tedious. It requires 8 cameras, perfectly aligned and synchronised. Alioscopy can operate with prototype camera rigs; please contact us if you have specific requirements. Since there is high demand for live action, 8-camera rigs should appear on the market in the future.

Photographs are easier to produce, especially when shooting still subjects rather than living ones. The camera is simply shifted sideways by a given step, chosen according to the distance to the subject, and 8 photographs are taken. After postproduction, these photographs can be mixed into an Alioscopy image or even used to create an animated sequence by moving the camera within the photograph.

Real-time 3D is an endless source of content: medical or scientific imaging, gaming, CAD, design, prototyping, oil, geological or mining prospecting, training, military applications, security scanners, etc. Most applications can be adapted to generate 8 images on the fly, mixed into a single Alioscopy image to be displayed in 3D.

Can Alioscopy 3D displays show flat 2D content?

Yes, 2D content can be displayed on an Alioscopy 3D display and it will look the same as on any ordinary screen.

Can flat 2D content be converted to 3D?

Intensive research on 2D-to-3D conversion is being conducted throughout the world. Manufacturers of glasses-based 3D TVs have built automatic conversion into their hardware, but the results are uneven. The more elaborate solutions used by Hollywood studios to "dimensionalize" 2D films remain costly.

There are special circumstances in which 2D content can be easily converted to the Alioscopy format. This applies to rotating objects and some aerial or travelling shots, which may give spectacular results.

Is it possible to view stereoscopic 3D films on Alioscopy 3D displays?

Converting 2-view stereoscopic 3D to the Alioscopy format is a more mature process. The number of contractors developing skills in this field is growing and the results are becoming promising. Alioscopy displays will become eligible for the consumer market when such conversion is more widespread.

Are there autostereoscopic technologies other than the one used by Alioscopy?

Two technologies are mainly represented on the market: parallax barrier and lenticular. Alioscopy 3D displays are equipped with lenticular lenses, manufactured in France by the company.

A parallax barrier is a grid of alternating transparent and opaque zones masking part of the screen. Through these regular thin slits, the left and right eyes see two slightly different parts of the screen. When these coincide with stereoscopically compliant information, a 3D sensation is produced. Parallax barriers darken the screen in proportion to the number of hidden points of view, so increasing the number of views puts this technology at a disadvantage.

By contrast, lenticular lenses deliver the full brightness of the screen, regardless of the number of points of view. The higher the LCD panel resolution, the better lenticular displays perform, since they can increase the number of points of view without darkening the image. As an example, Alioscopy 3D panels are printed at a resolution of 2400 dpi, multiplexing 60 different points of view covered by 0.6 mm lenses. This shows how much 3D displays can improve in the future.

Can any LCD screen be transformed into an Alioscopy display?

No, because the lenticular array is manufactured according to the physical characteristics of the screen: the number and shape of pixels, the specific sub-pixel layout, pixel spacing, video signal (RGB or YUV), electronics, etc. Furthermore, the optical lens must be bonded to the screen with extreme precision. It is therefore impossible to use any given display, and those used by Alioscopy were chosen very specifically to comply with a number of specifications.

What are the differences between holography and Alioscopy?

Holography is a volume imaging technology using interference produced by a laser source. Several systems exist, from single-laser setups to more complex devices intended to create color images. Cost and operating constraints are very limiting.

By contrast, Alioscopy 3D displays are very simple to use. They connect to a computer via DVI, like any monitor. Content creation is far more affordable and the conditions of use are more flexible.

Why aren't glasses-free 3D displays more widespread today?

Autostereoscopic displays require specific content, more elaborate than what is projected in 3D cinemas: 8 points of view are required instead of 2, in order to offer spectators a large sweet spot area. These displays are therefore mainly used where content can be created specifically for them. This applies especially to digital signage, event communication, museums, and medical and professional imaging.

How many lenses are there on Alioscopy 3D displays?

Alioscopy 3D displays are covered with 720 cylindrical micro-lenses aligned at a slant. The resolution of a Full HD screen is 1920 x 1080 pixels. Every pixel consists of 3 sub-pixels (red, green, blue), so every line accounts for 1920 x 3 = 5760 sub-pixels. Every lens covers 8 sub-pixels. Consequently, there are 5760 / 8 = 720 micro-lenses.
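For reference, the arithmetic from this answer can be written out in a few lines of Python:

```python
width_px = 1920                  # Full HD line width in pixels
subpixels_per_pixel = 3          # R, G, B
subpixels_per_lens = 8           # one sub-pixel per point of view under each lens

subpixels_per_line = width_px * subpixels_per_pixel    # 1920 * 3 = 5760
lens_count = subpixels_per_line // subpixels_per_lens  # 5760 / 8 = 720
print(subpixels_per_line, lens_count)                  # 5760 720
```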

Alioscopy images mix 8 points of view. Does that mean each point of view's individual resolution is equal to one eighth of the display's resolution?

No, this is not the case, and most viewers generally find that the image quality perceived on Alioscopy 3D displays is comparable to that of flat HD. There are 720 cylindrical micro-lenses on Alioscopy 3D displays and every lens overlaps 8 points of view. The actual image resolution perceived by each eye on screen is therefore 720 x 1080. However, not only do the two eyes see a different image at this resolution, they also see a different color component for every pixel. This complementarity results from Alioscopy's patented 8-image mixing algorithm. The actual resolution of the stereoscopic image perceived by the brain is equal to the monocular resolution multiplied by the number of planes identified in depth in the 3D image. This explains why the resulting sensation of quality is that of HD rather than that of a low-definition screen.

It should be mentioned that any discrete information thinner than 3 pixels, for instance small characters, will be difficult to read on screen, because the lenses will remove part of the information.

The resolution perceived by each eye is 720 x 1080 sub-pixels, not 720 x 1080 square pixels. Why, then, is the resulting image quality so good?

In order to answer this question, a few words must be said about the physiology of vision. The human eye contains three kinds of cones, each with a different sensitivity to the various wavelengths within the visible light spectrum. Through an additive color reproduction process, the eye combines the red, green and blue light emitted from separate colored sources to produce all other colors. This makes it possible to reproduce millions of colors on screen using only the three RGB primaries.

Each RGB sub-pixel contributes to the luminance (Y) and chrominance of each pixel according to the following equation: Y = 0.30R + 0.59G + 0.11B, where R, G and B are the levels of the three primary colors. Image compression algorithms as well as terrestrial broadcasting rely on this relationship to reduce bandwidth.
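As a small illustration, the luma relation above can be computed directly; the helper below is just a sketch using the rounded coefficients quoted in this answer.

```python
def luminance(r, g, b):
    """Approximate luma Y of a pixel, with r, g, b expressed on the same scale."""
    return 0.30 * r + 0.59 * g + 0.11 * b

print(luminance(255, 255, 255))  # white: essentially full luminance (about 255)
print(luminance(0, 255, 0))      # pure green: about 150, the largest contribution
print(luminance(0, 0, 255))      # pure blue: about 28, the smallest contribution
```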

Variations in brightness enhance the sensation of sharpness and the perception of detail within an image. Every color component contributes to the total luminance and as such conveys part of the detail in the full image. In terms of luminance, the 720 x 1080 sub-pixel resolution perceived by each eye preserves the sharpness of detail of the original 720 x 1080 square pixels, thanks to the image mixing process: the sub-pixels seen by one eye belong to a series of neighboring pixels, and the resulting sub-sampling follows the circular permutation of color components imposed by the slant of the lenses covering the screen. If source images contain details or motifs as small as one pixel, the mixing process will not be able to restore that level of detail; this is the case with very small fonts, for instance. A low-pass filter then becomes necessary to regain readability, but the trade-off is a loss in sharpness.
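To make the idea of slanted sub-pixel interleaving more concrete, here is a generic, simplified sketch. The assignment rule below is a common textbook formulation chosen purely for illustration; it is not Alioscopy's patented mixing algorithm.

```python
NUM_VIEWS = 8

def view_for_subpixel(x, y, c, slant=1):
    """Which of the 8 views supplies sub-pixel (x, y, c), with c = 0, 1, 2 for R, G, B.

    A hypothetical slanted assignment: moving one sub-pixel to the right, or one
    row down, advances to the next view, wrapping around modulo 8.
    """
    return (3 * x + c + slant * y) % NUM_VIEWS

# For one pixel column, show how the color components of a single pixel are
# drawn from a rotating set of views as the rows follow the slant.
for y in range(4):
    views = [view_for_subpixel(10, y, c) for c in range(3)]
    print(f"row {y}: pixel x=10 takes (R, G, B) from views {views}")
```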

Why is the 3D sensation milder on a Short View display?

Sensing depth when watching 3D on a screen or a photograph relies on the combination of two equally important factors:

  • The perception of depth results from the disparity within the stereoscopic pair seen by the eyes. This disparity is the difference between the left and right images, simulating what each eye would see in real life. The two eyes are separated on average by roughly 6.5 cm (2.56"), so they perceive the same scene with a parallax offset, in other words a difference in viewpoint, which leads to binocular disparity.
  • The perceived disparity is processed by the brain, which performs binocular fusion in order to produce a single sensation. This processing is variable and depends on viewing conditions, including proprioceptive perception.

Identical 3D images will produce different sensations when viewed at different distances. Thus, when showing the same content with the same disparity, Long View displays will produce a greater 3D sensation (both in depth and pop-out) than Short View displays. Indeed, one expects to perceive greater disparity from close by, as one would in real life.

Alioscopy displays are calibrated for an ideal viewing distance, and they also have a minimum and a maximum viewing distance. Depending on the purpose, one must choose the appropriate display and adjust the content creation settings accordingly. To ensure a natural 3D sensation on a display viewed from half the distance, it is recommended to double the distance separating the cameras (the stereoscopic base), in order to meet psychophysiological expectations, as sketched below.
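The rule of thumb in the last sentence can be expressed as a small helper; the reference values used in the example are purely illustrative.

```python
def adjusted_stereo_base(reference_base, reference_distance, viewing_distance):
    """Scale the camera separation inversely with the intended viewing distance."""
    return reference_base * (reference_distance / viewing_distance)

# Viewing from half the reference distance calls for twice the stereo base.
print(adjusted_stereo_base(reference_base=6.5, reference_distance=4.0, viewing_distance=2.0))
# -> 13.0
```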
