Understanding Stereoscopy and the Cardboard Effect
Examination of influence quantities on the pictorial design of stereoscopic productions – an analysis of the impact of various parameters on the depth effect, based on stereophotography.
In collaboration with my project partner Stephan Sobiesinsky.
The whole, very elaborate Bachelor thesis (in German) can be purchased via Stephan or me. Please contact either of us for further information.
Photography is not limited to reproducing lines and surfaces. It has found its complement in the stereoscope, which gives to a design the most irresistible appearance of relief and roundness, insomuch, that nature is no longer content to reproduce her superficially, she gives, in addition, the complete idea of projections and contours; she is not merely a painter but a sculptor.(M. A. Belloc, 1858)
Stereo productions aim to make photographed (or recorded) objects appear three-dimensional to the viewer and create a sense of depth. Impressive effects can be accomplished nowadays, and it seems that modern stereoscopy has reached a level at which it imitates human vision almost perfectly.
The first effect of looking at a good photograph through the stereoscope is a surprise such as no painting ever produced. The mind feels its way into the very depths of the picture.(Oliver Wendell Holmes, 1864)
Human Perception | Depth Cues and 3D Vision
Criteria which provide information about the depth separation of objects are split into monocular and binocular depth cues – that is, into depth perception with only one eye or with both eyes.
Monocular Depth Cues
Various depth cues giving evidence about the position of objects in 3D space work monocularly. Information about the layout of a room can be extracted even though the image appears to be ‘flat’. Those cues can be divided into different categories:
- Masking of Objects
- Size of Objects
- Perspective Distortion
- Relative Height in Visual Field
- Atmospheric Perspective
- Color Perspective
- Depth of Field
- Light and Shade
Binocular Depth Cues
Monocular depth cues enable us to place objects in space. Nevertheless, the full spatial impression is only gained via binocular vision, which is the basis of stereoscopy.
Parallax and Retinal Disparity
The mind perceives an object of three dimensions by means of the two dissimilar pictures projected by it on the two retinae.(Charles Wheatstone, 1838)
Due to the specific positioning of the human eyes, two slightly different images reach the brain via the optic nerves, from which a three-dimensional image is formed.
A person fixates point A. Due to the convergence of the eyes, this point is projected onto the area of highest visual acuity of the retina. Point B lies at a greater distance and, from the right eye's view, directly behind point A. It is therefore projected onto the same area as point A; in the left eye, point B's projection onto the retina lies to the right of point A's. The left eye thus perceives point B in a different, non-corresponding area. The distance point B seems to travel between B(left) and B(right) is defined as the 'parallax', the angle as the 'parallactic angle'. 'Retinal disparity' is the difference between the distances of two corresponding images on the retinas.
The retinal disparity depends on the distance to the viewer's focus area. When a person fixates an object, it is projected onto corresponding areas of the retinas. All points which are projected onto corresponding areas at the same time lie on a theoretical 'horopter' (a circle lying in a horizontal plane; the difference between the distances of two corresponding points on the retinas = 0).
The horopter is defined by the points which are projected onto the retina under the same angle. Fixating a different point under a different angle creates a new circle. However, the actual horopter is much less curved.
The area around the horopter is called 'Panum's fusional area'. Within this area our brain can fuse disparate retinal information and generate a single image. Outside Panum's fusional area the image fragments into two separate single images. Nevertheless, due to the viewer's focus on one point and the suppression of the fragmented image of the recessive eye, those double images are perceived as very blurred or go totally unnoticed.
Vergences and Accommodation (eye)
Accommodation (adjustment of the lens curvature) is effective in a range up to roughly seven meters. When a person focuses on an object which is several meters away, rays hit the flattened lens almost in parallel. If the object moves closer to the viewer, the lens becomes more convex and refracts the light more strongly in order to produce a sharp image.
Accommodation is accompanied by convergence of the eyes. Convergence depends on the distance between viewer and object and is most effective at short distances: the eyes have to converge strongly for close objects, while they are almost parallel for objects at a greater distance.
A different vergence movement is 'divergence', which is only possible to a slight extent. Instead of moving inwards (convergence), the eyes move outwards beyond the parallel position.
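The relationship between viewing distance and convergence can be sketched with a little geometry. The following Python snippet is a simplified model (an assumption for illustration: symmetric fixation straight ahead and the average interocular distance of 63 mm) and computes the convergence angle for a few distances:

```python
import math

def convergence_angle_deg(distance_m, interocular_m=0.063):
    # Angle between the two lines of sight when both eyes fixate a
    # point straight ahead at the given distance (symmetric geometry).
    return math.degrees(2 * math.atan(interocular_m / (2 * distance_m)))

# The closer the fixation point, the stronger the eyes must converge:
for d in (0.25, 1.0, 10.0, 140.0):
    print(f"{d:6.2f} m -> {convergence_angle_deg(d):7.4f} deg")
```

At 140 m the angle has shrunk to a few hundredths of a degree, which is one way to see why binocular cues lose their effectiveness at such distances.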
Monocular masking is one of the most important depth cues. In binocular masking, two images containing monocular masking cues are compared: due to the parallactic shift (each eye perceiving a slightly different image), one eye (e.g. the right) gathers information about overlapping objects while the other eye might receive more information about the rear object. This helps our brain to calculate the position of both objects relative to one another, especially at short distances.
Effectiveness of Depth Cues
Because the differences between the images our two eyes perceive decrease rapidly with distance:
Stereoscopic accuracy decreases with distance, to a point where it’s useless, somewhere around the 150-yard mark(Mendiburu, 2009)
This is equivalent to approximately 140 meters. Monocular depth cues are therefore, and because they are so multifarious, generally more important and effective in daily life.
There are various methods to capture and playback stereo productions.
Recording procedures are usually divided into 'monocular' and 'binocular recording', playback options into 'unaided stereo vision', 'passive' and 'active systems', and 'auto-stereoscopy'.
The following paragraphs address all of them briefly.
The binocular method uses two lenses (either in one or two bodies) and captures the images simultaneously.
Two identical cameras are rigged next to each other, e.g. on a stereo slide (the distance between the two camera positions defines the 'stereo basis'). The cameras can be triggered either one after another (for static scenes) or synchronised by using a controller or special cables.
A big disadvantage of this method is that the stereo basis cannot be chosen freely (at some point no further reduction is possible).
A stereo camera has two lenses in one body. Exposure and focus settings are automatically linked and both images are taken simultaneously.
Disadvantage: the stereo basis is usually not alterable.
The monocular recording technique uses only one camera.
With successive recording, a single camera is moved after taking the first picture in order to record the second image. The big advantage is that any camera and any stereo basis can be used. Nevertheless, only still lifes (static scenes) can be captured. The use of a stereo slide is recommended.
The stereo lens (a prime lens) contains multiple mirrors or two lenses and captures both images at the same time. The advantage of this technique is that moving objects can be captured and the lens is relatively affordable. Nevertheless, the stereo basis cannot be changed, the images will be in landscape format, and the aperture settings are very restricted.
Unaided Stereo Vision
No tools are needed to view stereo productions with this method. In order to perceive a three-dimensional image you will need to practise, as convergence and accommodation of the eyes have to be separated (for parallel and cross-eyed viewing).
Two stereo images need to be placed next to each other. For this version of unaided stereo vision the viewer needs to fixate a point at a great distance in order to reach a convergence angle of 0 (parallel view) – the left eye fixates the left image, the right eye the right one. Subsequently, the viewer needs to try to overlap both blurry images by focussing onto the image plane without changing the convergence angle (separation of convergence and accommodation). This kind of stereo vision requires some practice, and not everyone will succeed.
Cross-eyed viewing is very similar to the parallel viewing technique. However, the images for the right and left eye are interchanged (left image: right eye; right image: left eye) – the convergence point lies in front of the images. In order to see a three-dimensional picture, the eyes have to focus onto the image plane without changing the convergence angle.
In the anaglyph viewing method, the images for the right and the left eye are tinted in two complementary colours (most common combination: red-cyan) and overlapped. In order to perceive a three-dimensional impression, a pair of glasses with the same colour combination is used to filter and separate the combined image into the one for the left and the one for the right eye. The red image information passes through the red part of the glasses, while the cyan elements are blocked and appear black through the red glass.
We differentiate between: real anaglyphs, grey anaglyphs and coloured anaglyphs.
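As a sketch of the principle, the channel split for the red-cyan combination can be expressed in a few lines of NumPy. This is the simplest coloured-anaglyph variant; real and grey anaglyphs weight the colour channels differently:

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    # Simplest coloured anaglyph: the red channel is taken from the
    # left-eye image, green and blue from the right-eye image.
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red        <- left eye
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green/blue <- right eye
    return anaglyph
```

The red filter of the glasses then passes the red (left-eye) information and blocks the cyan (right-eye) information, and vice versa.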
The polarisation technique uses different polarisation properties of light to separate the left from the right image. The images are projected using two projectors, each polarising its image in a different way. The left part of the corresponding glasses lets only the left image pass through, the right part only the right image.
We differentiate between: linear and circular polarisation
Similar to the unaided viewing procedures (parallel/cross eyed viewing) the images will be displayed next to each other or one above the other. In order to view the 3D image special glasses will be used: prisms help to visually overlap the two images and direct the left image to the left and the right image to the right eye.
The interference filter technology uses slightly shifted wavelengths of red, green and blue for the right and left eye. Specific glasses (which are rather expensive to produce) filter the corresponding wavelengths for the left and right eye and therefore produce a 3D image.
This technique uses special 'active' glasses which present the image meant for the left eye by blackening the right eye's view, and the right-eye image by blackening the left part of the glasses. This alternation is repeated extremely rapidly (usually >100 Hz) and goes unnoticed by the viewer's eyes. The result is the illusion of a single 3D image.
Basics and parameters of stereophotography
Stereoscopy aims for a representation of the real world which is as realistic and three-dimensional as possible. It is thereby much more than capturing two slightly different images and overlapping them in postproduction. There are a few characteristics and constraints to bear in mind in order to create a convincing image. While a shallow depth of field seems to create more depth in 2D photography, it can be perceived as intrusive in stereo productions. Close-ups can be troublesome due to objects being cut at the edge of the screen (cf. stereo window).
Convergence and Accommodation (eye) in Stereo Productions
As mentioned before, convergence and accommodation of the eyes need to be separated in order to see two overlapping images as a three-dimensional impression. The segregation of ocular movement and curvature of the eye lens is unnatural for humans and usually does not happen in the real world. It is however crucial for viewing stereoscopic images – in contrast to the real world, the actual distance to an object (the distance to the screen/image) does not (always) match the distance the viewer perceives. Objects can be perceived as 'coming out' of the image or 'lying behind' it, while the image information for all objects lies on the same image plane.
When stereo images are projected, they overlap – the bigger the discrepancy between corresponding points of the left and right image (called 'screen parallax' or 'pixel parallax'), the further the 3D impression appears from the actual image plane. If there is no pixel parallax between two points, the object lies on, and is perceived on, the image/screen plane.
In summary: the viewer's eyes accommodate ('focus') onto the screen plane while they converge onto a point either in front of, on, or behind the image plane.
According to Lipton, there are four different states of convergence:
1. Crossed screen parallax: objects in front of the screen; points for the right eye are displayed to the left of the corresponding points for the left eye
2. Zero/positive screen parallax: objects on the screen plane (or behind it); points for the left and right eye have no screen parallax, or points for the right eye are displayed to the right of the corresponding points for the left eye
3. Infinity parallax: objects far behind the screen; if the eyes fixate a point at infinity behind the screen, their lines of sight are parallel
4. Divergent parallax: objects behind the screen causing divergence; the discrepancy between left and right image is too big to be fused into a 3D image, the eyes diverge outwards, causing exhaustion
Fusion range and image decay
As mentioned before, within Panum's fusional area humans are able to fuse disparate retinal information in order to generate a single image. Objects located too far from the fixation point and outside Panum's area would therefore create the impression of two completely separate images (image decay). In real life those double images are suppressed – the information the recessive eye perceives is restrained and the objects appear blurry.
This suppression mechanism does not really work for stereo productions: accommodation and convergence are separated while viewing stereo images, and even huge translations/offsets will appear crisp and clear. This causes fusion problems for the viewer – the stereo image fragments and disorientation occurs.
In order to prevent image decay, it is important to limit the offset to a maximum suitable value.
The so-called '(screen) deviation' is defined as the difference between the biggest and smallest parallax offset on screen. It depends on the maximum parallax angle α (the difference between α_near and α_distance), the screen width and the ratio of screen width to viewer-to-screen distance (cf. fig. right).
Rearranging these dependencies yields a simple rule of thumb: the ratio of acceptable screen deviation to screen width is 1/30.
The 1/30 rule is valid until the screen parallax exceeds a value of 63mm – which is the average distance between human eyes. Any value bigger than that would cause painful divergence of the eyes and is unadvisable.
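The two constraints above – the 1/30 ratio and the 63 mm divergence limit – combine into a simple helper. This is a sketch directly following the rules of thumb stated in this section:

```python
def max_screen_deviation_mm(screen_width_mm, interocular_mm=63.0):
    # 1/30 rule, capped at the average interocular distance:
    # larger deviations would force the eyes to diverge painfully.
    return min(screen_width_mm / 30.0, interocular_mm)

# The cap starts to matter once screen_width / 30 exceeds 63 mm:
print(max_screen_deviation_mm(900))    # small monitor: 30.0 mm
print(max_screen_deviation_mm(10000))  # cinema screen: capped at 63.0 mm
```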
In stereo productions the screen is not only the projection plane but also the boundary of the virtual 3D world. For objects seen on or behind the image plane, the borders of the screen restrict the view like a window, which seems very natural to the viewer. It is much different, however, for the area of negative parallax: objects appearing to be in front of the screen can be cut off by the stereo window, which lies at a greater distance. This creates a conflict for the viewer's brain: the masking of objects indicates a position behind the stereo window, but the negative parallax and the three-dimensional illusion imply that the object is positioned in front of it.
It is advisable to properly frame objects which should appear in front of the stereo window while shooting, i.e. such objects should not touch the picture margin. Nevertheless, it is possible to fix objects violating the stereo window in postproduction. By masking out the redundant image information (cf. fig. left: masking of the sphere in the left image in order to create the same clipping as in the right image), a 'floating stereo window' is created; the object is now coplanar with this virtual screen.
Other problems can occur e.g. when an object in front of the screen is cut at the top and/or bottom – while it generally seems to float in front of the screen, the cut part of the object is forced onto the projection plane at a greater distance (since it seems to lie behind the window). The image will appear warped to the viewer's eye.
Stereoscopy is the attempt to reproduce the impression of the 'real world' – yet it follows its own rules, has its limits and has certain characteristics which do not exist in natural vision. By their nature, stereo productions require two offset images, which can contain brightness, contrast, colour or size mismatches that lead to disturbing effects while viewing. These differences can be adjusted in postproduction, whereas the so-called 'cardboard effect' can be found in many productions and is hard to fix.
The cardboard effect makes objects in a scene look like cardboard standups in a three-dimensional space – they lack plasticity.
The cardboard effect makes 3-D images look layered, i.e., consisting of flat objects and flat background though an observer can grasp the situation before and behind the target.Okano, Okui and Yamanoue (2006)
Examples for the Cardboard Effect
Examples for films where the cardboard effect occurs (occasionally) include all kinds of famous films, such as:
- Avatar (2009)
- StreetDance 3D (2010)
- Pirates of the Caribbean: On Stranger Tides (2011)
- Priest (2011)
The examples above include films shot entirely in stereo, films containing CG as well as real footage, and films which were shot traditionally (in 2D) and converted afterwards. It can be assumed that various parameters can generate a cardboard-like effect.
Even if the described effect is probably less disturbing than ghosting or image decay, it does diminish the spatial impression. The stereo image thereby forfeits, to a certain extent, its fascination and impact: the illusion of a realistic spatial impression.
The so-called 'stereoscopic plasticity' (SP) is a value describing the ratio of virtually perceived depth to the actual depth of an object. If SP = 1, the object is presented in its proper proportions. SP > 1 makes objects appear stretched; the cardboard effect occurs when the stereoscopic plasticity approaches zero (SP → 0).
The SP depends on different factors which can be grouped into 'recording', 'postproduction' and 'playback' and which will be discussed later on.
The factors include the specific stereo basis (recording) and eye distance, the distance between camera and filmed object and between viewer and screen during playback, and the focal length during recording.
Note: I will not go into greater detail or the mathematical principles in this article; I refer to our detailed Bachelor thesis (in German), which can be purchased through Stephan or me.
Influencing factors during recording
Stereophotography tries to echo natural vision, where the recording cameras are the equivalent of the human eyes. The distance between the lenses of those cameras is called the 'stereo basis', and the obvious choice is a distance equal to the average interocular distance (63mm). In some settings/situations, however, bigger or smaller values are used, which in turn influences the stereoscopic plasticity (cf. fig. right) – the bigger the stereo basis, the bigger the visual angle under which the depth of an object is perceived. Furthermore, it can be observed that the visual angle gets smaller the further away an object is – objects lying at a bigger distance will appear less three-dimensional (cf. chapter ❯ Human Perception – Effectiveness of Depth Cues).
An extreme increase of the stereo basis can be useful to capture plasticity e.g. for faraway scenery (to prevent image decay, no objects should be in the foreground). The ratio of stereo basis to interocular distance then exceeds 1 by far (hyper stereoscopy) and the so-called miniaturisation effect occurs: objects appear like miniatures because the human interocular distance stays fixed at 63mm – due to their plasticity, the viewer gets the impression that the objects must have been close to the camera (and therefore quite small) during recording.
- The bigger the stereo basis the more three-dimensional an object appears.
The most important factors for understanding how this parameter affects the stereoscopic plasticity (SP) are the 'near-point distance' (NP; distance between camera and nearest point), the 'camera-to-subject distance' (CS; recording distance to objects) and the 'stereo window distance' (SW; distance between camera and the plane which is used as the projection plane during playback).
In the general case, SP = 1 if NP = CS = SW, while SW depends on stereo basis, focal length and deviation.
Important statements are:
- The further an object is positioned behind the NP and therefore the SW, the bigger the CS: it appears less three-dimensional (SP < 1).
- The smaller the stereo basis with unaltered distance, the smaller the SW and therefore the SP (SP < 1). In order to prevent a reduction of the plasticity the NP would have to be adjusted to the SW.
- The bigger the NP while stereo basis and SW stay the same, the smaller the SP (SP < 1). In order to prevent the flattening of objects the SW would need to be adjusted by increasing the stereo basis.
In traditional (2D) photography, different focal lengths are used to capture different image sections. Wide-angle lenses make it possible to capture a vast angle of view (ideal for landscape photography) while tele lenses narrow the view but zoom into the object of interest (often used for capturing animals). Lenses with a focal length of 50mm are defined as normal lenses (this only applies to cameras with full-frame sensors), as the angle of view roughly corresponds to human perception. A focal length of > 50mm is defined as a tele lens, < 50mm as a wide-angle lens.
With regard to the figure above (top row), an object at a great distance which is captured with a tele lens appears bigger but not more three-dimensional (wide-angle to tele lens from left to right).
The second set (bottom) shows the stone ball at a comparable size, captured with different focal lengths. In order to achieve this, the distance to the object had to be altered according to the focal length.
- The bigger the focal length and therefore the distance to the object, the flatter it appears (flat/tele lens to three-dimensional/wide-angle lens from left to right).
Long lenses make poor 3D; short lenses makes great 3D.Mendiburu (2009)
Tele lenses compress space (little plasticity) while wide-angle lenses seem to stretch it (high plasticity). In order to bring the stereoscopic plasticity (SP) closer to SP = 1, which meets the natural vision, various parameters can be altered. It is however quite difficult to find a satisfactory method to adjust the cardboard effect while using a tele lens.
A shallow depth of field (only a small area is displayed fully in focus) is a stylistic element in film and photography. The photographer can direct the view of the audience to certain objects of interest and separate objects from the background. While those rules also apply to stereoscopy, it has to be mentioned that a shallow depth of field defeats the purpose of three-dimensionality. It would be much more authentic if the viewer had the chance to properly 'look around' in a completely crisp stereo image. Furthermore, due to a shallow depth of field the 3D scene appears layered and elements look like cut-outs.
- The blurrier an image, the more extreme the cardboard effect.
Lighting plays an important role in both traditional and stereo photography. It should be used in a way which promotes the depth impression. The image above shows different lighting examples, ranging from a CG character casting virtually no shadow (appears very flat) to lateral light casting a soft shadow (appears most three-dimensional).
Similar to traditional photography, a general depth impression is already ensured by positioning objects in various depths (monocular depth cues) and dividing a scene into fore-, mid- and background. Distinct lines of sight (objects positioned accordingly) let the scene appear much more spacious than objects arranged in a parallel manner (cf. fig. right: missing of monocular depth cues, only parallel lines → images appears flat, even in ‘2D’).
Influencing factors during post production
Horizontal image translation
Even after recording stereo images you can change the stereoscopic plasticity by applying the so called ‘horizontal image translation’ (HIT) method in post-production:
The probably most common setup for HIT is the parallel camera arrangement (the cameras are not swivelled in onto a common point). Recording two images with this technique means that the images can be translated relative to each other without causing major problems. Images displayed without applying HIT would create big parallax offsets for foreground elements, while objects in the far background would overlap completely, as the stereoscopic accuracy decreases with distance to virtually zero. This means that the whole scenery would be displayed in the area of negative parallax: in front of (and, for distant objects, on) the screen, which automatically creates a violation of the stereo window (cf. fig. below: a) – image display without HIT). In figure b) the foremost area of the object has been aligned – the scene appears behind the stereo window but loses image information at the sides.
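A crop-based HIT for parallel-shot images can be sketched in a few lines of NumPy. This is purely illustrative; the sign convention (a positive shift pushes the scene behind the screen) is an assumption:

```python
import numpy as np

def horizontal_image_translation(left, right, shift_px):
    # Crop `shift_px` columns from the left edge of the left image and
    # from the right edge of the right image: every scene point gains
    # `shift_px` of positive parallax, pushing it behind the screen.
    # Both views lose image information at opposite sides.
    if shift_px <= 0:
        return left, right
    return left[:, shift_px:], right[:, :-shift_px]
```

For example, a point at the same column in both views before HIT ends up `shift_px` columns further right in the right view than in the left one afterwards.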
If the previously computed distance between camera and stereo window is altered in postproduction (using HIT), the result is a changed size perception. Objects pushed into a greater distance will appear bigger to the audience (the size of the displayed object on screen is unchanged while it seems further away); objects dragged to the front will appear smaller.
The 3D sizing effect can be used with a storytelling purpose. Say the heroes embark on a boat trip and get caught in a hurricane. You will start the sequence with a massive boat, far behind the screen. When the weather turns bad, bring the boat forward, it will shrink to the size of a train wagon. At the screen plane, it is the size of a van, much weaker in the moving waters. By the end of the sequence, bring it further in front of the screen, where it looks like a small toy, and all its might has vanished.Tim Sassoon in Mendiburu (2009)
As mentioned before, for the general case the rule NP = SW (near-point distance = stereo window distance) applies. This means that the nearest point of a stereo image is positioned in a plane with the stereo window and the most remote point virtually at infinity. If the whole scene is left in front of the window (cf. text above) or purposely moved there, the spatial extent will be compressed into a small range (corresponding to half the distance between viewer and screen). Objects will appear flat – the cardboard effect occurs.
Apart from the parallel camera arrangement it is worth mentioning the converging camera arrangement (cameras swivelled in onto a common point) and sensor shifting (combines the advantages of the parallel and converging camera arrangements; mostly used in CG productions, as it is virtually impossible to achieve good results with it in reality).
The basic method of a 2D-to-3D conversion consists of creating two horizontally translated perspective views from a single two-dimensional image.
There are attempts to create automated conversion tools (e.g. based on depth from motion and colour segmentation).
Traditional conversion techniques include the creation of depth maps (grey-scale images which provide information about the position of an object – the darker the pixel, the further it was located from the camera). This information is used to displace the objects horizontally for the left and right image (but: displaced objects leave gaps in the images). Other techniques are: replication of the scene in 3D (very complex) and the creation of cut-outs which are horizontally translated (this leaves gaps, too).
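The depth-map technique can be illustrated with a naive pixel-displacement sketch. This is purely illustrative – real conversions use sub-pixel warping and inpainting to fill the gaps that displaced pixels leave behind:

```python
import numpy as np

def views_from_depth_map(image, depth, max_shift_px=8):
    # Displace each pixel horizontally in opposite directions for the
    # left and right view, proportionally to its depth-map value
    # (bright = near = large shift). Vacated positions stay 0 (gaps).
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    shifts = (depth.astype(np.float32) / 255.0 * max_shift_px).astype(int)
    for y in range(h):
        for x in range(w):
            s = shifts[y, x]
            if 0 <= x + s < w:
                left[y, x + s] = image[y, x]
            if 0 <= x - s < w:
                right[y, x - s] = image[y, x]
    return left, right
```

Near (bright) pixels receive a large opposing shift between the two views, i.e. a large parallax, while far (dark) pixels barely move.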
Influencing factors during playback
Altering the playback size itself does not change the three-dimensional impression. As mentioned earlier, the maximal deviation on screen is calculated following the 1/30 rule (ratio of deviation to screen width = 1/30). This is valid up to a screen width of 1.9m – above that, the deviation would exceed 63mm and lead to divergence of the eyes. For bigger productions the deviation is calculated via the ratio of interocular distance to projection magnification factor (with increasing magnification factor, the maximal screen deviation decreases).
The stereo plasticity remains (more or less) the same with increasing screen size as the viewing distance will usually increase accordingly.
- If the viewing distance remains unchanged with increasing screen size, the stereo plasticity will decrease (cf. following subchapter).
The viewing distance matters significantly in regards to the three-dimensional impact of a stereo production.
In order to understand why a short distance to the screen produces a flatter image than a large distance, we need to have a look at how natural vision works in comparison to stereo vision.
1) The width G of a box is seen under angle γ, its depth (section A', equivalent to depth T) by the right eye under angle α (g is the eyes-to-subject distance, A the interocular distance). With increasing distance g, width G and angle γ decrease accordingly (linear reduction); A' as well as angle α decrease quadratically. Therefore the ratio T/G (resp. A'/G) decreases linearly with increasing distance.
2) In stereo productions the depth impression depends on the recorded parallaxes. The offset (A', or in stereo productions V') does not change when the viewer decreases or increases the viewing distance. Situation 2a) shows a viewer positioned close to the screen (viewing distance: a); the box seems wider than it is deep, while the viewer in position 2b) will see a box which is deeper than it is wide.
- Even if the plasticity will decrease for shorter distances to the screen, people might prefer sitting in the front rows as a stereoscopic production might feel more tangible, immersive and intensive than from afar.
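The geometry of situation 2) can be turned into a small formula: for a point with positive screen parallax p, interocular distance e and viewing distance a, similar triangles give a perceived depth behind the screen of a·p/(e − p). This is a sketch under the simplifying assumptions of this chapter; p must stay below e, since p = e corresponds to parallel eyes and infinity:

```python
def depth_behind_screen_mm(parallax_mm, viewing_distance_mm, interocular_mm=63.0):
    # Similar triangles: the fixed on-screen parallax gets 'stretched'
    # linearly with the viewing distance.
    p, a, e = parallax_mm, viewing_distance_mm, interocular_mm
    assert 0 <= p < e, "positive parallax below the interocular distance"
    return a * p / (e - p)

# The same recorded parallax yields three times the depth from three
# times the distance - which is why the close viewer sees a flatter box:
print(depth_behind_screen_mm(20.0, 2000.0))
print(depth_behind_screen_mm(20.0, 6000.0))
```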
In order to verify the theories presented, part of the Bachelor thesis was a practical analysis. We presented 19 different stereo images to 12 test persons. The images included examples with different stereo bases, camera-to-subject distances, focal lengths, focus, lighting, image structure, viewing distances and horizontal image translations (including attempts to fix image decay etc.).
The in-depth experimental setup and execution as well as the stereo images can be found in our Bachelor thesis.
Over the course of researching stereoscopy and the cardboard effect, we worked out that factors like a small stereo basis, a large distance to the subject and the use of tele lenses play a part in restricting the three-dimensionality of objects in stereo productions. The test persons agreed that artistic elements like a shallow depth of field, missing lighting and the lack of monocular depth cues also contribute to a cardboard-like impact – nevertheless, they only seem to intensify the effect generated by factors like tele lenses and appear to be virtually ineffective on their own.
It was observed that most plasticity-forfeiting factors can be compensated by adjusting other factors – e.g. a small stereo basis can be compensated by a shorter focal length. It is, however, hard or sometimes impossible to compensate (to a stereoscopic plasticity of SP = 1) for extremely long focal lengths without causing painful disparities on screen which can lead to image decay.
You cannot cheat the space when you shoot 3D.Rob Engle in Mendiburu (2009)
Moreover, one should be mindful of how comfortable it is to look at a stereo production. Even if the stereoscopic plasticity is mathematically correct, this does not mean that an image is perceived as perfectly three-dimensional; in some circumstances it can even appear unpleasant to view.
Ultimately stereoscopy should not only be based on mathematics – aesthetics and pictorial design should be considered as well in order to create a convincing stereo image.
Bibliography and additional literature
This is the full list of literature we used for our 150-page Bachelor thesis.
Hi, I'm Anni. I'm a VFX Artist. 3D lover. Multimedia Producer. Travel enthusiast. Nature lover. DIY fan. Music devotee. And much more.