I thought I would share my thoughts on this subject with the Whole World Out There (WWOT). In this post I will be sharing viva questions that I asked my students, along with their responses. I knew them very well, and the questions were meant to test their knowledge as well as reveal some new information to get them interested in the subject. The questions are in no particular order, and I like the last question.
1) What is an image? Ans: A collection of pixels.
2) What does an image look like? Ans: It has the shape of a square or rectangle.
3) What is resolution? Ans: Dots per inch (DPI) for printers and pixels per inch (PPI) for monitors.
4) What is aspect ratio? Ans: The ratio of the width to the height of an image, usually 4:3 for television and 16:9 for HDTV.
5) What is compression ratio? Ans: The ratio of the original (uncompressed) file size to the compressed file size.
Usual values (approx.) range from 1.1-1.5 for text compression to 5-10 for audio, 5-20 for images and 15-20 for video.
6) What is the usual frequency range of music? Ans: 50-20,000 Hz
7) What should be the minimum sampling rate for a signal whose highest frequency component is hf?
Ans: According to the sampling theorem, sampling must be carried out at at least twice the highest frequency component (2*hf); this minimum rate is called the Nyquist rate. For example, audio with frequency components up to 20,000 Hz needs at least 40,000 samples per second, which is why CD audio is sampled at 44,100 samples per second.
8) What are geometric transformations?
Transformations are changes in the positions of points, e.g. translation, rotation and scaling.
If a transformation does not change the relative positions of points, preserving lengths and angles, it is called a rigid-body transformation. Rigid-body transformations are compositions of translations and rotations only.
9) What is composition of transformations?
Transformations can be cascaded, one performed after another. Composition means performing two or more transformations in sequence, and the composite transformation matrix is the product of the individual transformation matrices, as the sketch below illustrates.
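A minimal sketch in C (illustrative names, using 3x3 matrices for 2D transformations in homogeneous coordinates) of how composition reduces to matrix multiplication:

void compose(const double a[3][3], const double b[3][3], double c[3][3])
{
    /* c = a*b: applying c to a point is the same as applying b first, then a */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}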
10) Is a translation followed by a rotation the same as first performing the rotation and then the translation?
Ans: This is a higher-level question designed to check the higher-order thinking of students. It pertains to whether transformations are commutative, i.e. whether TR = RT.
The answer is no: transformations are not commutative. Why? It can easily be checked with a simple drawing, or with the worked example below.
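A simple worked example: take the point P = (1,0), let R be a rotation by 90 degrees about the origin (which maps (x,y) to (-y,x)), and let T be a translation by (1,0). Rotating first and translating second gives R(P) = (0,1) and then (1,1); translating first and rotating second gives T(P) = (2,0) and then (0,2). The results differ, so the order matters.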
11) Are successive translations additive?
Yes. The parallelogram rule applies: translating by (tx1,ty1) and then by (tx2,ty2) is a single translation by (tx1+tx2, ty1+ty2).
12) Are successive rotations additive?
Yes, provided the rotations are about the same point: a rotation by angle a followed by a rotation by angle b about that point is a rotation by a+b.
13) Are successive scalings additive?
No, they are multiplicative, i.e. if a scaling is done by (sx1,sy1) and then by (sx2,sy2), the total transformation is a scaling by (sx1*sx2, sy1*sy2).
Example: if a body is scaled by (2,2) and then by (3,3), the total scaling is (2x3, 2x3), i.e. (6,6).
14) What is non-uniform scaling?
If the scaling factors sx and sy are not equal, the object gets scaled by different amounts in the horizontal and vertical directions; this is called non-uniform scaling.
15) What is the increase in size of a 2D object if scaling is done by a factor of (1,1)?
There is none; the object's size and position do not change.
16) Does a transformation matrix always need to be square?
A transformation matrix needs to be square because only a square matrix preserves the number of dimensions after the transformation.
17) What significance does the determinant of a transformation matrix have?
If a unit square or a unit cube is transformed by a matrix (2D or 3D), the determinant gives the area/volume of the square/cube after the transformation. (See https://en.wikipedia.org/wiki/Determinant, or Nykamp DQ, "The relationship between determinants and area or volume," Math Insight, http://mathinsight.org/relationship_determinants_area_volume.)
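For example, the 2D scaling matrix with sx = 2 and sy = 3 has determinant 2 x 3 = 6, and it maps the unit square (area 1) to a 2 x 3 rectangle of area 6.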
18) What is the distance between two points (x1,y1) and (x2,y2)?
It is sqrt((x2-x1)^2+(y2-y1)^2)
19) If a translation is done by (tx,ty), by what distance does the object move?
It moves by sqrt(tx*tx+ty*ty)
20) How can you find the normal to a plane?
Take three points on the plane and form two vectors from one point to the other two. The cross product of these two vectors (taken in one order or the other) gives two possible normals: one going into the object formed by the plane and the other going out of it. Use the vector going out as the surface normal (see the C sketch after question 22 below).
21) How do you normalize a vector?
To normalize a vector means to change its length to unity (1) without changing its direction. If a vector has components v=(a,b), then the normalized vector is (a/|v|, b/|v|), where |v| is the length of the vector.
22) What is the length of the vector v=(a,b)?
The length of vector v, i.e. |v|, is sqrt(a*a+b*b).
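A minimal C sketch tying questions 20-22 together (function and variable names are my own): compute the normal of the plane through points p, q and r, and normalize it to unit length.

#include <math.h>

void unit_normal(const double p[3], const double q[3], const double r[3],
                 double n[3])
{
    double u[3], v[3], len;
    for (int i = 0; i < 3; i++) {     /* two vectors lying in the plane */
        u[i] = q[i] - p[i];
        v[i] = r[i] - p[i];
    }
    n[0] = u[1]*v[2] - u[2]*v[1];     /* n = u x v; computing v x u instead */
    n[1] = u[2]*v[0] - u[0]*v[2];     /* would give the oppositely directed */
    n[2] = u[0]*v[1] - u[1]*v[0];     /* normal */
    len = sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);   /* length |n| */
    for (int i = 0; i < 3; i++)
        n[i] /= len;                  /* normalize to unit length */
}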
23) How do you find the mean of a set of data points in n-dimensional space?
Adding the corresponding components of all the points and dividing by the number of data points gives a point which lies at the center of the cloud of data points.
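A minimal C sketch of this (names are my own), with the points stored one after another in a flat array:

void mean_point(const double *pts, int num, int dim, double *mean)
{
    for (int j = 0; j < dim; j++)
        mean[j] = 0.0;
    for (int i = 0; i < num; i++)      /* sum corresponding components */
        for (int j = 0; j < dim; j++)
            mean[j] += pts[i*dim + j];
    for (int j = 0; j < dim; j++)
        mean[j] /= num;                /* divide by the number of points */
}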
24) For a given transformation matrix, what significance do its eigenvalues and eigenvectors have?
Eigenvectors give all those vectors which do not change direction after the transformation, and each eigenvalue gives the magnitude of stretching along the corresponding eigenvector direction. In the example taken from Wikipedia, the transformation matrix A = [2 1; 1 2] preserves the direction of vectors parallel to v = (1,-1)^T (eigenvalue 1, shown in purple) and w = (1,1)^T (eigenvalue 3, shown in blue). Vectors not parallel to either eigenvector (shown in red) have their directions changed by the transformation.
(This question is PhD level and cannot be expected to be answered by a graphics student, but I thought it would inspire some, and it is good to know.)
The following questions might be useful for students studying Computer Graphics with OpenGL.
1) What are the applications of Computer Graphics?
Computer Graphics finds applications in Display of Information, Design of architectural buildings and VLSI circuits, Simulation and Animation, and User Interfaces.
2) What are the two types of Display Devices based on CRT Technology?
Raster CRT and Vector CRT. In a Raster CRT the electron beam traces the screen from left to right in row-wise order and, at the end of the last row, retraces back to the first row. Such devices require scan conversion. In a Vector CRT or Calligraphic CRT, most often found in oscilloscopes, the electron beam can be moved from any position to any other position; the beam can be turned off to move to a new position. Raster CRTs can be further classified as Interlaced and Non-Interlaced.
3) In a Color CRT, what role does the shadow mask play?
The shadow mask is a metal screen with small holes that ensure an electron beam excites only phosphors of the proper color.
4) What are the Basic Transformations in Computer Graphics?
Translation - moving objects from one point to another; Rotation - rotating objects about a fixed point either inside or outside the object; Scaling - increasing or decreasing the size of objects; and Shear - causing objects to change shape as if a force were applied along one side.
5) Give an introduction to the types of geometric primitives that can be drawn with OpenGL.
The type of primitive to be drawn is specified as a parameter to glBegin(). We can draw points (GL_POINTS), lines (GL_LINES), polylines (GL_LINE_STRIP, GL_LINE_LOOP), polygons (GL_POLYGON), triangles and quadrilaterals (GL_TRIANGLES, GL_QUADS), and strips and fans (GL_TRIANGLE_STRIP, GL_QUAD_STRIP, GL_TRIANGLE_FAN). The vertices of the primitive are specified using functions such as glVertex2f or glVertex3i, where the suffix f stands for float, d for double and i for integer.
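For example, a single triangle might be drawn as follows (the coordinates are arbitrary):

glBegin(GL_TRIANGLES);      /* the parameter selects the primitive type */
glVertex2f(0.0, 0.0);       /* three vertices make one triangle */
glVertex2f(1.0, 0.0);
glVertex2f(0.5, 1.0);
glEnd();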
6) How can you specify a viewer?
If you are writing a 2D program, a viewer can be specified using gluOrtho2D with parameters left, right, bottom, top, which specify a viewing rectangle within the projection plane, i.e. the z=0 or x-y plane. Using this function causes all points outside the clipping rectangle to be invisible.
If it is a 3D program, we need to specify a view volume. The shape of the view volume depends on the type of projection. For an orthographic projection the view volume is a cuboid specified by glOrtho with parameters left, right, bottom, top, near, far. Any 3D object within this volume will be visible and any object outside it will not. The projection plane is again the z=0 plane.
For a perspective projection we can specify a view volume in the shape of a frustum of a pyramid, using glFrustum with parameters left, right, bottom, top, near, far, or gluPerspective with parameters field of view, aspect ratio, near, far.
We can specify a viewer position in such cases using gluLookAt with parameters eyex, eyey, eyez, atx, aty, atz, upx, upy, upz.
Remember that a 3D program can also draw 2D objects, but not vice versa.
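As a sketch, a typical 3D viewer setup might look like the following (the numeric values are arbitrary):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 4.0/3.0, 1.0, 100.0);  /* field of view, aspect ratio, near, far */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,    /* eye position */
          0.0, 0.0, 0.0,    /* at point */
          0.0, 1.0, 0.0);   /* up direction */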
7) What is hidden surface removal?
Hidden surface removal, or visible surface determination, is a technique used to achieve realism in 3D. When objects (polygons) are drawn on screen, the final image shows the polygons drawn last completely overlapping those drawn before. We should not draw polygons which are not visible; hence there is an order in which, if we draw the polygons, the scene will look real. This method of enforcing an order while drawing polygons is called the painter's algorithm. There are other techniques, like the z-buffer algorithm, which rely on removing parts of the image that are farther from the viewer and hence obscured by other polygons. When projecting points onto a 2D surface the transformation is not invertible, as all points lying on a projector map onto the same point; to perform hidden surface removal we retain depth information - the distance along a projector - as long as possible.
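In OpenGL the z-buffer algorithm can be requested and enabled as follows (a minimal sketch):

glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH);          /* request a depth buffer */
glEnable(GL_DEPTH_TEST);                             /* enable the z-buffer test */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* clear both buffers every frame */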
8) What is projection normalization?
We use a technique
called projection normalization, which converts all projections into
orthogonal projections by first distorting the objects such that the
orthogonal projection of the distorted objects is the same as the
desired projection of the original objects.
9) What is the minimum refresh rate required for real-time video?
The screen needs to be refreshed to draw slightly different images in a video. The transitions between successive images become noticeable if the refresh rate is less than 24 frames per second; hence real-time video refers to a refresh rate of at least 24 images per second. Each image consequently needs to be generated within 1/24th of a second, or about 41.67 milliseconds.
10) What are the stages of a graphics pipeline? How is pipelining useful?
The graphics pipeline contains four stages: Vertex Processing, Clipping and Primitive Assembly, Rasterization, and Fragment Processing. Pipelining is used whenever multiple sets of data need to be processed the same way. For example, a complex graphics scene might consist of millions of polygons which need to be processed. If the vertices of the polygons are sent into the pipeline one at a time, the four stages mentioned earlier can process them in parallel. Pipelining appreciably increases the throughput of processing the data elements, while the latency of processing each element increases slightly.
11) What are the differences between additive colors and subtractive colors?
Examples of additive color devices are CRT monitors, projectors and slide (positive) film. An example of subtractive color is a color printer, which uses cyan, magenta and yellow. In additive color, primary colors like Red, Green and Blue add together to give the perceived color: the primaries add light to an initially black display, yielding the desired color. In subtractive color we assume that white light hits the surface; a particular point will be red if all the components of the incoming light are absorbed by the surface except for wavelengths in the red part of the spectrum, which are reflected.
12) What is a color look-up table?
Suppose that the frame buffer has k bits per pixel. Each pixel value or index is an integer between 0 and 2^k - 1. Suppose that we can display colors with a precision of m bits, i.e. 2^m reds, 2^m greens and 2^m blues; hence we can display any of 2^(3m) colors, but the frame buffer can specify only 2^k of them. We handle this through a user-defined lookup table of size 2^k x 3m bits. The user program fills the 2^k entries (rows) of the table with the desired colors. Once the LUT is populated, we can specify a color by its index in the LUT. For k = m = 8, a common configuration, we can choose the 256 colors to be used in any image out of about 16 million colors. These 256 colors are called the palette.
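A minimal C sketch for k = m = 8 (names are my own): the frame buffer holds an 8-bit index per pixel, and each palette row holds three 8-bit color values.

unsigned char palette[256][3];      /* 2^k rows, each holding three m-bit values */

void lookup(unsigned char index, unsigned char rgb[3])
{
    rgb[0] = palette[index][0];     /* red   */
    rgb[1] = palette[index][1];     /* green */
    rgb[2] = palette[index][2];     /* blue  */
}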
13) What is frame buffer? What value is stored in the frame buffer? What is color depth? What is double buffering?
The frame buffer, also called video RAM, is a memory buffer which stores the image that is currently being displayed on the display screen.
The frame buffer stores the pixel values. The number of pixel values depends on the horizontal and vertical resolution of the display, e.g. 1024x768.
Color depth is the number of bits required to store a pixel value, usually stated as bits per pixel.
Double buffering is used to speed up animation. Animation requires us to draw objects with slight displacements, and these calculations need to be done in real time at the rate of 24 frames per second for smooth animation. Double buffering helps refresh the image displayed on screen with just a change in the base address of the VRAM. Usually, when double buffering is used, the frame buffer size is twice that required to store one image on screen.
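With GLUT, double buffering might be used as follows (a minimal sketch). During initialization:

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);  /* request front and back buffers */

Then, in the display callback:

glClear(GL_COLOR_BUFFER_BIT);
/* draw the next frame into the back buffer here */
glutSwapBuffers();                            /* swap buffers to show the finished frame */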
14) How are shadows generated in OpenGL?
A shadow, in order to be drawn, needs to be generated by projecting the light rays coming from a light source. In viewing, projections are already used to generate images, and the same transformations can be reused to generate shadows. The Centre of Projection (COP) is now the position of the light source. The shadow is assumed to be produced on a projection plane, say the ground (y=0 plane). The shadow polygon so generated is drawn in a shadow color and passes through the graphics pipeline as would any other primitive.
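A minimal sketch of this, assuming a point light at (xl, yl, zl) and a shadow cast on the ground plane y = 0 (the variable names are my own):

GLfloat m[16] = {0.0f};          /* shadow projection matrix, column-major */
m[0] = m[5] = m[10] = 1.0f;
m[7] = -1.0f / yl;

glPushMatrix();
glTranslatef(xl, yl, zl);        /* move everything back */
glMultMatrixf(m);                /* project through the origin */
glTranslatef(-xl, -yl, -zl);     /* translate the light source to the origin */
/* draw the shadow polygon here in the shadow color */
glPopMatrix();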
15) How can a polygon have a 2D-texture on its surface?
At various stages we work with screen coordinates, object coordinates, texture coordinates (which we use to locate a position within the texture) and parametric coordinates (which are used to describe curved surfaces). A 2D texture can be created by populating an array such as
GLubyte my_texels[512][512][3];
This array is then passed as a parameter to the function
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);
where the parameters mean target, level, internal format, width, height, border, format, type and the texel array. Level and border give us fine control over how the texture is handled. We must enable texture mapping, as we do other options, using glEnable(GL_TEXTURE_2D); There are separate memories: physical memory, the frame buffer and texture memory. Texture coordinates are mapped to the vertices by calling glTexCoord2f(s,t);
The following block of code assigns the texture to a quadrilateral:
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0);   /* each glTexCoord2f sets the texture coordinate */
glVertex3f(x1,y1,z1);    /* for the vertex specified just after it */
glTexCoord2f(1.0,0.0);
glVertex3f(x2,y2,z2);
glTexCoord2f(1.0,1.0);
glVertex3f(x3,y3,z3);
glTexCoord2f(0.0,1.0);
glVertex3f(x4,y4,z4);
glEnd();
16) What are quaternions?
Quaternions are an extension of complex numbers that provide an alternative method for describing and manipulating rotations. Quaternions provide advantages for animation and for hardware implementation of rotation. Complex numbers arise in the representation of rotations through Euler's formula.
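For example, a rotation by angle theta about a unit axis (ax, ay, az) can be represented by the quaternion q = cos(theta/2) + sin(theta/2)(ax i + ay j + az k); a point p, written as a quaternion with zero real part, is then rotated by computing q p q^-1.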
Adieu.... BBye