[Computer Graphics] Exercise Session: Viewing

CS100433 Computer Graphics Assignment 2

1 Prove that the composed transformations defined in the global coordinate frame are equivalent to the composed transformations defined in the local coordinate frame, only with the composition performed in the opposite order.

  1. Global (or World) Frame Transformations: Transformations are applied relative to a fixed global/world coordinate frame. When multiple transformations are applied, each new transformation is pre-multiplied onto the composite (column-vector convention), so for the sequence A1, A2, ..., An the composite matrix is An⋯A2·A1.
  2. Local (or Body or Object) Frame Transformations: Transformations are applied relative to the object's own local coordinate frame, which moves with the object. Each subsequent transformation is applied in the new local frame created by the previous one, so the composition order is reversed: the same sequence yields the composite A1·A2⋯An (a short derivation is sketched below).
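
The equivalence can be shown with a short matrix argument (using the column-vector convention p' = M·p, so the matrix applied last is written leftmost). Let A be the first transformation and B the second, each written as a matrix in the frame in which it is specified:

  • Global frame: both A and B are expressed in the fixed global frame, so applying A and then B gives the composite M_global = B·A.
  • Local frame: after A has been applied, the object's local frame is related to the global frame by A, so a transformation specified as B in that local frame is A·B·A⁻¹ when rewritten in global coordinates. Composing it with A gives M_local = (A·B·A⁻¹)·A = A·B.

The same elementary matrices therefore appear in both composites, only multiplied in the opposite order. By induction, for a sequence A1, A2, ..., An the global-frame composite is An⋯A2·A1 and the local-frame composite is A1·A2⋯An.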

2 Describe the differences between the orthographic and perspective 3D viewing processes. (Draw the view volumes of the two projections.)

Orthographic Projection:

  • In orthographic projection, all projection lines are parallel. Objects are projected to the viewing plane at the same size, regardless of their distance from the viewer.
  • Orthographic projection does not exhibit perspective effects; that is, the size of objects on the viewing plane does not change with distance. Objects far away appear the same size as those that are near.
  • The view volume for orthographic projection is an axis-aligned rectangular box; although it is sometimes loosely called a “view frustum,” for orthographic projection it is technically a rectangular prism.
  • Orthographic projection is commonly used in engineering drawings and certain types of games (like 2D platformers), as it accurately reflects dimensions and angles without distortion.

Perspective Projection:

  • In perspective projection, projection lines radiate from a point (the viewer’s eye) and spread outward, causing objects that are further away to appear smaller, creating a sense of depth.
  • This type of projection mimics the way the human eye observes the world, with closer objects appearing larger and distant objects appearing smaller.
  • The view volume for perspective projection is a truncated pyramid, with the apex at the viewer’s eye and the base corresponding to the far clipping plane.
  • Perspective projection is used in most 3D games and simulation environments because it provides a more natural three-dimensional appearance.
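
For concreteness, this is how the two view volumes are typically specified with GLM (the numeric values are only illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Orthographic: the view volume is an axis-aligned box given directly by the
// left/right, bottom/top extents and the near/far distances along the view direction.
glm::mat4 projOrtho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.1f, 100.0f);

// Perspective: the view volume is a truncated pyramid (frustum) given by the
// vertical field of view, the aspect ratio, and the near/far distances.
glm::mat4 projPersp = glm::perspective(glm::radians(45.0f), 16.0f / 9.0f, 0.1f, 100.0f);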

(Figures: the box-shaped view volume of orthographic projection and the frustum-shaped view volume of perspective projection.)

3 Which one defines the default NDC? Why?

glm::ortho(-1., 1., -1., 1., -1., 1.)
glm::ortho(-1., 1., -1., 1., 1., -1.)

Between glm::ortho(-1., 1., -1., 1., -1., 1.) and glm::ortho(-1., 1., -1., 1., 1., -1.), the latter defines the default NDC in OpenGL. OpenGL's default NDC is the cube [-1, 1] in x, y, and z, expressed in a left-handed frame where the positive Z-axis points into the screen, and the default projection matrix is the identity. In glm::ortho, zNear and zFar are distances measured along the viewing direction, and the generated matrix scales z by -2/(zFar - zNear). With zNear = -1 and zFar = 1 this factor is -1, so the first call negates z; with zNear = 1 and zFar = -1 the factor is +1 and all translation terms vanish, so the second call yields exactly the identity matrix, and its view volume is the default NDC cube itself.
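
This can be checked directly with GLM (assuming GLM's default clip-space configuration, i.e. a right-handed eye space with depth mapped to [-1, 1]; headers as in the snippet for question 2):

glm::mat4 A = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
glm::mat4 B = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f,  1.0f, -1.0f);

// A negates z (A[2][2] == -1), while B is exactly the 4x4 identity matrix,
// so B leaves NDC coordinates untouched and describes the default NDC cube.
bool bIsIdentity = (B == glm::mat4(1.0f));   // true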

4 What is the difference between clip space and NDC?

Clip Space:

  • Clip space is encountered after the projection transformation has been applied to the vertices of objects in the scene but before the perspective division.
  • It is a four-dimensional space because it carries the homogeneous coordinate w alongside the usual x, y, and z coordinates. The value of w is not necessarily 1: with the standard perspective projection it equals the negated eye-space depth, while with an orthographic projection it remains 1.
  • In clip space, the graphics system can perform clipping to discard geometry that is outside the viewer’s field of view or behind the camera. This is because the clip space is configured in such a way that any coordinates outside a certain range can be easily identified and excluded from the final image.

Normalized Device Coordinates (NDC):

  • After the vertices have been transformed to clip space and clipping has been performed, the perspective division is applied. This process involves dividing the x, y, and z coordinates by the w coordinate. The result of this division is the NDC space.
  • In NDC, the homogeneous coordinate w is now equal to 1. This effectively reduces the dimensionality back to three, making it suitable for the final step of rasterization, which maps these coordinates onto the two-dimensional viewport or screen.
  • The NDC space is a cubic volume where the x, y, and z coordinates range from -1 to 1. Any point within this range can be mapped directly to the viewport.
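
A minimal sketch of the step that separates the two spaces, the perspective division (GLM headers as before; the camera parameters are illustrative):

glm::mat4 proj = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);
glm::vec4 pEye(0.0f, 0.0f, -5.0f, 1.0f);       // a point 5 units in front of the camera, in eye space

glm::vec4 pClip = proj * pEye;                  // clip space: w is generally not 1 (here w = 5)
glm::vec3 pNdc  = glm::vec3(pClip) / pClip.w;   // NDC: x, y, z now lie in [-1, 1] for visible points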

5 Why is clipping performed in clip space?

  1. Efficiency: In clip space the view volume has a standardized, regular form: a vertex is inside exactly when -w ≤ x, y, z ≤ w. Because these are simple linear inequalities, vertices and primitives can be tested against the view frustum cheaply, and geometry that falls outside this volume can be identified and discarded before any further processing.
  2. Correctness: Clipping must happen before the perspective division. In clip space the w component still carries the original depth information (for a perspective projection it equals the negated eye-space depth), the inside test above works even for vertices behind the eye, and new vertices created by clipping can be interpolated linearly in homogeneous coordinates. If the division were performed first, vertices at or behind the eye (w ≤ 0) would produce undefined or sign-flipped results, and the normalized depth in NDC is no longer linear in eye-space depth, so clipping after the division would yield incorrect geometry. (The containment test is sketched below.)
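
A sketch of the clip-space containment test, assuming OpenGL's convention that depth spans [-w, w] (under Direct3D or Vulkan conventions the z test would be 0 ≤ z ≤ w):

// A clip-space vertex lies inside the view volume exactly when all three linear
// inequalities hold; no division by w is needed, and vertices behind the eye
// (negative w) simply fail the comparisons.
bool insideClipVolume(const glm::vec4& v)
{
    return -v.w <= v.x && v.x <= v.w
        && -v.w <= v.y && v.y <= v.w
        && -v.w <= v.z && v.z <= v.w;
}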

6 What is the cause of Z-fighting? And can we solve Z-fighting?

Z-fighting is caused by the limited precision of the depth buffer. Under perspective projection the mapping from eye-space depth to normalized depth is nonlinear, so most of the depth resolution is concentrated near the near clipping plane and precision drops rapidly farther away. When two surfaces are very close together and their quantized depth values become nearly identical, the renderer cannot consistently decide which surface is in front, producing the flickering, shimmering pattern in the rendered image known as Z-fighting.

To address the issue of Z-fighting, the following solutions can be implemented:

  1. Push the near clipping plane farther away: moving the near plane out as far as the scene allows reduces the far-to-near distance ratio, which spreads the depth buffer's precision more evenly across the view volume instead of concentrating it right in front of the camera, and thus alleviates Z-fighting for distant geometry.
  2. Increase the precision of the Z-buffer: using a depth buffer with more bits increases the precision of stored depth values. For example, upgrading from a 16-bit to a 24-bit or 32-bit depth buffer can significantly reduce the occurrence of Z-fighting, at the cost of extra storage and potentially some performance. (Both mitigations are sketched below.)
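
A minimal sketch of both mitigations, assuming a GLFW/GLM setup (setupDepthPrecision is a hypothetical helper; it must run after glfwInit() and before glfwCreateWindow(), and the numbers are only illustrative):

#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 setupDepthPrecision(float aspect)
{
    // 1. Request a higher-precision depth buffer (e.g. 24 bits rather than 16).
    glfwWindowHint(GLFW_DEPTH_BITS, 24);

    // 2. Push the near plane out: raising zNear from, say, 0.01 to 0.5 shrinks the
    //    far/near ratio and spreads depth precision more evenly over the view volume.
    return glm::perspective(glm::radians(45.0f), aspect, 0.5f, 100.0f);
}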
