Last time we went over the basic rendering flow of the Piccolo engine, including the data exchange between the Logic Tick and the Render Tick and the rough structure of the Render Tick. In this note we look at how those steps are actually implemented; building on that, the next note will try to add a Color Grading pass to the rendering pipeline.
1 Overall Flow of the Render Tick
First, recall the overall Render Tick flow we saw in the previous note:
```cpp
void RenderSystem::tick()
{
    // swap the data produced by the logic tick into the render-side structures
    processSwapData();

    // select the command buffer for the current frame
    m_rhi->prepareContext();

    // update per-frame scene data (camera, lights, ...)
    m_render_resource->updatePerFrameBuffer(m_render_scene, m_render_camera);

    // culling: collect the objects visible to the camera and the lights
    m_render_scene->updateVisibleObjects(std::static_pointer_cast<RenderResource>(m_render_resource),
                                         m_render_camera);

    // hand the prepared per-frame data to the individual passes
    m_render_pipeline->preparePassData(m_render_resource);

    if (m_render_pipeline_type == RENDER_PIPELINE_TYPE::FORWARD_PIPELINE)
    {
        m_render_pipeline->forwardRender(m_rhi, m_render_resource);
    }
    else if (m_render_pipeline_type == RENDER_PIPELINE_TYPE::DEFERRED_PIPELINE)
    {
        m_render_pipeline->deferredRender(m_rhi, m_render_resource);
    }
    else
    {
        LOG_ERROR(__FUNCTION__, "unsupported render pipeline type");
    }
}
```
processSwapData() was already covered in the previous note. m_rhi->prepareContext() prepares for command recording; internally it simply sets up the command buffer for the current frame:
```cpp
void VulkanRHI::prepareContext()
{
    m_p_current_frame_index  = &m_current_frame_index;
    m_current_command_buffer = m_command_buffers[m_current_frame_index];
    m_p_command_buffers      = m_command_buffers;
    m_p_command_pools        = m_command_pools;
}
```
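prepareContext() only points the current command buffer at the slot for this frame; the frame index itself cycles through a small ring of per-frame command buffers (frames in flight) and is advanced after the frame is submitted. Below is a minimal, self-contained sketch of that ring pattern; the names (s_max_frames_in_flight, FrameRing, etc.) are illustrative assumptions, not the engine's types:

```cpp
#include <array>
#include <cstdint>

// Hypothetical stand-ins for the command buffer / pool handles used by the RHI.
struct CommandBuffer {};

class FrameRing
{
public:
    static constexpr uint8_t s_max_frames_in_flight = 3; // assumption: triple buffering

    // Equivalent of prepareContext(): pick the command buffer for the current frame.
    CommandBuffer* prepareContext()
    {
        m_current_command_buffer = &m_command_buffers[m_current_frame_index];
        return m_current_command_buffer;
    }

    // After the frame has been submitted, move on to the next slot in the ring.
    void advanceFrame()
    {
        m_current_frame_index = (m_current_frame_index + 1) % s_max_frames_in_flight;
    }

private:
    uint8_t                                           m_current_frame_index {0};
    std::array<CommandBuffer, s_max_frames_in_flight> m_command_buffers {};
    CommandBuffer*                                    m_current_command_buffer {nullptr};
};
```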
The remaining calls deserve a closer look, so let us go through them one by one.
2 updatePerFrameBuffer
m_render_resource->updatePerFrameBuffer(m_render_scene, m_render_camera) prepares the per-frame scene data: the view-projection matrix, the camera position, the ambient light, the number of point lights, each point light's intensity, position and attenuation radius, and the directional light's properties. All of this is stored into the corresponding members of the RenderResource class for use later in the frame:
```cpp
void RenderResource::updatePerFrameBuffer(std::shared_ptr<RenderScene>  render_scene,
                                          std::shared_ptr<RenderCamera> camera)
{
    Matrix4x4 view_matrix      = camera->getViewMatrix();
    Matrix4x4 proj_matrix      = camera->getPersProjMatrix();
    Vector3   camera_position  = camera->position();
    glm::mat4 proj_view_matrix = GLMUtil::fromMat4x4(proj_matrix * view_matrix);

    Vector3  ambient_light   = render_scene->m_ambient_light.m_irradiance;
    uint32_t point_light_num = static_cast<uint32_t>(render_scene->m_point_light_list.m_lights.size());

    m_mesh_perframe_storage_buffer_object.proj_view_matrix = proj_view_matrix;
    m_mesh_perframe_storage_buffer_object.camera_position  = GLMUtil::fromVec3(camera_position);
    m_mesh_perframe_storage_buffer_object.ambient_light    = ambient_light;
    m_mesh_perframe_storage_buffer_object.point_light_num  = point_light_num;

    m_mesh_point_light_shadow_perframe_storage_buffer_object.point_light_num = point_light_num;

    for (uint32_t i = 0; i < point_light_num; i++)
    {
        Vector3 point_light_position  = render_scene->m_point_light_list.m_lights[i].m_position;
        Vector3 point_light_intensity =
            render_scene->m_point_light_list.m_lights[i].m_flux / (4.0f * glm::pi<float>());

        float radius = render_scene->m_point_light_list.m_lights[i].calculateRadius();

        m_mesh_perframe_storage_buffer_object.scene_point_lights[i].position  = point_light_position;
        m_mesh_perframe_storage_buffer_object.scene_point_lights[i].radius    = radius;
        m_mesh_perframe_storage_buffer_object.scene_point_lights[i].intensity = point_light_intensity;

        m_mesh_point_light_shadow_perframe_storage_buffer_object.point_lights_position_and_radius[i] =
            Vector4(point_light_position, radius);
    }

    m_mesh_perframe_storage_buffer_object.scene_directional_light.direction =
        render_scene->m_directional_light.m_direction.normalisedCopy();
    m_mesh_perframe_storage_buffer_object.scene_directional_light.color = render_scene->m_directional_light.m_color;

    m_mesh_inefficient_pick_perframe_storage_buffer_object.proj_view_matrix = proj_view_matrix;

    m_particlebillboard_perframe_storage_buffer_object.proj_view_matrix = proj_view_matrix;
    m_particlebillboard_perframe_storage_buffer_object.eye_position     = GLMUtil::fromVec3(camera_position);
    m_particlebillboard_perframe_storage_buffer_object.up_direction     = GLMUtil::fromVec3(camera->up());
}
```
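Note how the point light intensity is obtained: the radiant flux is divided by 4π, the solid angle of the full sphere around an isotropic emitter. calculateRadius() then derives a bounding radius for culling from that intensity. The sketch below shows one plausible way to do such a derivation, picking the distance at which the attenuated intensity drops below a cutoff; the function name and the cutoff constants are assumptions for illustration, not the engine's exact code:

```cpp
#include <algorithm>
#include <cmath>

// Sketch: derive a point light's effective radius from its flux.
// Constants and name are illustrative assumptions, not Piccolo's implementation.
float estimatePointLightRadius(float flux)
{
    constexpr float k_pi                 = 3.14159265358979f;
    constexpr float k_intensity_cutoff   = 1.0f;  // assumed minimum intensity worth shading
    constexpr float k_attenuation_cutoff = 0.05f; // assumed relative falloff threshold

    // Intensity of an isotropic point light: flux spread over the full sphere (4*pi sr).
    float intensity = flux / (4.0f * k_pi);

    // Distance r at which intensity / r^2 falls below the cutoff: r = sqrt(intensity / cutoff).
    float cutoff = std::max(k_intensity_cutoff, k_attenuation_cutoff * intensity);
    return std::sqrt(intensity / cutoff);
}
```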
3 updateVisibleObjects
```cpp
m_render_scene->updateVisibleObjects(std::static_pointer_cast<RenderResource>(m_render_resource),
                                     m_render_camera);
```
This function pre-computes which objects are visible, culls objects that are entirely invisible so they never enter the later rendering stages, and records the attributes of the visible objects for later use:
```cpp
void RenderScene::updateVisibleObjects(std::shared_ptr<RenderResource> render_resource,
                                       std::shared_ptr<RenderCamera>   camera)
{
    updateVisibleObjectsDirectionalLight(render_resource, camera);
    updateVisibleObjectsPointLight(render_resource);
    updateVisibleObjectsMainCamera(render_resource, camera);
    updateVisibleObjectsAxis(render_resource);
    updateVisibleObjectsParticle(render_resource);
}
```
It dispatches to several visibility tests: object visibility with respect to the directional light, to the point lights, and to the main camera, plus the visibility of the editor axis. The axis test exists because the gizmo has to be drawn when an object is selected in edit mode. The last one, updateVisibleObjectsParticle, is not implemented yet. Let us take updateVisibleObjectsMainCamera as an example of what a visibility test actually does; the others follow the same pattern and are summarized afterwards.
```cpp
void RenderScene::updateVisibleObjectsMainCamera(std::shared_ptr<RenderResource> render_resource,
                                                 std::shared_ptr<RenderCamera>   camera)
{
    m_main_camera_visible_mesh_nodes.clear();

    Matrix4x4 view_matrix      = camera->getViewMatrix();
    Matrix4x4 proj_matrix      = camera->getPersProjMatrix();
    Matrix4x4 proj_view_matrix = proj_matrix * view_matrix;

    ClusterFrustum f = CreateClusterFrustumFromMatrix(GLMUtil::fromMat4x4(proj_view_matrix),
                                                      -1.0, 1.0, -1.0, 1.0, 0.0, 1.0);

    for (const RenderEntity& entity : m_render_entities)
    {
        BoundingBox mesh_asset_bounding_box {entity.m_bounding_box.getMinCorner(),
                                             entity.m_bounding_box.getMaxCorner()};

        if (TiledFrustumIntersectBox(f,
                                     BoundingBoxTransform(mesh_asset_bounding_box,
                                                          GLMUtil::fromMat4x4(entity.m_model_matrix))))
        {
            m_main_camera_visible_mesh_nodes.emplace_back();
            RenderMeshNode& temp_node = m_main_camera_visible_mesh_nodes.back();

            temp_node.model_matrix = GLMUtil::fromMat4x4(entity.m_model_matrix);

            assert(entity.m_joint_matrices.size() <= m_mesh_vertex_blending_max_joint_count);
            for (size_t joint_index = 0; joint_index < entity.m_joint_matrices.size(); joint_index++)
            {
                temp_node.joint_matrices[joint_index] = GLMUtil::fromMat4x4(entity.m_joint_matrices[joint_index]);
            }
            temp_node.node_id = entity.m_instance_id;

            VulkanMesh& mesh_asset           = render_resource->getEntityMesh(entity);
            temp_node.ref_mesh               = &mesh_asset;
            temp_node.enable_vertex_blending = entity.m_enable_vertex_blending;

            VulkanPBRMaterial& material_asset = render_resource->getEntityMaterial(entity);
            temp_node.ref_material            = &material_asset;
        }
    }
}
```
First the camera's projection-view matrix is fetched and used to build the view frustum in world space; we covered this before, see the earlier note 【光栅化渲染器】(六)剔除与裁剪, section 2, for details. With the frustum in hand, visibility is decided by intersecting it with each entity's bounding box; if the entity is visible, its model matrix, joint matrices, material and other attributes are recorded.
For the directional light's visibility test, the bounding box of the region covered by the directional light is computed first, and the subsequent steps are the same. For point lights, the spherical region covered by each light is computed as a bounding sphere, and the same procedure is then run once per point light.
As for the intersection functions, there are two kinds: frustum vs. bounding box and bounding box vs. bounding sphere. We have studied the former before: it checks the relation between the box's corner points and the six frustum planes. The latter compares the distance from the sphere center to the bounding box with the sphere radius. The concrete implementations can be found in the engine source.
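As a refresher on how such tests work, here is a self-contained sketch of the two ideas (not the engine's TiledFrustumIntersectBox / sphere test): an AABB is rejected when, for some frustum plane, even its most "inside" corner lies on the negative side, and a sphere intersects an AABB when the distance from the sphere center to the closest point on the box is at most the radius:

```cpp
#include <algorithm>
#include <array>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };            // plane: dot(n, p) + d >= 0 means "inside"
struct AABB   { Vec3 min_corner, max_corner; };
struct Sphere { Vec3 center; float radius; };

// Frustum vs. AABB: test only the corner farthest along each plane normal (equivalent
// to checking all eight corners against that plane).
bool frustumIntersectAABB(const std::array<Plane, 6>& frustum, const AABB& box)
{
    for (const Plane& plane : frustum)
    {
        Vec3 p {plane.n.x >= 0 ? box.max_corner.x : box.min_corner.x,
                plane.n.y >= 0 ? box.max_corner.y : box.min_corner.y,
                plane.n.z >= 0 ? box.max_corner.z : box.min_corner.z};
        if (plane.n.x * p.x + plane.n.y * p.y + plane.n.z * p.z + plane.d < 0)
            return false; // even the most "inside" corner is behind this plane
    }
    return true;
}

// Sphere vs. AABB: clamp the center into the box and compare the squared distance
// to the squared radius.
bool sphereIntersectAABB(const Sphere& s, const AABB& box)
{
    float dx = std::max({box.min_corner.x - s.center.x, 0.0f, s.center.x - box.max_corner.x});
    float dy = std::max({box.min_corner.y - s.center.y, 0.0f, s.center.y - box.max_corner.y});
    float dz = std::max({box.min_corner.z - s.center.z, 0.0f, s.center.z - box.max_corner.z});
    return dx * dx + dy * dy + dz * dz <= s.radius * s.radius;
}
```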
4 preparePassData
m_render_pipeline->preparePassData(m_render_resource) prepares the data needed by the individual render passes, including the main camera pass, the pick pass used when an object is selected in edit mode (mainly for drawing the axis gizmo), the directional light shadow pass, and the point light shadow pass:
```cpp
void RenderPipelineBase::preparePassData(std::shared_ptr<RenderResourceBase> render_resource)
{
    m_main_camera_pass->preparePassData(render_resource);
    m_pick_pass->preparePassData(render_resource);
    m_directional_light_pass->preparePassData(render_resource);
    m_point_light_shadow_pass->preparePassData(render_resource);
}
```
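For most passes, preparePassData() simply copies the per-frame storage-buffer objects that updatePerFrameBuffer() just filled in from the RenderResource into the pass's own members, so the data is at hand when the pass records its commands. A self-contained sketch of that pattern, using illustrative stand-in types rather than the real engine definitions:

```cpp
#include <memory>

// Illustrative stand-ins; the real engine types carry the proj_view matrix, lights, etc.
struct MeshPerframeStorageBufferObject { /* proj_view_matrix, camera_position, lights ... */ };

struct RenderResourceSketch
{
    MeshPerframeStorageBufferObject m_mesh_perframe_storage_buffer_object;
};

struct MainCameraPassSketch
{
    MeshPerframeStorageBufferObject m_mesh_perframe_storage_buffer_object;

    // Copy the per-frame data prepared by the resource manager so the pass can later
    // upload it into its per-frame ring buffer when recording draw commands.
    void preparePassData(const std::shared_ptr<RenderResourceSketch>& render_resource)
    {
        if (render_resource)
        {
            m_mesh_perframe_storage_buffer_object = render_resource->m_mesh_perframe_storage_buffer_object;
        }
    }
};
```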
5 forwardRender
With the scene lights, camera, objects and pass data prepared, rendering can begin. The next step selects the pipeline according to the chosen rendering path:
```cpp
if (m_render_pipeline_type == RENDER_PIPELINE_TYPE::FORWARD_PIPELINE)
{
    m_render_pipeline->forwardRender(m_rhi, m_render_resource);
}
else if (m_render_pipeline_type == RENDER_PIPELINE_TYPE::DEFERRED_PIPELINE)
{
    m_render_pipeline->deferredRender(m_rhi, m_render_resource);
}
else
{
    LOG_ERROR(__FUNCTION__, "unsupported render pipeline type");
}
```
Let us look at forward rendering, forwardRender, first:
```cpp
void RenderPipeline::forwardRender(std::shared_ptr<RHI> rhi, std::shared_ptr<RenderResourceBase> render_resource)
{
    VulkanRHI*      vulkan_rhi      = static_cast<VulkanRHI*>(rhi.get());
    RenderResource* vulkan_resource = static_cast<RenderResource*>(render_resource.get());

    vulkan_resource->resetRingBufferOffset(vulkan_rhi->m_current_frame_index);

    vulkan_rhi->waitForFences();

    vulkan_rhi->resetCommandPool();

    bool recreate_swapchain =
        vulkan_rhi->prepareBeforePass(std::bind(&RenderPipeline::passUpdateAfterRecreateSwapchain, this));
    if (recreate_swapchain)
    {
        return;
    }

    static_cast<DirectionalLightShadowPass*>(m_directional_light_pass.get())->draw();

    static_cast<PointLightShadowPass*>(m_point_light_shadow_pass.get())->draw();

    ColorGradingPass& color_grading_pass = *(static_cast<ColorGradingPass*>(m_color_grading_pass.get()));
    FXAAPass&         fxaa_pass          = *(static_cast<FXAAPass*>(m_fxaa_pass.get()));
    ToneMappingPass&  tone_mapping_pass  = *(static_cast<ToneMappingPass*>(m_tone_mapping_pass.get()));
    UIPass&           ui_pass            = *(static_cast<UIPass*>(m_ui_pass.get()));
    CombineUIPass&    combine_ui_pass    = *(static_cast<CombineUIPass*>(m_combine_ui_pass.get()));

    static_cast<MainCameraPass*>(m_main_camera_pass.get())
        ->drawForward(color_grading_pass,
                      fxaa_pass,
                      tone_mapping_pass,
                      ui_pass,
                      combine_ui_pass,
                      vulkan_rhi->m_current_swapchain_image_index);

    vulkan_rhi->submitRendering(std::bind(&RenderPipeline::passUpdateAfterRecreateSwapchain, this));
}
```
Once the preparation is done, the directional light shadow pass and the point light shadow pass are drawn first to generate the shadow maps, followed by the main camera pass. Inside the main camera pass's drawForward(...), the following are executed in order (a sketch of how such stages can be chained inside one render pass follows the list):
- drawMeshLighting()
- drawSkybox()
- drawBillboardParticle() (not implemented)
- tone_mapping_pass.draw()
- color_grading_pass.draw()
- if (m_enable_fxaa) fxaa_pass.draw()
- drawAxis()
- ui_pass.draw() (not implemented)
- combine_ui_pass.draw() (not implemented)
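These stages are typically recorded back-to-back inside a single Vulkan render pass, separated by subpass transitions, so the intermediate images can stay on-chip between lighting and the post-process steps. The sketch below only illustrates that chaining pattern with placeholder draw calls; it is not the engine's drawForward implementation:

```cpp
#include <vulkan/vulkan.h>

// Hedged sketch: chain several stages as subpasses of one render pass.
// The commented-out helpers (drawMeshLighting, toneMapping, ...) are placeholders.
void recordForwardFrame(VkCommandBuffer cmd, const VkRenderPassBeginInfo& render_pass_begin_info)
{
    vkCmdBeginRenderPass(cmd, &render_pass_begin_info, VK_SUBPASS_CONTENTS_INLINE);

    // subpass 0: opaque meshes with forward lighting, then the skybox
    // drawMeshLighting(cmd);
    // drawSkybox(cmd);

    vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);
    // next subpass: tone mapping reads the lit image as an input attachment
    // toneMapping(cmd);

    vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);
    // next subpass: color grading, and so on for the remaining stages
    // colorGrading(cmd);

    vkCmdEndRenderPass(cmd);
}
```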
6 deferredRender
The deferred rendering code is almost identical to the forward path; the only difference is that it ends by calling the main camera pass's draw(...) instead:
```cpp
void RenderPipeline::deferredRender(std::shared_ptr<RHI> rhi, std::shared_ptr<RenderResourceBase> render_resource)
{
    VulkanRHI*      vulkan_rhi      = static_cast<VulkanRHI*>(rhi.get());
    RenderResource* vulkan_resource = static_cast<RenderResource*>(render_resource.get());

    vulkan_resource->resetRingBufferOffset(vulkan_rhi->m_current_frame_index);

    vulkan_rhi->waitForFences();

    vulkan_rhi->resetCommandPool();

    bool recreate_swapchain =
        vulkan_rhi->prepareBeforePass(std::bind(&RenderPipeline::passUpdateAfterRecreateSwapchain, this));
    if (recreate_swapchain)
    {
        return;
    }

    static_cast<DirectionalLightShadowPass*>(m_directional_light_pass.get())->draw();

    static_cast<PointLightShadowPass*>(m_point_light_shadow_pass.get())->draw();

    ColorGradingPass& color_grading_pass = *(static_cast<ColorGradingPass*>(m_color_grading_pass.get()));
    FXAAPass&         fxaa_pass          = *(static_cast<FXAAPass*>(m_fxaa_pass.get()));
    ToneMappingPass&  tone_mapping_pass  = *(static_cast<ToneMappingPass*>(m_tone_mapping_pass.get()));
    UIPass&           ui_pass            = *(static_cast<UIPass*>(m_ui_pass.get()));
    CombineUIPass&    combine_ui_pass    = *(static_cast<CombineUIPass*>(m_combine_ui_pass.get()));

    static_cast<MainCameraPass*>(m_main_camera_pass.get())
        ->draw(color_grading_pass,
               fxaa_pass,
               tone_mapping_pass,
               ui_pass,
               combine_ui_pass,
               vulkan_rhi->m_current_swapchain_image_index);

    vulkan_rhi->submitRendering(std::bind(&RenderPipeline::passUpdateAfterRecreateSwapchain, this));
}
```
The main camera pass's draw(...) executes, in order (a short sketch of the G-buffer attachments used by the first two stages follows the list):
- drawMeshGbuffer()
- drawDeferredLighting()
- drawBillboardParticle() (not implemented)
- tone_mapping_pass.draw()
- color_grading_pass.draw()
- if (m_enable_fxaa) fxaa_pass.draw()
- drawAxis()
- ui_pass.draw() (not implemented)
- combine_ui_pass.draw() (not implemented)
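Compared with the forward list, only the first two stages differ: drawMeshGbuffer() writes the geometry attributes (normal, albedo, material parameters, depth) into G-buffer attachments, and drawDeferredLighting() then reads them back to shade each pixel. A hedged sketch of what such G-buffer attachment descriptions could look like in Vulkan; the formats and the two-attachment layout are assumptions for illustration, not necessarily the engine's actual G-buffer layout:

```cpp
#include <array>
#include <vulkan/vulkan.h>

// Sketch: color attachments for a G-buffer consumed within the same render pass.
std::array<VkAttachmentDescription, 2> makeGBufferAttachments()
{
    VkAttachmentDescription gbuffer_normal {};
    gbuffer_normal.format        = VK_FORMAT_R8G8B8A8_UNORM;        // assumed packing
    gbuffer_normal.samples       = VK_SAMPLE_COUNT_1_BIT;
    gbuffer_normal.loadOp        = VK_ATTACHMENT_LOAD_OP_CLEAR;
    gbuffer_normal.storeOp       = VK_ATTACHMENT_STORE_OP_DONT_CARE; // only read by the lighting subpass
    gbuffer_normal.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    gbuffer_normal.finalLayout   = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

    VkAttachmentDescription gbuffer_albedo = gbuffer_normal;
    gbuffer_albedo.format = VK_FORMAT_R8G8B8A8_SRGB;                 // assumed packing

    return {gbuffer_normal, gbuffer_albedo};
}
```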