40th Computer Graphics International 2023: Shanghai, China - Part II
- Bin Sheng, Lei Bi, Jinman Kim, Nadia Magnenat-Thalmann, Daniel Thalmann:
Advances in Computer Graphics - 40th Computer Graphics International Conference, CGI 2023, Shanghai, China, August 28 - September 1, 2023, Proceedings, Part II. Lecture Notes in Computer Science 14496, Springer 2024, ISBN 978-3-031-50071-8
Reconstruction
- Ali Fakih, Nicola Wilser, Yvan Maillot, Frederic Cordier:
Single-View 3D Reconstruction of Curves. 3-14
- Le Ma, Zhihao Ma, Weiliang Meng, Shibiao Xu, Xiaopeng Zhang:
Audio-Driven Lips and Expression on 3D Human Face. 15-26
- Xiaoyu Chai, Jun Chen, Dongshu Xu, Hongdou Yao, Zheng Wang, Chia-Wen Lin:
Multi-image 3D Face Reconstruction via an Adaptive Aggregation Network. 27-39
- Guiqing Li, Chenhao Yao, Huiqian Zhang, Juncheng Zeng, Yongwei Nie, Chuhua Xian:
METRO-X: Combining Vertex and Parameter Regressions for Recovering 3D Human Meshes with Full Motions. 40-52
- Ritesh Sharma, Eric Bier, Lester Nelson, Mahabir Bhandari, Niraj Kunwar:
Automatic Digitization and Orientation of Scanned Mesh Data for Floor Plan and 3D Model Generation. 53-69
- Xiaoyu Chai, Jun Chen, Dongshu Xu, Hongdou Yao:
An Adaptive-Guidance GAN for Accurate Face Reenactment. 70-82
- Grina Wiem, Douik Ali:
Reconstructing Neutral Face Expressions with Disentangled Variational Autoencoder. 83-94
- Ciliang Sun, Yuqi Li, Jiabao Li, Chong Wang, Xinmiao Dai:
CaSE-NeRF: Camera Settings Editing of Neural Radiance Fields. 95-107
- Yongwei Miao, Haipeng Wang, Ran Fan, Fuchang Liu:
A Submodular-Based Autonomous Exploration for Multi-Room Indoor Scenes Reconstruction. 108-119
- Jin Chen, Jun Chen, Xiaofen Wang, Dongshu Xu, Chao Liang, Zhen Han:
Learning Degradation for Real-World Face Super-Resolution. 120-131
Rendering and Animation
- Hairong Gu, Jiale Wang, Yanhui Hu, Jixiang Wang, Lishun Sun, Mostak Ahamed:
Visualization of Irregular Tree Canopy Centerline Data from a Depth Camera Based on an Optimized Spatial Straight-Line Fitting. 135-146
- Dan Mei, Xiaogang Wang:
Finernet: A Coarse-to-Fine Approach to Learning High-Quality Implicit Surface Reconstruction. 147-158
- Anning Huang, Zhicheng Liu, Qian Zhang, Feng Tian, Jinyuan Jia:
Fine-Grained Web3D Culling-Transmitting-Rendering Pipeline. 159-170
- Xu Lu, Shuo Xiong, Tao Wu, Ke Zhang, Yue Zhang, Yachang Wang, Qilong Kou:
The Chemical Engine Algorithm and Realization Based on Unreal Engine-4. 171-186
- Geonu Noh, Hajin Choi, Bochang Moon:
Enhanced Direct Lighting Using Visibility-Aware Light Sampling. 187-198
- Dongmei Ma, Juan Cao, Zhonggui Chen:
Point Cloud Rendering via Multi-plane NeRF. 199-210
- Shuzhan Yang, Han Su:
Fast Geometric Sampling for Phong-Like Reflection. 211-222
- Haitang Zhang, Junchao Ma, Zixia Qiu, Junmei Yao, Mustafa A. Al Sibahee, Zaid Ameen Abduljabbar, Vincent Omollo Nyangaresi:
Multi-GPU Parallel Pipeline Rendering with Splitting Frame. 223-235
- Dawar Khan, Sheng Gui, Zhanglin Cheng:
Molecular Surface Mesh Smoothing with Subdivision. 236-248
- Jianping Su, Ning Xie, Xin Lou:
Photorealistic Aquatic Plants Rendering with Cellular Structure. 249-260
Colors, Painting and Layout
- Bing Yu, Wangyidai Lv, Dongjin Huang, Youdong Ding:
Staged Transformer Network with Color Harmonization for Image Outpainting. 263-274
- Keyue Fan, Shiguang Liu:
SemiRefiner: Learning to Refine Semi-realistic Paintings. 275-286
- Zhanyi Huang, Wenqing Zhao, Tangsheng Guo, Jin Huang, Ping Li, Bin Sheng:
MagicMirror: A 3-D Real-Time Virtual Try-On System Through Cloth Simulation. 287-299
- Xingquan Cai, Qingtao Lu, Jiali Yao, Yao Liu, Yan Hu:
An Ancient Murals Inpainting Method Based on Bidirectional Feature Adaptation and Adversarial Generative Networks. 300-311
- Xingquan Cai, Sichen Jia, Jiali Yao, Yijie Wu, Haiyan Sun:
An Image Extraction Method for Traditional Dress Pattern Line Drawings Based on Improved CycleGAN. 312-323
- Alexis Benamira, Sachin Shah, Sumanta N. Pattanaik:
Parametrization of Measured BRDF for Flexible Material Editing. 324-336
- Ruhan He, Xuelian Yang, Jin Huang:
cGAN-Based Garment Line Draft Colorization Using a Garment-Line Dataset. 337-348
- Xiaying Liu, Ping Yang, Alexandru C. Telea, Jirí Kosinka, Zizhao Wu:
PCCNet: A Few-Shot Patch-Wise Contrastive Colorization Network. 349-361
- Jiaze He, Wenqing Zhao, Ziruo Li, Jin Huang, Ping Li, Lei Zhu, Bin Sheng, Subrota Kumar Mondal:
Reference-Based Line Drawing Colorization Through Diffusion Model. 362-372
- Yan Wan, Yue Wang, Li Yao:
Research of Virtual Try-On Technology Based on Two-Dimensional Image. 373-384
Synthesis and Generation
- Marco Mameli, Emanuele Balloni, Adriano Mancini, Emanuele Frontoni, Primo Zingaretti:
Investigation on the Encoder-Decoder Application for Mesh Generation. 387-400
- Sijia Yang, Yun Zhou:
Arbitrary Style Transfer with Style Enhancement and Structure Retention. 401-413
- Bo Han, Yixuan Shen, Yitong Fu:
Zero3D: Semantic-Driven 3D Shape Generation for Zero-Shot Learning. 414-426
- Andreea Pocol, Lesley Istead, Sherman Siu, Sabrina Mokhtari, Sara Kodeiri:
Seeing is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media. 427-440
- Yuantian Huang, Satoshi Iizuka, Kazuhiro Fukui:
Diffusion-Based Semantic Image Synthesis from Sparse Layouts. 441-454
- Jia Chen, Yanfang Wen, Jin Huang, Xinrong Hu, Tao Peng:
FoldGEN: Multimodal Transformer for Garment Sketch-to-Photo Generation. 455-466
- Ruien Shen, Chi Weng Ma, Deli Dong, Shuangjiu Xiao:
Light Accumulation Map for Natural Foliage Scene Generation. 467-478
- Bailin Yang, Zheng Chen, Frederick W. B. Li, Haoqiang Sun, Jianlu Cai:
DrawGAN: Multi-view Generative Model Inspired by the Artist's Drawing Method. 479-490
- Takumi Kitamura, Naoya Iwamoto, Hiroshi Kawasaki, Diego Thomas:
A Two-Step Approach for Interactive Animatable Avatars. 491-509