37th SIGGRAPH 2010: Los Angeles, California, USA - Posters
- Cindy Grimm: International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2010, Los Angeles, California, USA, July 26-30, 2010, Poster Proceedings. ACM 2010, ISBN 978-1-4503-0393-4
Animation
- Junfeng Yao, Xiaobiao Xie, Ming Zhang, Hui Zhang, Andy Ju An Wang: A 3-D flowering simulation based on botany characteristics and random generation algorithm. 1:1
- Shunsuke Matsuyama, Hironori Mitake, Shoichi Hasegawa: A development environment for designing interactive characters with sensorimotor models. 2:1
- Aria Shahingohar, Roy Eagleson: A framework for GPU accelerated needle insertion simulation using meshfree methods. 3:1
- Hirofumi Suda, Kentaro Yamanaka, Shigeo Morishima: A skinning technique considering the shape of human skeletons. 4:1
- Michael Berger, Gregor Hofer, Hiroshi Shimodaira: Carnival: a modular framework for automated facial animation. 5:1
- Denis Kravtsov, Oleg Fryazinov, Valery Adzhiev, Alexander A. Pasko, Peter Comninos: Controlled metamorphosis of animated meshes using polygonal-functional hybrids. 6:1
- Hiroaki Gohara, Shiori Sugimoto, Shigeo Morishima: Data driven in-betweening for hand drawn rotating face. 7:1
- Michael J. Gourlay: Fluid-body simulations using vortex particle operations. 8:1
- Dung A. Nguyen, Zhaoyang Wang: High speed 3D shape and motion capturing system. 9:1
- Meredith McLendon, Ann McNamara, Tim McLaughlin, Ravindra Dwivedi: Lions and tigers and bears: investigating cues for expressive creature motion. 10:1
- Gregor Hofer, Korin Richmond, Michael Berger: Lip synchronization by acoustic inversion. 11:1
- Sriranjan Rasakatla, K. Madhava Krishna, Bipin Indurkhya: "Mod-Leg" a modular legged robotic system. 12:1
- Adriana Schulz, Marcelo Cicconet, Luiz Velho: Motion scoring. 13:1
- Takeshi Miura, Kazutaka Mitobe, Takaaki Kaiga, Takashi Yukawa, Toshiyuki Taniguchi, Hideo Tamamoto, Noboru Yoshimura: Multi-level segmentation of dance motion by piecewise regression. 14:1
- Shoji Kunitomo, Shinsuke Nakamura, Shigeo Morishima: Optimization of cloth simulation parameters by considering static and dynamic features. 15:1
- Fiona M. Rivera, Phil L. Watten, Patrick Holroyd, Felix Beacher, Katerina Mania, Hugo D. Critchley: Real-time compositing framework for interactive stereo fMRI displays. 16:1
- Cheng-Te Li, Hsun-Ping Hsieh, Tsung-Ting Kuo, Shou-De Lin: SocioCrowd: a social-network-based framework for crowd simulation. 17:1
- Nobuhiko Mukai, Kentaro Ito, Masashi Nakagawa, Makoto Kosugi: Spinnability simulation of viscoelastic fluid. 18:1
- Daniele Federico, Damien Fagnou, Tom Reed: Warping the space around an animated object. 19:1
Art
- Paul D. Solt: Artwork evolution. 20:1
- Young-Mi Kim, Jong-Soo Choi: Breathe brush. 21:1
- Daniel Tauber: Digital writing ductus: a visual representation of individual writing styles. 22:1
- Young-Mi Kim, Jong-Soo Choi: Ink-and-wash painting oriental cymbidium drawn with the tip of the fingers. 23:1
- Özge Samanci, Anuj Tewari: GPS comics: seeing thru walls. 24:1
- Cem Sina Cetin: Musophobia. 25:1
- Tomoko Hashida, Yasuaki Kakehi, Takeshi Naemura: Photochromic canvas drawing with patterned light. 26:1
- Russell J. Deaton: Self-assembled art. 27:1
- Shiho Hirayama, Yasuaki Kakehi: Shaboned display: an interactive substantial display using soap bubbles. 28:1
- Akira Nakayasu, Kiyoshi Tomimatsu: SMA motion display: plant. 29:1
- Junfeng Yao, Xiaobiao Xie, Fengchun Lin, Xufa Ji, Xiaoyan Lin, Andy Ju An Wang: The online customer-built WEB3D middleware system for arts and crafts. 30:1
- Yuka Kubo, Hiroyuki Shindo, Koichi Hirota: The Orikao Method: 3D scene reconstruction from Japanese beauty portraits. 31:1
Design
- Robert B. Trempe Jr.: 24X7@PHL: Codify. 32:1
- Amy Martin, Wendy Ju: Bloom: an interactive, organic visualization of starred emails. 33:1
- Yuki Igarashi: DECO: a designing editor for line stone decoration. 34:1
- Yuki Igarashi, Hiromasa Suzuki: Designing a new toy to fit other toy pieces: a shape-matching toy design based on existing building blocks. 35:1
- Koh Sueda, Kazushi Kotani, Jun Rekimoto: Easy-Tagging Cam: using social tagging to augment memory. 36:1
- Hiroki Yamada: Floral melody: flower arrangement as music interface. 37:1
- Robert B. Trempe Jr.: How would you like to live? 38:1
- Ming Tang, Jonathon Anderson: Mathmorph. 39:1
- Rebecca Findlay: Rebecca Findlay's Pixel Pusher: poster abstract. 40:1
- Sophia Sobers: Reactive architecture. 41:1
- Kumiko Kushiyama, Tetsuaki Baba, Kouki Doi, Shinji Sasada: Temperature design display device to use Peltier elements and liquid crystal thermograph sheet: "Thermo-Pict neo". 42:1
- Hiroki Yamada: Tiny Dreamy Stories: interactive paper book capable of changing the storylines. 43:1
- Jieun Kim, Carole Bouchard, Jean-François Omhover, Améziane Aoussat: TRENDS: a content-based information retrieval system for designers. 44:1
- Mary Huang: TYPEFACE. 45:1
Games
- Hwan-Soo Yoo, Seong-Whan Kim: Agritainment: 3D collaborative space for training agricultural experience with entertainment elements. 46:1
- Liliana Vega Zazueta, Griselda Ledezma, Anayeli Hidalgo, Eduardo Ruiz, Omar Pinto, Ricardo Rafael Quintero Meza, Leopoldo Zenaido Zepeda Sánchez: Basic elements on game design for interactive museum exhibitions. 47:1
- David Bartle, Samuel Rossoff, David Whittaker, Bruce Gooch, Kim Kerns, Jenny MacSween: Cognitive games as therapy for children with FAS. 48:1
- James R. Geraci, Erek R. Speed: Improving program productivity, performance and portability through a high level language for graphics and game development. 49:1
- Madhuri Koushik, Eun Jung Lee, Laura Pieroni, Emily Sun, Chun-Wei Yeh: iPad mini-games connected to an educational social networking website. 50:1
- Tetsuaki Baba, Kumiko Kushiyama, Kouki Doi: ThermoGame: video game interaction system that offers dynamic temperature sensation to users. 51:1
Hardware
- Kotaro Takahashi, Tomohito Yamamoto: 3D audio-visual display using mobile devices. 52:1
- Mike Roberts, Mario Costa Sousa, Joseph Ross Mitchell: A work-efficient GPU algorithm for level set segmentation. 53:1
- Budirijanto Purnomo, Norman Rubin, Michael Houston: ATI Stream Profiler: a tool to optimize an OpenCL kernel on ATI Radeon GPUs. 54:1
- Yoshiharu Momonoi, Masahiro Sekine, Tatsuo Saishu, Yasunobu Yamauchi: Birds-eye view ray scan system for flatbed autostereoscopic displays. 55:1
- William C. Thibault: Camera-based calibration for scalable immersive rendering. 56:1
- Sriranjan Rasakatla, Kashyap Kompella, Krishna Koundinya: Car tracking and vibration test rig using Neo-Freerunner. 57:1
- Kazuhisa Yanaka, Akifumi Momose, Masahiko Yoda: Chroma keying between integral photography images. 58:1
- Douglas Lanman, Matthew Hirsch, Yunhee Kim, Ramesh Raskar: Content-adaptive parallax barriers for automultiscopic 3D display. 59:1
- Kip Haynes, Jacquelyn Ford Morie, Eric Chance: I want my virtual friends to be life size!: adapting Second Life to multi-screen projected environments. 60:1
- Shunsuke Yoshida, Sumio Yano, Hiroshi Ando: Implementation of a tabletop 3D display based on light field reproduction. 61:1
- Sriranjan Rasakatla: Low cost 3D perception sensors. 62:1
- Hideaki Nii, James Keng Soon Teh, Adrian David Cheok: Moving Slit Light Field Display. 63:1
Image/Video Processing
- Ishtiaq Rasool Khan: A backward compatible HDR encoding scheme. 64:1
- Chun-Te Wu, Wei-Hao Huang, Chih-Hao Liu, Wei-Jia Huang, Kai-Che Liu, Ludovic J. Angot: A real-time video 2D-to-3D with the bilateral grid. 65:1
- Masaru Tsuchida, Toru Takahashi, Koichi Ito, Takahito Kawanishi, Junji Yamato, Takafumi Aoki: A stereo one-shot multi-band camera system for accurate color reproduction. 66:1
- Christian Lipski, Christian Linz, Marcus A. Magnor: Belief propagation optical flow for high-resolution image morphing. 67:1
- Stavros Papastavrou, Demetris Hadjiachilleos, Georgios Stylianou: Blind-folded recognition of bank notes on the mobile phone. 68:1
- Xuan Dong, Yi Pang, Jiangtao Wen: Fast efficient algorithm for enhancement of low lighting video. 69:1
- Benoit Lange, Nancy Rodriguez: LOD+: augmenting LOD with skeletons. 70:1
- Shiro Ozawa, Takao Abe, Noriyuki Naruto, Toshihiro Nakae, Makoto Nakamura, Naoya Miyashita, Mitsunori Hirano, Kazuhiko Tanaka: Marker-less object recognition for surface computing. 71:1
- Erich Marth, Guillermo Marcus: Parallelization of the x264 encoder using OpenCL. 72:1
- Eriko Kimura, Naoki Kawai, Kazunori Miyata: Practical 3D decoration on flat media with anisotropic reflection. 73:1
- Matthew Trentacoste, Rafal Mantiuk, Wolfgang Heidrich: Quality-preserving image downsizing. 74:1
- Zhengguo Li, Susanto Rahardja, Shiqian Wu, Zijian Zhu, Shoulie Xie: Robust movement detection based on a new similarity index for HDR imaging. 75:1
- Thang M. Hoang: Simple gamma correction for fringe projection profilometry system. 76:1
- Sun-Young Lee, Jong-Chul Yoon, In-Kwon Lee: Temporally coherent video matting. 77:1
- Jean-Charles Bazin, Soonkee Chung, Roger Blanco Ribera, Quang Pham, Inso Kweon: Virtual face sculpting. 78:1
Interaction
- Koki Nagano, Takeru Utsugi, Mika Hirano, Takeo Hamada, Akihiko Shirai, Masayuki Nakajima: A new "multiplex content" displaying system compatible with current 3D projection technology. 79:1
- Seiya Matsuda, Tomohito Yamamoto: A web system for creating and sharing 3D auditory contents. 80:1
- Yuki Hirobe, Shinobu Kuroki, Katsunari Sato, Takumi Yoshida, Kouta Minamizawa, Susumu Tachi: Colorful Touch Palette. 81:1
- Yoshihiro Kuroda, Hirotoshi Ashida, Masataka Imura, Yoshiyuki Kagiyama, Osamu Oshiro: Force reflecting porous media with dynamic elasticity change. 82:1
- Michal Lech, Bozena Kostek: Gesture controlled interactive whiteboard based on SVM and fuzzy logic. 83:1
- Stephen David Beck, Shantenu Jha, Brygg Ullmer, Chris Branton, Sharath Maddineni: GRENDL: grid enabled distribution and control for Laptop Orchestras. 84:1
- Marcelo Cicconet, Luiz Velho, Paulo Cezar Pinto Carvalho, Giordano Cabral: Guitar-leading band. 85:1
- Ji-Hye An, Su-Jin Lee: How people tend to organize sensory information into unified wholes in haptic phone?: focusing on cross modality interaction. 86:1
- Kai Uwe Barthel, Sebastian Müller, David Backstein, Dirk Neumann, Klaus Jung: Image retrieval using collaborative filtering and visual navigation. 87:1
- Paulo F. U. Gotardo, Alan Price: Integrated space: authoring in an immersive environment with 3D body tracking. 88:1
- Jae-Hee Park, Tack-Don Han: LLP+: multi-touch sensing using cross plane infrared laser light for interactive based displays. 89:1
- Norbert Gyorbíró, Henry Larkin, Michael Cohen: Long-term memory retention and recall of collected personal memories. 90:1
- Chun-Yu Tsai, Hung-Jung Lin, Tzu-Hao Kuo, Kai-Yin Cheng, I-Chao Shen, Bing-Yu Chen, Rung-Huei Liang: MusicSpace: you "play" the music. 91:1
- Anusha I. Withana, Rika Matsui, Maki Sugimoto, Kentaro Harada, Masa Inakage: Narrative image composition using objective and subjective tagging. 92:1
- Toshihiro Nakae, Shiro Ozawa, Naoya Miyashita: O-Link: augmented object system for intergenerational communication. 93:1
- Frank Steinicke, Gerd Bruder, Scott Kuhl: Perception of perspective distortions of man-made virtual objects. 94:1
- Marcelo Cicconet, Paulo Cezar Pinto Carvalho: Playing the QWERTY keyboard. 95:1
- Dane M. Coffey, Daniel F. Keefe: Shadow WIM: a multi-touch, dynamic world-in-miniature interface for exploring biomedical data. 96:1
- Kristian Gohlke, Michael Hlatky, Jörn Loviscach: TapShot: screenshot snippets as GUI shortcuts. 97:1
- Yoichi Ochiai: The visible electricity device: visible breadboard. 98:1
- Kumiko Kushiyama, Tetsuaki Baba, Kouki Doi, Shinji Sasada: Thermal design display device to use the thermal tactile illusions: "Thermo-Paradox". 99:1
- Pierre Rouanet, Pierre-Yves Oudeyer, David Filliat: Using mediator objects to easily and robustly teach visual objects to a robot. 100:1
- Patricia Codyre: Using innovative ehealth interventions in a local health care context. 101:1
Modeling
- Tanasai Sucontphunt, Zhigang Deng, Ulrich Neumann: 3D human face identity transfer using deformation gradient. 102:1
- Mohammed Yousef, Ahmed Hashem, Hassan Saad, Amr Gamal, Osama Galal, Khaled F. Hussain: A scripting language for Digital Content Creation applications. 103:1
- Wael Abdelrahman, Sara Farag: Automated 3D mesh segmentation using 2D footprints. 104:1
- Olusola Olumide Aina, Jian-Jun Zhang: Automatic muscle generation for physically-based facial animation. 105:1
- Ergun Akleman, Jianer Chen, Yen-Lin Chen, Qing Xing: Cyclic twill-woven objects. 106:1
- Kazuhiko Yamamoto, Toki Takeda, Ryoichi Ando, Syota Kawano: Darwin's Lake: sketch-based creature creation system enables users to collaborate with contents designers. 107:1
- Jae-Pil Heo, Duksu Kim, Joon-Kyung Seong, Jeong-Mo Hong, Min Tang, Sung-Eui Yoon: FASTCD: fracturing-aware stable collision detection. 108:1
- Nuno Gonçalves, Ana Catarina Nogueira: Faster accurate reflections through quadric mirrors. 109:1
- Yasuyuki Tomita, Reiji Tsuruno: Motion texture animation of water surface. 110:1
- Sara Farag, Wael Abdelrahman: Physical modeling of heterogeneous embedded deformable object deformation. 111:1
- James Gregson, Zheng Wang: Rapid surface and volume mesh generation from depth-augmented visual hulls. 112:1
- Craig W. Reynolds: Using interactive evolution to discover camouflage patterns. 113:1
- Ku-Jin Kim, Jung-Eun Lee, Nakhoon Baek: Voronoi diagram computation for protein molecules using graphics hardware. 114:1
Rendering
- Pedro Santos, Thomas Gierlinger, Rafael Huff, Martin Ritz, André Stork: A full HDR pipeline from acquisition to projection. 115:1
- Sudarshanram Shetty, Mike Bailey: A physical rendering model for human teeth. 116:1
- Shailen Agrawal, Subodh Kumar: Approximate ambient occlusion for dynamic scenes using the GPU. 117:1
- Lesley Northam, Joe Istead, Craig S. Kaplan: Artistic sketching with a painterly rendering algorithm. 118:1
- Borom Tunwattanapong, Abhijeet Ghosh, Paul E. Debevec: Combining spherical harmonics and point-source illumination for efficient image-based relighting. 119:1
- Kyoji Matsushima, Masaki Nakamura, Sumio Nakahara, Ichiroh Kanaya: Computational holography: the real 3-D by fast wave-field rendering in ultra high resolution. 120:1
- Ole Gulbrandsen: Controlling the dark side in toon shading. 121:1
- Tomohito Hattori, Hiroyuki Kubo, Shigeo Morishima: Curvature depended local illumination approximation of ambient occlusion. 122:1
- Jeong-ho Ahn, Jong-Chul Yoon, In-Kwon Lee: Depth-based Anisotropic Kuwahara Filtering. 123:1
- Andrew Cox, Jan Kautz: Dynamic ambient occlusion from volumetric proxies. 124:1-124:2
- Roger V. Hoang, Steven Koepnick, Joseph D. Mahsman, Matthew R. Sgambati, Cody J. White, Daniel S. Coming: Exploring global illumination for virtual reality. 125:1
- Xuehui Liu, Xiaoguang Hao, Meng-Cheng Huang, Fang Liu, Mingquan Zhou, Hanqiu Sun, Enhua Wu: Fast soft shadow by depth peeling. 126:1
- Mohammad Obaid, Ramakrishnan Mukundan, Mark Billinghurst: Generating and rendering expressive caricatures. 127:1
- Patrick Cozzi, Frank Stoner: GPU ray casting of virtual globes. 128:1
- Christian Linz, Christian Lipski, Marcus A. Magnor: Multi-image interpolation based on graph-cuts and symmetric optical flow. 129:1
- Lei Ma, Shuangjiu Xiao, Xubo Yang: Multi-interfaces based refractive rendering. 130:1
- Daniel Makoto Tokunaga, Cléber Gimenez Corrêa, Luiz Ricardo Nakamura, Fátima L. S. Nunes, Romero Tori: Non-photorealistic rendering in stereoscopic 3D visualization. 131:1
- Martin Eisemann, Elmar Eisemann, Hans-Peter Seidel, Marcus A. Magnor: Photo zoom: high resolution from unordered image collections. 132:1
- Mahdi MohammadBagher, Jan Kautz, Nicolas Holzschuch, Cyril Soler: Screen-space Percentage-Closer Soft Shadows. 133:1
- Tom Cuypers, Se Baek Oh, Tom Haber, Philippe Bekaert, Ramesh Raskar: WBSDF for simulating wave effects of light and audio. 134:1
- Benjamin P. DeLillo: WebGLU development library for WebGL. 135:1
- Stefan Elsen: WorldSeed. 136:1
Virtual/Augmented Reality
- Tai-Wei Kan, Chin-Hung Teng: A framework for multifunctional Augmented Reality based on 2D barcodes. 137:1
- Hiroki Nishino: A shape-free, designable 6-DoF marker tracking method. 138:1
- Tej Tadi, Patrick Salamin, Frédéric Vexo, Daniel Thalmann, Olaf Blanke: Brain activity underlying third person and first person perspective training in virtual environments. 139:1
- Takashi Kajinami, Oribe Hayashi, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose: Digital display case: the exhibition system for conveying the background information. 140:1
- Megha Davalath, Mat Sanford, Anton Agana, Ann McNamara, Frederic I. Parke: Evaluating performance in immersive displays. 141:1
- Tae-Joon Kim, Yongyoung Byun, Yongjin Kim, Bochang Moon, Seungyong Lee, Sung-Eui Yoon: HCCMeshes: hierarchical-culling oriented compact meshes. 142:1
- Takuji Narumi, Takashi Kajinami, Tomohiro Tanikawa, Michitaka Hirose: Meta cookie. 143:1
- Tsouknidas Nikolaos, Kiyoshi Tomimatsu: QR-code calibration for mobile augmented reality applications: linking a unique physical location to the digital world. 144:1
- Justin A. Ehrlich: The effect of desktop illumination realism on a user's sense of presence in a virtual learning environment. 145:1
- Woong Choi, Takahiro Fukumori, Kohei Furukawa, Kozaburo Hachimura, Takanobu Nishiura, Keiji Yano: Virtual Yamahoko parade in virtual Kyoto. 146:1
Visualization
- Santi Fort: 2020 3D media: new directions in immersive entertainment. 147:1
- Ippei Takauchi, Yuta Hara, Hiromu Saito, Ryo Asakura, Motofumi Hattori: A mathematical model of deforming manifolds and their visualizations by CG animation. 148:1
- Dilip Banerjee, John Gross, Pradeep Reddy Gaddam, Marc Olano, William Hess, Judith E. Terrill, Terence J. Griffin, John G. Hagedorn, John T. Kelso, Steven G. Satterfield: An integrated interactive visualization and analysis environment to study the impact of fire on building structures. 149:1
- Toshiki Takeuchi, Takuji Narumi, Kunihiro Nishimura, Tomohiro Tanikawa, Michitaka Hirose: Forecast and visualization of future expenditure with logging and analyzing receipts. 150:1
- Stephen D. Laycock, Matthew B. Stocks, Steven Hayward: Navigation and exploration of large data-sets using a haptic feedback device. 151:1
- Shantanu H. Joshi, Ian Bowman, Robin Jennings, David Hasson, Zhizhong Liu, Arthur W. Toga, John D. Van Horn: Visual mining of neuro-metaspaces. 152:1
- Kunihiro Nishimura, Jun'ichi Nakano, Tomohiro Tanikawa, Michitaka Hirose: Visualization of multiple people's lifelog: collecting "Ant's-eye view" to generate "Bird's-eye view". 153:1
- Pedro Cruz, Penousal Machado: Visualizing empires decline. 154:1
- Ian Wojtowicz: Visualizing the history of ideas. 155:1