CIG 2011: Seoul, South Korea
- Sung-Bae Cho, Simon M. Lucas, Philip Hingston: 2011 IEEE Conference on Computational Intelligence and Games, CIG 2011, Seoul, South Korea, August 31 - September 3, 2011. IEEE 2011, ISBN 978-1-4577-0010-1
- Greg Ashe, Nathan R. Sturtevant, Jong-Hwan Kim: Keynotes: Data mining and machine learning applications in MMOs.
- Kevin Kim: NCsoft special session: NCsoft, connecting the world with entertainment.
- Robert G. Reynolds, Daniel A. Ashlock, Georgios N. Yannakakis, Julian Togelius, Mike Preuss: Tutorials: Cultural Algorithms: Incorporating social intelligence into virtual worlds.
- Garrison W. Greenwood: Enhanced cooperation in the N-person iterated snowdrift game through tag mediation. 1-8
- Bruce Kwong-Bun Tong, Chun Man Ma, Chi Wan Sung: A Monte-Carlo approach for the endgame of Ms. Pac-Man. 9-15
- Cameron McGuinness, Daniel A. Ashlock: Incorporating required structure into tiles. 16-23
- Nirvana S. Antonio, Cicero Ferreira Fernandes Costa Filho, Marly Guimarães Fernandes Costa, Rafael Padilla: Optimization of an evaluation function of the 4-sided dominoes game using a genetic algorithm. 24-30
- Ruck Thawonmas, Yoshinori Tani: Frame selection using iterative grammatical evolution for automatic comic generation from game logs. 31-38
- Nozomu Ikehata, Takeshi Ito: Monte-Carlo tree search in Ms. Pac-Man. 39-46
- Jorge Muñoz, Georgios N. Yannakakis, Fiona Bríd Mulvey, Dan Witzner Hansen, Germán Gutiérrez, Araceli Sanchis: Towards gaze-controlled platform games. 47-54
- Nick Nygren, Jörg Denzinger, Ben Stephenson, John Aycock: User-preference-based automated level generation for platform games. 55-62
- Daniel A. Ashlock, Christopher Kuusela, Nicholas Rogers: Hormonal systems for prisoners dilemma agents. 63-70
- Antonios Liapis, Georgios N. Yannakakis, Julian Togelius: Neuroevolutionary constrained optimization for content creation. 71-78
- Reinaldo Uribe, Fernando Lozano, Katsunari Shibata, Charles Anderson: Discount and speed/execution tradeoffs in Markov Decision Process games. 79-86
- Daniel Whitehouse, Edward Jack Powley, Peter I. Cowling: Determinization and information set Monte Carlo Tree Search for the card game Dou Di Zhu. 87-94
- Arthur Carvalho, Renato Oliveira: Reinforcement learning for the soccer dribbling task. 95-101
- Jacob Schrum, Risto Miikkulainen: Evolving multimodal networks for multitask games. 102-109
- Mark H. M. Winands, Yngvi Björnsson: αβ-based play-outs in Monte-Carlo Tree Search. 110-117
- Atif M. Alhejali, Simon M. Lucas: Using a training camp with Genetic Programming to evolve Ms Pac-Man agents. 118-125
- Noor Shaker, Georgios N. Yannakakis, Julian Togelius: Feature analysis for modeling game content quality. 126-133
- Faisal Alvi, Moataz A. Ahmed: Complexity analysis and playing strategies for Ludo and its variant race games. 134-141
- Luís Peña, Sascha Ossowski, José María Peña Sánchez, José Ángel Sánchez: EEP - A lightweight emotional model: Application to RPG video game characters. 142-149
- Li Zhang: Developments in metaphor inference and context-aware affect sensing. 150-157
- J. (Pim) A. M. Nijssen, Mark H. M. Winands: Monte-Carlo Tree Search for the game of Scotland Yard. 158-165
- Gideon Avigad, Erella Eisenstadt, Miri Weiss-Cohen: Optimal strategies for multi objective games and their search by evolutionary multi objective optimization. 166-173
- Tobias Mahlmann, Julian Togelius, Georgios N. Yannakakis: Modelling and evaluation of complex scenarios with the Strategy Game Description Language. 174-181
- Samuel Sarjant, Bernhard Pfahringer, Kurt Driessens, Tony Smith: Using the online cross-entropy method to learn relational policies for playing different games. 182-189
- Gabriel Synnaeve, Pierre Bessière: A Bayesian model for RTS units control applied to StarCraft. 190-196
- Giel van Lankveld, Pieter Spronck, H. Jaap van den Herik, Arnoud Arntz: Games as personality profiling tools. 197-202
- Tim Uusitalo, Stefan J. Johansson: A reactive multi-agent approach to car driving using artificial potential fields. 203-210
- Byung-Chull Bae, Yun-Gyung Cheong, Robert Michael Young: Automated story generation with multiple internal focalization. 211-218
- Alexandros Agapitos, Michael O'Neill, Anthony Brabazon, Theodoros Theodoridis: Learning environment models in car racing using stateful Genetic Programming. 219-226
- Luigi Cardamone, Antonio Caiazzo, Daniele Loiacono, Pier Luca Lanzi: Transfer of driving behaviors across different racing games. 227-234
- Ping-Chiang Chou, Hassen Doghmen, Chang-Shing Lee, Fabien Teytaud, Olivier Teytaud, Hui-Min Wang, Mei-Hui Wang, Li-Wen Wu, Shi-Jim Yen: Computational and human intelligence in blind Go. 235-242
- Fabien Teytaud, Olivier Teytaud: Lemmas on partial observation, with application to phantom games. 243-249
- Bernard Helmstetter, Chang-Shing Lee, Fabien Teytaud, Olivier Teytaud, Mei-Hui Wang, Shi-Jim Yen: Random positions in Go. 250-257
- Padmini Rajagopalan, Aditya Rawal, Risto Miikkulainen, Marc A. Wiseman, Kay E. Holekamp: The role of reward structure, coordination mechanism and net return in the evolution of cooperation. 258-265
- Luca Galli, Daniele Loiacono, Luigi Cardamone, Pier Luca Lanzi: A cheating detection framework for Unreal Tournament III: A machine learning approach. 266-272
- Diego Perez Liebana, Miguel Nicolau, Michael O'Neill, Anthony Brabazon: Reactiveness and navigation in computer games: Different needs, different approaches. 273-280
- Gabriel Synnaeve, Pierre Bessière: A Bayesian model for opening prediction in RTS games with application to StarCraft. 281-288
- Michael Cook, Simon Colton: Multi-faceted evolution of simple arcade games. 289-296
- Ken Hartsook, Alexander Zook, Sauvik Das, Mark O. Riedl: Toward supporting stories with procedurally generated game worlds. 297-304
- David Robles, Philipp Rohlfshagen, Simon M. Lucas: Learning non-random moves for playing Othello: Improving Monte Carlo Tree Search. 305-312
- Hisao Ishibuchi, Keisuke Takahashi, Kouichirou Hoshino, Junpei Maeda, Yusuke Nojima: Effects of configuration of agents with different strategy representations on the evolution of cooperative behavior in a spatial IPD game. 313-320
- Aisha A. Abdullahi, Simon M. Lucas: Temporal difference learning with interpolated n-tuples: Initial results from a simulated car racing environment. 321-328
- Jacob Schrum, Igor Karpov, Risto Miikkulainen: UT2: Human-like behavior via neuroevolution of combat behavior and replay of human traces. 329-336
- Mike Preuss, Jan Quadflieg, Günter Rudolph: TORCS sensor noise removal and multi-objective track selection for driving style adaptation. 337-344
- Ruck Thawonmas, Seiji Murakami, Takumi Sato: Believable judge bot that learns to select tactics and judge opponents. 345-349
- Zafeirios Fountas, David Gamez, Andreas Fidjeland: A neuronal global workspace for human-like control of a computer game character. 350-357