- Autoencoder-augmented neuroevolution for visual Doom playing. Samuel Alvernaz, Julian Togelius. 1-8 [doi]
- Measuring strategic depth in games using hierarchical knowledge bases. Daan Apeldoorn, Vanessa Volz. 9-16 [doi]
- General video game playing escapes the no free lunch theorem. Daniel Ashlock, Diego Pérez-Liébana, Amanda Saunders. 17-24 [doi]
- Mixed-initiative procedural generation of dungeons using game design patterns. Alexander Baldwin, Steve Dahlskog, José M. Font, Johan Holmberg. 25-32 [doi]
- Games and big data: A scalable multi-dimensional churn prediction model. Paul Bertens, Anna Guitart, África Periáñez. 33-36 [doi]
- Using multiple worlds for multiple agent roles in games. Joseph Alexander Brown, Daniel Ashlock. 37-44 [doi]
- Detecting flow in games using facial expressions. Andrew Burns, James R. Tulip. 45-52 [doi]
- Monte Carlo tree search based algorithms for dynamic difficulty adjustment. Simon Demediuk, Marco Tamassia, William L. Raffe, Fabio Zambetta, Xiaodong Li, Florian 'Floyd' Mueller. 53-59 [doi]
- Combining cooperative and adversarial coevolution in the context of Pac-Man. Alexander Dockhorn, Rudolf Kruse. 60-67 [doi]
- An intentional AI for Hanabi. Markus Eger, Chris Martens, Marcela Alfaro Cordoba. 68-75 [doi]
- Towards a hybrid neural and evolutionary heuristic approach for playing tile-matching puzzle games. José María Font, Daniel Manrique, Sergio Larrodera, Pablo Ramos-Criado. 76-79 [doi]
- Adaptive gameplay for mobile gaming. Yannick Francillette, Abdelkader Gouaich, Lylia Abrouk. 80-87 [doi]
- Rolling horizon evolution enhancements in general video game playing. Raluca D. Gaina, Simon M. Lucas, Diego Pérez-Liébana. 88-95 [doi]
- 3D cylindrical trace transform based feature extraction for effective human action classification. Georgios Goudelis, Georgios Tsatiris, Kostas Karpouzis, Stefanos D. Kollias. 96-103 [doi]
- A fuzzy system approach for choosing public goods game strategies. Garrison W. Greenwood. 104-109 [doi]
- Evolved communication strategies and emergent behaviour of multi-agents in pursuit domains. Gina Grossi, Brian Ross. 110-117 [doi]
- Beyond playing to win: Diversifying heuristics for GVGAI. Cristina Guerrero-Romero, Annie Louis, Diego Pérez-Liébana. 118-125 [doi]
- CiF-CK: An architecture for social NPCs in commercial games. Manuel Guimarães, Pedro Santos, Arnav Jhala. 126-133 [doi]
- Building an automatic sprite generator with deep convolutional generative adversarial networks. Lewis Horsley, Diego Pérez-Liébana. 134-141 [doi]
- Simulating strategy and dexterity for puzzle games. Aaron Isaksen, Drew Wallace, Adam Finkelstein, Andy Nealen. 142-149 [doi]
- Extracting gamers' cognitive psychological features and improving performance of churn prediction from mobile games. JiHoon Jeon, DuMim Yoon, Seong-il Yang, Kyung-Joong Kim. 150-153 [doi]
- Procedural generation of Angry Birds fun levels using pattern-struct and preset-model. Yuxuan Jiang, Tomohiro Harada, Ruck Thawonmas. 154-161 [doi]
- Learning macromanagement in StarCraft from replays using deep learning. Niels Justesen, Sebastian Risi. 162-169 [doi]
- General video game rule generation. Ahmed Khalifa, Michael Cerny Green, Diego Pérez-Liébana, Julian Togelius. 170-177 [doi]
- Opponent modeling based on action table for MCTS-based fighting game AI. Man-Je Kim, Kyung-Joong Kim. 178-180 [doi]
- Text-based adventures of the Golovin AI agent. Bartosz Kostka, Jaroslaw Kwiecien, Jakub Kowalski, Pawel Rychlikowski. 181-188 [doi]
- Optimizing game live service for mobile free-to-play games. Sang-Kwang Lee, Seong-il Yang. 189-190 [doi]
- Showdown AI competition. Scott Lee, Julian Togelius. 191-198 [doi]
- Fight or flight: Evolving maps for Cube 2 to foster a fleeing behavior. Daniele Loiacono, Luca Arnaboldi. 199-206 [doi]
- Learning human-like behaviors using neuroevolution with statistical penalties. Luong Huu Phuc, Kanazawa Naoto, Ikeda Kokolo. 207-214 [doi]
- Using Monte Carlo tree search and Google Maps to improve game balancing in location-based games. Luís Fernando Maia Silva, Windson Viana, Fernando Trinta. 215-222 [doi]
- Learning to play visual Doom using model-free episodic control. Byeong-Jun Min, Kyung-Joong Kim. 223-225 [doi]
- Automated learning of hierarchical task networks for controlling Minecraft agents. Chanh Nguyen, Noah Reifsnyder, Sriram Gopalakrishnan, Héctor Muñoz-Avila. 226-231 [doi]
- Improving generalization ability in a puzzle game using reinforcement learning. Hiroya Oonishi, Hitoshi Iima. 232-239 [doi]
- Automated game design learning. Joseph C. Osborn, Adam Summerville, Michael Mateas. 240-247 [doi]
- Introducing real world physics and macro-actions to general video game AI. Diego Pérez-Liébana, Matthew Stephenson, Raluca D. Gaina, Jochen Renz, Simon M. Lucas. 248-255 [doi]
- DLNE: A hybridization of deep learning and neuroevolution for visual control. Andreas Precht Poulsen, Mark Thorhauge, Mikkel Hvilshøj Funch, Sebastian Risi. 256-263 [doi]
- Resource-gathering algorithms in the game of StarCraft. Martin L. M. Rooijackers, Mark H. M. Winands. 264-271 [doi]
- Monte Carlo tree search experiments in Hearthstone. Andre Santos, Pedro A. Santos, Francisco S. Melo. 272-279 [doi]
- Procedural level generation using multi-layer level representations with MdMCs. Sam Snodgrass, Santiago Ontañón. 280-287 [doi]
- Generating varied, stable and solvable levels for Angry Birds style physics games. Matthew Stephenson, Jochen Renz. 288-295 [doi]
- Single believe state generation for partially observable real-time strategy games. Alberto Uriarte, Santiago Ontañón. 296-303 [doi]
- Cellular automata simulation on FPGA for training neural networks with virtual world imagery. Olivier Van Acker, Oded Lachish, Graeme Burnett. 304-305 [doi]
- Deep Q networks for visual fighting game AI. Seonghun Yoon, Kyung-Joong Kim. 306-308 [doi]
- Improving Hearthstone AI by learning high-level rollout policies and bucketing chance node events. Shuyi Zhang, Michael Buro. 309-316 [doi]
- Monte Carlo tree search with temporal-difference learning for general video game playing. Ercument Ilhan, A. Sima Etaner-Uyar. 317-324 [doi]