The human mind has a limited capacity for processing the huge amount of detailed information in its environment; to compensate, the brain groups the information it perceives by similarity, proximity, or functionality and assigns each group a name, a “word” in natural language. This classification of information allows humans to perform complex tasks and make intelligent decisions in an inherently vague and imprecise environment without any measurements or computation. Inspired by this human capability, Zadeh introduced the machinery of Computing with Words (CW) as a tool for formulating human reasoning with perceptions drawn from natural language, and argued that adding CW theory to the existing tools gives rise to theories with enhanced capabilities for dealing with real-world problems and makes it possible to design systems with a higher level of machine intelligence. To this end, CW offers two principal components: (1) a language for representing the meaning of words taken from natural language, called the Generalized Constraint Language (GCL), and (2) a set of deduction rules for computing and reasoning with words instead of numbers. CW is rooted in fuzzy logic; however, it offers a much more general methodology for fusing natural language propositions with computation over fuzzy variables. CW inference rules are drawn from various fuzzy domains, such as fuzzy logic, fuzzy arithmetic, fuzzy probability, and fuzzy syllogism. This paper reports preliminary work on the implementation of a CW inference system on top of the JESS expert system shell (CWJess). The CW reasoning is fully integrated with JESS facts and its inference engine, and allows knowledge to be specified in terms of GCL assertions.
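As a minimal illustration of the fuzzy-logic roots of GCL assertions, the sketch below shows how the meaning of a single linguistic word (here "tall") can be carried by a trapezoidal membership function, the kind of fuzzy set a constraint such as "Height(John) is tall" would denote. The word, the attribute, and the breakpoints are illustrative assumptions, not CWJess's actual representation.

```python
# Sketch: meaning of a linguistic word as a fuzzy membership function.
# The breakpoints below (heights in cm) are illustrative assumptions.
def trapezoid(a, b, c, d):
    """Membership function rising on [a,b], flat at 1.0 on [b,c], falling on [c,d]."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)   # rising edge
        return (d - x) / (d - c)       # falling edge
    return mu

# "tall" as a fuzzy subset of heights: fully tall from 180 cm to 220 cm.
tall = trapezoid(160, 180, 220, 230)

# A GCL-style assertion "Height(John) is tall" then assigns John's height
# a degree of membership rather than a crisp true/false value:
degree = tall(170)  # partially tall
```

Reasoning over such assertions then reduces to computing with these membership degrees (e.g. min/max for conjunction and disjunction) rather than with crisp truth values.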
The aim of the intelligent techniques, termed game AI, used in computer video games is to provide interesting and challenging game play to the player. Being highly sophisticated, these games present game developers with requirements and challenges similar to those faced by the academic AI community. Game companies claim to use sophisticated game AI to model artificial characters such as computer game bots, intelligent realistic AI agents. In practice, however, these bots work via simple routines pre-programmed to suit the game map, game rules, game type, and other parameters unique to each game. Mostly, illusory intelligent behaviors are programmed using simple conditional statements and are hard-coded into the bots’ logic. Moreover, a game programmer has to spend considerable time configuring crisp inputs for these conditional statements. We therefore see a need for machine learning techniques to dynamically improve bots’ behavior and save valuable programmer time. We selected Q-learning, a reinforcement learning technique, to evolve dynamic intelligent bots, as it is a simple, efficient, online learning algorithm. Machine learning techniques such as reinforcement learning are known to be intractable if they use a detailed model of the world, and they also require tuning of various parameters to give satisfactory performance. Therefore, this paper examines Q-learning for evolving a few basic behaviors for computer game bots, namely learning to fight and planting the bomb. Furthermore, we experimented with how bots can use knowledge learned from abstract models to evolve their behavior in a more detailed model of the world.
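A minimal sketch of the tabular Q-learning update a bot could run online is given below. The states, action names, and reward values are illustrative assumptions, not the paper's actual game model; only the update rule and epsilon-greedy selection are standard Q-learning.

```python
import random

# Hyperparameters: learning rate, discount factor, exploration rate.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Hypothetical discrete action set for a game bot.
ACTIONS = ["fight", "plant_bomb", "retreat"]

def update(Q, s, a, r, s_next):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

def choose(Q, s):
    """Epsilon-greedy action selection: explore with probability EPSILON,
    otherwise take the action with the highest estimated value."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

# One learning step: the bot fought near an enemy and was rewarded.
Q = {}
update(Q, "near_enemy", "fight", 1.0, "enemy_down")
```

Because each update touches only one state-action entry, the bot can learn during play; the abstract-to-detailed transfer mentioned above would amount to initializing the detailed model's Q-table from values learned in the coarse one.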
In modern computer games, ‘bots’, intelligent realistic agents, play a prominent role in a game’s commercial success. Typically, bots are modeled using finite-state machines and then programmed via simple conditional statements hard-coded into the bots’ logic. Since such bots have become quite predictable to an experienced game player, the player may lose interest in the game. We present a model of bots using BDI agents, which show more human-like, more believable behavior and provide a more realistic feel to the game. These bots use inputs from actual game players to specify their Beliefs, Desires, and Intentions during game play.
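To make the BDI decomposition concrete, here is a minimal deliberation-loop sketch: beliefs are the bot's current view of the game state, desires are prioritized goals, and the intention is the goal the bot commits to acting on. The belief keys, desires, and plans below are illustrative assumptions, not the paper's actual model.

```python
# Hedged BDI sketch: pick the highest-priority desire whose plan is
# applicable under the current beliefs, and commit to it as the intention.
def deliberate(beliefs, desires, plans):
    for desire in desires:  # desires ordered by priority
        precondition, action = plans[desire]
        if precondition(beliefs):
            return desire, action  # committed intention and its action
    return None, "idle"

# Hypothetical game state and goals (would come from player inputs).
beliefs = {"enemy_visible": True, "health": 80}
desires = ["attack", "find_health"]
plans = {
    "attack":      (lambda b: b["enemy_visible"] and b["health"] > 30, "fire_weapon"),
    "find_health": (lambda b: b["health"] <= 30,                       "seek_medkit"),
}

intention, action = deliberate(beliefs, desires, plans)
```

In the proposed model, traces of real players' decisions would populate the desires and plans, so the bot's commitments, and hence its observable behavior, vary the way a human player's would rather than following one fixed conditional script.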