[Jairsubscribers] 12 new articles published by JAIR

jair-ed@isi.edu
Fri Oct 29 15:26:15 PDT 2010


Dear JAIR subscriber:
 This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)

----------------------------------------------------------------
I. New JAIR Articles

T. Lang and M. Toussaint (2010)
  "Planning with Noisy Probabilistic Relational Rules",
   Volume 39, pages 1-49

   For quick access go to <http://www.jair.org/papers/paper3093.html>

Abstract:
Noisy probabilistic relational rules are a promising world model representation for several reasons. They are compact and generalize over world instantiations. They are usually interpretable, and they can be learned effectively from action experiences in complex worlds. We investigate reasoning with such rules in grounded relational domains. Our algorithms exploit the compactness of rules for efficient and flexible decision-theoretic planning. As a first approach, we combine these rules with the Upper Confidence Bounds applied to Trees (UCT) algorithm, which is based on look-ahead trees. Our second approach converts these rules into a structured dynamic Bayesian network representation and predicts the effects of action sequences using approximate inference and beliefs over world states. We evaluate the effectiveness of our approaches for planning in a simulated complex 3D robot manipulation scenario with an articulated manipulator and realistic physics, and in domains of the probabilistic planning competition. Empirical results show that our methods can solve problems where existing methods fail.
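
As a rough illustration of the UCT backbone of the first approach (a sketch only, not the authors' planner), the UCB1 rule below selects which child of a look-ahead tree node to sample next; the Node class and the exploration constant c are assumptions introduced for the example:

    import math

    class Node:
        # A node in a UCT look-ahead tree (illustrative stand-in).
        def __init__(self, state):
            self.state = state
            self.visits = 0
            self.total_reward = 0.0
            self.children = []   # successors sampled via the relational rules

    def ucb1_select(node, c=1.4):
        # Pick the child maximizing mean reward plus an exploration bonus
        # that shrinks as the child accumulates visits (the UCB1 score).
        def score(child):
            if child.visits == 0:
                return float("inf")   # always try unvisited children first
            mean = child.total_reward / child.visits
            return mean + c * math.sqrt(math.log(node.visits) / child.visits)
        return max(node.children, key=score)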


M. Katz and C. Domshlak (2010)
  "Implicit Abstraction Heuristics",
   Volume 39, pages 51-126

   For quick access go to <http://www.jair.org/papers/paper3063.html>

Abstract:
State-space search with explicit abstraction heuristics is at the state of the art of cost-optimal planning. However, these heuristics are inherently limited because the size of the abstract space must be bounded by some constant, even if a very large one. Targeting this shortcoming, we introduce the notion of (additive) implicit abstractions, in which the planning task is abstracted by instances of tractable fragments of optimal planning. We then introduce a concrete setting of this framework, called fork-decomposition, that is based on two novel fragments of tractable cost-optimal planning. The induced admissible heuristics are then studied formally and empirically. This study testifies to the accuracy of the fork-decomposition heuristics, yet our empirical evaluation also stresses the tradeoff between their accuracy and the runtime complexity of computing them. Indeed, some of the power of the explicit abstraction heuristics comes from precomputing the heuristic function offline and then determining h(s) for each evaluated state s by a very fast lookup in a "database." By contrast, while fork-decomposition heuristics can be calculated in polynomial time, computing them is far from fast. To address this problem, we show that the time-per-node complexity bottleneck of the fork-decomposition heuristics can be successfully overcome. We demonstrate that an equivalent of the explicit abstraction notion of a "database" exists for the fork-decomposition abstractions as well, despite their exponential-size abstract spaces. We then verify empirically that heuristic search with the "databased" fork-decomposition heuristics competes favorably with the state of the art of cost-optimal planning.
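
The "database" intuition carries over from explicit abstractions, where h(s) is a precomputed table lookup and several abstract estimates can be summed admissibly if they account for disjoint action costs. A minimal sketch of such an additive lookup (the projections and cost tables are toy stand-ins, not the paper's fork-decomposition construction):

    # Toy additive abstraction lookup, in the spirit of pattern databases;
    # the projections and tables below are illustrative only.

    def project_first(state):
        return state[0]     # abstract away everything but variable 0

    def project_second(state):
        return state[1]     # abstract away everything but variable 1

    # Offline phase: one precomputed abstract-cost table per abstraction.
    databases = [
        (project_first,  {0: 0, 1: 2, 2: 3}),
        (project_second, {0: 0, 1: 1, 2: 4}),
    ]

    def h(state):
        # Online phase: one fast lookup per abstraction; the sum stays
        # admissible only if the abstractions share no action costs.
        return sum(table[proj(state)] for proj, table in databases)

    print(h((2, 1)))   # 3 + 1 = 4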


S. Richter and M. Westphal (2010)
  "The LAMA Planner: Guiding Cost-Based Anytime Planning with Landmarks",
   Volume 39, pages 127-177

   For quick access go to <http://www.jair.org/papers/paper2972.html>

Abstract:
LAMA is a classical planning system based on heuristic forward search. Its core feature is the use of a pseudo-heuristic derived from landmarks, propositional formulas that must be true in every solution of a planning task. LAMA builds on the Fast Downward planning system, using finite-domain rather than binary state variables and multi-heuristic search. The latter is employed to combine the landmark heuristic with a variant of the well-known FF heuristic. Both heuristics are cost-sensitive, focusing on high-quality solutions in the case where actions have non-uniform cost. A weighted A* search is used with iteratively decreasing weights, so that the planner continues to search for plans of better quality until the search is terminated.

LAMA showed the best performance among all planners in the sequential satisficing track of the International Planning Competition 2008. In this paper we present the system in detail and investigate which features of LAMA are crucial for its performance. We present individual results for some of the domains used at the competition, demonstrating good and bad cases for the techniques implemented in LAMA. Overall, we find that using landmarks improves performance, whereas the incorporation of action costs into the heuristic estimators proves not to be beneficial. We show that in some domains a search that ignores cost solves far more problems, raising the question of how to deal with action costs more effectively in the future. The iterated weighted A* search greatly improves results and shows synergy effects with the use of landmarks.
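
The anytime behaviour described above amounts to an outer loop that restarts weighted A* with a smaller weight whenever a plan is found, keeping the cheapest plan so far. A minimal sketch of that loop (the search routine and the weight schedule are assumptions for illustration, not LAMA's actual code):

    import math

    def anytime_weighted_astar(search, task, weights=(10, 5, 3, 2, 1)):
        # search(task, w, bound) is an assumed weighted A* routine using
        # f(n) = g(n) + w * h(n); it returns (plan, cost) or None. A large
        # w yields a plan quickly; w = 1 searches for an optimal plan.
        best_plan, best_cost = None, math.inf
        for w in weights:                    # iteratively decreasing weights
            result = search(task, w, best_cost)
            if result is not None:
                plan, cost = result
                if cost < best_cost:         # keep only strict improvements
                    best_plan, best_cost = plan, cost
        return best_plan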


G. Chalkiadakis, E. Elkind, E. Markakis, M. Polukarov and N. R. Jennings (2010)
  "Cooperative Games with Overlapping Coalitions",
   Volume 39, pages 179-216

   For quick access go to <http://www.jair.org/papers/paper3075.html>

Abstract:
In the usual models of cooperative game theory, the outcome of a coalition formation process is either the grand coalition or a coalition structure that consists of disjoint coalitions. However, in many domains where coalitions are associated with tasks, an agent may be involved in executing more than one task, and thus may distribute his resources among several coalitions. To tackle such scenarios, we introduce a model for cooperative games with overlapping coalitions—or overlapping coalition formation (OCF) games. We then explore the issue of stability in this setting. In particular, we introduce a notion of the core, which generalizes the corresponding notion in the traditional (non-overlapping) scenario. Then, under some quite general conditions, we characterize the elements of the core, and show that any element of the core maximizes the social welfare. We also introduce a concept of balancedness for overlapping coalitional games, and use it to characterize coalition structures that can be extended to elements of the core. Finally, we generalize the notion of convexity to our setting, and show that under some natural assumptions convex games have a non-empty core. Moreover, we introduce two alternative notions of stability in OCF that allow a wider range of deviations, and explore the relationships among the corresponding definitions of the core, as well as the classic (non-overlapping) core and the Aubin core. We illustrate the general properties of the three cores, and also study them from a computational perspective, thus obtaining additional insights into their fundamental structure.
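
For contrast with the overlapping setting studied in the paper, the classic (non-overlapping) core that it generalizes can be checked directly: an allocation is in the core iff it distributes the grand coalition's value and no coalition is paid less than it could earn on its own. A brute-force sketch for a toy characteristic-function game (illustrative only, not the paper's OCF core):

    from itertools import combinations

    def in_core(v, players, x, tol=1e-9):
        # Classic core test: x must distribute the grand coalition's value,
        # and no coalition C may satisfy v(C) > sum of x_i over i in C
        # (otherwise C would profitably deviate).
        grand = frozenset(players)
        if abs(sum(x[i] for i in players) - v[grand]) > tol:
            return False                       # allocation is not efficient
        for r in range(1, len(players)):
            for c in combinations(players, r):
                if v.get(frozenset(c), 0.0) > sum(x[i] for i in c) + tol:
                    return False               # coalition c would deviate
        return True

    # Toy 2-player game: cooperating is worth 2, acting alone is worth 0.
    v = {frozenset({1, 2}): 2.0, frozenset({1}): 0.0, frozenset({2}): 0.0}
    print(in_core(v, [1, 2], {1: 1.0, 2: 1.0}))   # True: equal split is stable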


M. O. Riedl and R. M. Young (2010)
  "Narrative Planning: Balancing Plot and Character",
   Volume 39, pages 217-268

   For quick access go to <http://www.jair.org/papers/paper2989.html>

Abstract:
Narrative, and in particular storytelling, is an important part of the human experience.  Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers.  One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences.  There are many factors -- logical and aesthetic -- that contribute to the success of a narrative artifact.  Central to this success is its understandability.  We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability.  Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief.  Specifically, characters must be perceived by the audience to be intentional agents.  In this article, we explore the use of refinement search as a technique for solving the narrative generation problem -- to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm -- the Intent-based Partial Order Causal Link (IPOCL) planner -- that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.
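
The causal-link machinery underlying IPOCL (and partial-order causal-link planning generally) can be made concrete with a small sketch: a link records that one step establishes a condition for another, and any step that deletes the condition and may fall between the two is a threat the planner must repair. The record types and the ordering test below are illustrative assumptions, not IPOCL's representation:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Step:
        name: str
        deletes: frozenset = field(default_factory=frozenset)  # negated conditions

    @dataclass(frozen=True)
    class CausalLink:
        producer: Step      # step that establishes the condition
        condition: str
        consumer: Step      # step that needs the condition

    def threats(link, steps, possibly_between):
        # A step threatens a causal link if it deletes the linked condition
        # and the (assumed) ordering test allows it between producer and
        # consumer; each threat is a flaw to repair by reordering.
        return [s for s in steps
                if link.condition in s.deletes
                and possibly_between(link.producer, s, link.consumer)]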


V. Bulitko, Y. Björnsson and R. Lawrence (2010)
  "Case-Based Subgoaling in Real-Time Heuristic Search for Video Game Pathfinding",
   Volume 39, pages 269-300

   For quick access go to <http://www.jair.org/papers/paper3076.html>

Abstract:
Real-time heuristic search algorithms satisfy a constant bound on the amount of planning per action, independent of problem size. As a result, they scale up well as problems become larger. This property would make them well suited for video games where Artificial Intelligence controlled agents must react quickly to user commands and to other agents' actions. On the downside, real-time search algorithms employ learning methods that frequently lead to poor solution quality and cause the agent to appear irrational by re-visiting the same problem states repeatedly. The situation changed recently with a new algorithm, D LRTA*, which attempted to eliminate learning by automatically selecting subgoals. D LRTA* is well poised for video games, except it has a complex and memory-demanding pre-computation phase during which it builds a database of subgoals. In this paper, we propose a simpler and more memory-efficient way of pre-computing subgoals thereby eliminating the main obstacle to applying state-of-the-art real-time search methods in video games. The new algorithm solves a number of randomly chosen problems off-line, compresses the solutions into a series of subgoals and stores them in a database. When presented with a novel problem on-line, it queries the database for the most similar previously solved case and uses its subgoals to solve the problem. In the domain of pathfinding on four large video game maps, the new algorithm delivers solutions eight times better while using 57 times less memory and requiring 14% less pre-computation time.
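
The on-line phase described above is essentially a nearest-case lookup followed by a chain of short searches, one per stored subgoal. A rough sketch (the similarity measure, the case format, and the per-segment search routine are placeholder assumptions, not the paper's exact definitions):

    def solve_with_subgoals(start, goal, database, similarity, search):
        # Case-based subgoaling sketch: fetch the most similar solved case,
        # then chase its stored subgoals one at a time so that each
        # individual search stays short (e.g. a bounded real-time search).
        case = max(database,
                   key=lambda c: similarity((start, goal), c["problem"]))
        path, current = [], start
        for subgoal in case["subgoals"] + [goal]:   # finish at the real goal
            segment = search(current, subgoal)
            path.extend(segment)
            current = subgoal
        return path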



A. Feldman, G. Provan and A. van Gemund (2010)
  "A Model-Based Active Testing Approach to Sequential Diagnosis",
   Volume 39, pages 301-334

   For quick access go to <http://www.jair.org/papers/paper3031.html>

Abstract:
Model-based diagnostic reasoning often leads to a large number of diagnostic hypotheses. The set of diagnoses can be reduced by taking into account extra observations (passive monitoring), measuring additional variables (probing) or executing additional tests (sequential diagnosis/test sequencing). In this paper we combine the above approaches with techniques from Automated Test Pattern Generation (ATPG) and Model-Based Diagnosis (MBD) into a framework called FRACTAL (FRamework for ACtive Testing ALgorithms). Apart from the inputs and outputs that connect a system to its environment, in active testing we consider additional input variables to which a sequence of test vectors can be supplied. We address the computationally hard problem of computing optimal control assignments (as defined in FRACTAL) in terms of a greedy approximation algorithm called FRACTAL-G. We compare the decrease in the number of remaining minimal cardinality diagnoses of FRACTAL-G to that of two more FRACTAL algorithms: FRACTAL-ATPG and FRACTAL-P. FRACTAL-ATPG is based on ATPG and sequential diagnosis while FRACTAL-P is based on probing and, although not an active testing algorithm, provides a baseline for comparing the lower bound on the number of reachable diagnoses for the FRACTAL algorithms. We empirically evaluate the trade-offs of the three FRACTAL algorithms by performing extensive experimentation on the ISCAS85/74XXX benchmark of combinational circuits.
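
Abstractly, a greedy active-testing loop of the FRACTAL-G kind picks, at each step, the control assignment expected to leave the fewest diagnoses, applies it, and prunes the diagnosis set with the observed outcome. A toy sketch (the scoring and pruning helpers are assumptions, not the paper's objective):

    def greedy_active_testing(tests, diagnoses, expected_remaining,
                              apply_test, max_steps=10):
        # Greedy test sequencing sketch: repeatedly run the test whose
        # expected number of surviving diagnoses is smallest, then keep
        # only the diagnoses consistent with the observed outcome.
        tests = list(tests)
        for _ in range(max_steps):
            if len(diagnoses) <= 1 or not tests:
                break                  # fault isolated, or out of tests
            best = min(tests, key=lambda t: expected_remaining(t, diagnoses))
            tests.remove(best)
            diagnoses = apply_test(best, diagnoses)
        return diagnoses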


B. Bidyuk, R. Dechter and E. Rollon (2010)
  "Active Tuples-based Scheme for Bounding Posterior Beliefs",
   Volume 39, pages 335-371

   For quick access go to <http://www.jair.org/papers/paper2945.html>

Abstract:
The paper presents a scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power lies in its ability to use any available scheme that bounds the probability of evidence or posterior marginals and enhance its performance in an anytime manner. The scheme uses the cutset conditioning principle to tighten existing bounding schemes and to facilitate anytime behavior, utilizing a fixed number of cutset tuples. The accuracy of the bounds improves as the number of used cutset tuples increases, and so does the computation time. We demonstrate empirically the value of our scheme for bounding posterior marginals and probability of evidence using a variant of the bound propagation algorithm as a plug-in scheme.
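
The arithmetic behind the scheme can be pictured as follows: since P(e) decomposes as a sum of P(c, e) over cutset tuples c, computing a fixed number of tuples exactly yields their total mass as a lower bound, and any plug-in upper bound on the unexplored remainder gives the matching upper bound. A toy sketch with assumed precomputed inputs:

    def bound_probability_of_evidence(explored_mass, remainder_upper):
        # Cutset-conditioning sketch: P(e) = sum over cutset tuples c of
        # P(c, e). The exactly computed tuples give the lower bound; a
        # plug-in bound on the remaining tuples closes the upper bound.
        lower = explored_mass
        upper = explored_mass + remainder_upper
        return lower, upper

    # Toy numbers: the explored tuples carry 0.42 mass and the plug-in
    # scheme bounds the rest by 0.10, so 0.42 <= P(e) <= 0.52.
    print(bound_probability_of_evidence(0.42, 0.10))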


B. Banerjee and B. Chandrasekaran (2010)
  "A Constraint Satisfaction Framework for Executing Perceptions and Actions in Diagrammatic Reasoning",
   Volume 39, pages 373-427

   For quick access go to <http://www.jair.org/papers/paper3069.html>

Abstract:
Diagrammatic reasoning (DR) is pervasive in human problem solving as a powerful adjunct to symbolic reasoning based on language-like representations. The research reported in this paper is a contribution to building a general purpose DR system as an extension to a SOAR-like problem solving architecture. The work is in a framework in which DR is modeled as a process where subtasks are solved, as appropriate, either by inference from symbolic representations or by interaction with a diagram, i.e., perceiving specified information from a diagram or modifying/creating objects in a diagram in specified ways according to problem solving needs. The perceptions and actions in most DR systems built so far are hand-coded for the specific application, even when the rest of the system is built using the general architecture. The absence of a general framework for executing perceptions/actions poses a major hindrance to using them opportunistically -- the essence of open-ended search in problem solving.

Our goal is to develop a framework for executing a wide variety of specified perceptions and actions across tasks/domains without human intervention. We observe that the domain/task-specific visual perceptions/actions can be transformed into domain/task-independent spatial problems. We specify a spatial problem as a quantified constraint satisfaction problem in the real domain using an open-ended vocabulary of properties, relations and actions involving three kinds of diagrammatic objects -- points, curves, regions. Solving a spatial problem from this specification requires computing the equivalent simplified quantifier-free expression, the complexity of which is inherently doubly exponential. We represent objects as configurations of simple elements to facilitate decomposition of complex problems into simpler and similar subproblems. We show that, if the symbolic solution to a subproblem can be expressed concisely, quantifiers can be eliminated from spatial problems in low-order polynomial time using similar previously solved subproblems. This requires determining the similarity of two problems, the existence of a mapping between them computable in polynomial time, and designing a memory for storing previously solved problems so as to facilitate search. The efficacy of the idea is shown by time complexity analysis. We demonstrate the proposed approach by executing perceptions and actions involved in DR tasks in two army applications.
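
Quantifier elimination over the reals, which the reduction above relies on, can be seen in a one-line example: the quantified statement that a monic quadratic has a real root is equivalent to a quantifier-free sign condition on its coefficients. The general procedure behind such simplifications is what carries the doubly exponential cost the paper works around:

    exists x in R :  x^2 + b*x + c = 0      <=>      b^2 - 4*c >= 0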


S. Rudolph and B. Glimm (2010)
  "Nominals, Inverses, Counting, and Conjunctive Queries or: Why Infinity is your Friend!",
   Volume 39, pages 429-481

   For quick access go to <http://www.jair.org/papers/paper3029.html>

Abstract:
Description Logics are knowledge representation formalisms that provide, for example, the logical underpinning of the W3C OWL standards. Conjunctive queries, the standard query language in databases, have recently gained significant attention as an expressive formalism for querying Description Logic knowledge bases. Several different techniques for deciding conjunctive query entailment are available for a wide range of DLs. Nevertheless, the combination of nominals, inverse roles, and number restrictions in OWL 1 and OWL 2 DL causes unsolvable problems for the techniques hitherto available. We tackle this problem and present a decidability result for entailment of unions of conjunctive queries in the DL ALCHOIQb that contains all three problematic constructors simultaneously. Provided that queries contain only simple roles, our result also shows decidability of entailment of (unions of) conjunctive queries in the logic that underpins OWL 1 DL. We believe that the presented results will pave the way for further progress towards conjunctive query entailment decision procedures for the Description Logics underlying the OWL standards.


M. Geist and O. Pietquin (2010)
  "Kalman Temporal Differences",
   Volume 39, pages 483-532

   For quick access go to <http://www.jair.org/papers/paper3077.html>

Abstract:
Because reinforcement learning suffers from a lack of scalability, online value (and Q-) function approximation has received increasing interest over the last decade. This contribution introduces a novel approximation scheme, namely the Kalman Temporal Differences (KTD) framework, that exhibits the following features: sample-efficiency, non-linear approximation, non-stationarity handling and uncertainty management. A first KTD-based algorithm is provided for deterministic Markov Decision Processes (MDPs); it produces biased estimates in the case of stochastic transitions. Then the eXtended KTD framework (XKTD), which handles stochastic MDPs, is described. Convergence is analyzed for special cases, for both deterministic and stochastic transitions. The related algorithms are evaluated on classical benchmarks. They compare favorably to the state of the art while exhibiting the announced features.
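
As a concrete special case (a simplified sketch, not the full unscented KTD of the paper), value estimation with linear features and deterministic transitions reduces to a scalar-observation Kalman filter whose hidden state is the parameter vector and whose observation model is r = (phi(s) - gamma * phi(s'))^T theta plus noise:

    import numpy as np

    def ktd_linear_update(theta, P, phi_s, phi_s2, reward, gamma=0.95,
                          obs_noise=1e-2, process_noise=1e-5):
        # One Kalman update for linear value estimation, a simplified
        # special case in the spirit of KTD (the full framework uses an
        # unscented transform to handle non-linear parametrizations).
        # Hidden state: parameters theta (random walk) with covariance P.
        P = P + process_noise * np.eye(len(theta))   # prediction step
        H = phi_s - gamma * phi_s2                   # observation direction
        innovation = reward - H @ theta              # temporal-difference error
        S = H @ P @ H + obs_noise                    # innovation variance
        K = P @ H / S                                # Kalman gain
        theta = theta + K * innovation               # correction step
        P = P - np.outer(K, H) @ P                   # covariance shrinks
        return theta, P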


K. Daniel, A. Nash, S. Koenig and A. Felner (2010)
  "Theta*: Any-Angle Path Planning on Grids",
   Volume 39, pages 533-579

   For quick access go to <http://www.jair.org/papers/paper2994.html>

Abstract:
Grids with blocked and unblocked cells are often used to represent terrain in robotics and video games. However, paths formed by grid edges can be longer than true shortest paths in the terrain since their headings are artificially constrained. We present two new correct and complete any-angle path-planning algorithms that avoid this shortcoming. Basic Theta* and Angle-Propagation Theta* are both variants of A* that propagate information along grid edges without constraining paths to grid edges. Basic Theta* is simple to understand and implement, fast, and finds short paths. However, it is not guaranteed to find true shortest paths. Angle-Propagation Theta* achieves a better worst-case complexity per vertex expansion than Basic Theta* by propagating angle ranges when it expands vertices, but is more complex, not as fast, and finds slightly longer paths. We refer to Basic Theta* and Angle-Propagation Theta* collectively as Theta*. Theta* has unique properties, which we analyze in detail. We show experimentally that it finds shorter paths than both A* with post-smoothed paths and Field D* (the only other version of A* we know of that propagates information along grid edges without constraining paths to grid edges) with a runtime comparable to that of A* on grids. Finally, we extend Theta* to grids that contain unblocked cells with non-uniform traversal costs and introduce variants of Theta* which provide different tradeoffs between path length and runtime.
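
The key deviation of Basic Theta* from A* fits in one routine: when relaxing an edge, first try to connect the neighbor straight to the parent of the expanded vertex if line of sight permits, so paths need not follow grid edges. A sketch of that update (the grid helpers line_of_sight and dist are assumed to be supplied by the map representation):

    import math

    def update_vertex(s, neighbor, g, parent, line_of_sight, dist):
        # Basic Theta*'s modification of the A* relaxation step.
        if line_of_sight(parent[s], neighbor):
            # Path 2: bypass s and connect the neighbor to s's parent,
            # giving an any-angle segment not constrained to grid edges.
            candidate = g[parent[s]] + dist(parent[s], neighbor)
            if candidate < g.get(neighbor, math.inf):
                g[neighbor] = candidate
                parent[neighbor] = parent[s]
        else:
            # Path 1: the ordinary A* update through s itself.
            candidate = g[s] + dist(s, neighbor)
            if candidate < g.get(neighbor, math.inf):
                g[neighbor] = candidate
                parent[neighbor] = s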




----------------------------------------------------------------
II. Unsubscribing from our Mailing List

To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions.  In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed@isi.edu.

----------------------------------------------------------------


