From jairmail at ISI.EDU Fri Jan 23 18:03:07 2004
From: jairmail at ISI.EDU (JAIRMAIL)
Date: Fri Jan 23 18:03:11 2004
Subject: [Jairsubscribers] JAIR Update (Special Issue)
Message-ID: <200401240203.i0O237d07374@nitro.isi.edu>
CONTENTS:
I. New JAIR Articles
II. Instructions for Obtaining Articles
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR
(Volume 20: The JAIR Special Issue on the 3rd International Planning
Competition) and describes how to access them. If you wish to remove
yourself from this mailing list, see section II.
Long, D. and Fox, M.
"The 3rd International Planning Competition: Results and Analysis"
Fox, M. and Long, D.
"PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains"
Bacchus, F.
"The Power of Modeling---a Response to PDDL2.1"
Boddy, M.S.
"Imperfect Match: PDDL 2.1 and Real Applications"
Geffner, H.A.
"PDDL 2.1: Representation vs. Computation"
McDermott, D.
"PDDL2.1 -- The Art of the Possible? Commentary on Fox and Long"
Smith, D.E.
"The Case for Durative Actions: A Commentary on PDDL2.1"
Do, M. and Kambhampati, S.
"SAPA: A Multi-objective Metric Temporal Planner"
Edelkamp, S.
"Taming Numbers and Durations in the Model Checking Integrated Planning
System"
Gerevini, A., Saetti, A. and Serina, I.
"Planning Through Stochastic Local Search and Temporal Action Graphs in LPG"
Hoffmann, J.
"The Metric-FF Planning System: Translating ``Ignoring Delete Lists'' to
Numeric State Variables"
Kvarnström, J. and Magnusson, M.
"TALplanner in IPC-2002: Extensions and Control Rules"
Nau, D.S., Au, T.C., Ilghami, O., Kuter, U., Murdock, J.W., Wu, D. and Yaman, F.
"SHOP2: An HTN Planning System"
Younes, H.L.S. and Simmons, R.G.
"VHPOP: Versatile Heuristic Partial Order Planner"
----------------------------------------------------------------
I. New JAIR Articles
Long, D. and Fox, M. (2003)
"The 3rd International Planning Competition: Results and Analysis",
Volume 20, pages 1-59.
Abstract:
This paper reports the outcome of the third in the series of biennial
international planning competitions, held in association with the
International Conference on AI Planning and Scheduling (AIPS) in
2002. In addition to describing the domains, the planners and the
objectives of the competition, the paper includes analysis of the
results. The results are analysed from several perspectives, in order
to address the questions of comparative performance between planners,
comparative difficulty of domains, the degree of agreement between
planners about the relative difficulty of individual problem instances
and the question of how well planners scale relative to one another
over increasingly difficult problems. The paper addresses these
questions through statistical analysis of the raw results of the
competition, in order to determine which results can be considered to
be adequately supported by the data. The paper concludes with a
discussion of some challenges for the future of the competition series.
Fox, M. and Long, D. (2003)
"PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains",
Volume 20, pages 61-124.
Abstract:
In recent years research in the planning community has moved
increasingly towards application of planners to realistic problems
involving both time and many types of resources. For example,
interest in planning demonstrated by the space research community has
inspired work in observation scheduling, planetary rover exploration
and spacecraft control domains. Other temporal and resource-intensive
domains including logistics planning, plant control and manufacturing
have also helped to focus the community on the modelling and reasoning
issues that must be confronted to make planning technology meet the
challenges of application.
The International Planning Competitions have acted as an important
motivating force behind the progress that has been made in planning
since 1998. The third competition (held in 2002) set the planning
community the challenge of handling time and numeric resources. This
necessitated the development of a modelling language capable of
expressing temporal and numeric properties of planning domains. In
this paper we describe the language, PDDL2.1, that was used in the
competition. We describe the syntax of the language, its formal
semantics and the validation of concurrent plans. We observe that
PDDL2.1 has considerable modelling power --- exceeding the
capabilities of current planning technology --- and presents a number
of important challenges to the research community.
Bacchus, F. (2003)
"The Power of Modeling---a Response to PDDL2.1",
Volume 20, pages 125-132.
Abstract:
In this commentary I argue that although PDDL is a very useful
standard for the planning competition, its design does not properly
consider the issue of domain modeling. Hence, I would not advocate its
use in specifying planning domains outside of the context of the
planning competition. Rather, the field needs to explore different
approaches and grapple more directly with the problem of effectively
modeling and utilizing all of the diverse pieces of knowledge we
typically have about planning domains.
Boddy, M.S. (2003)
"Imperfect Match: PDDL 2.1 and Real Applications",
Volume 20, pages 133-137.
Abstract:
PDDL was originally conceived and constructed as a lingua franca for
the International Planning Competition. PDDL2.1 embodies a set of
extensions intended to support the expression of something closer to
``real planning problems.'' This objective has only been partially
achieved, due in large part to a deliberate focus on not moving too
far from classical planning models and solution methods.
Geffner, H.A. (2003)
"PDDL 2.1: Representation vs. Computation",
Volume 20, pages 139-144.
Abstract:
I comment on the PDDL 2.1 language and its use in the planning
competition, focusing on the choices made for accommodating time and
concurrency. I also discuss some methodological issues that have to
do with the move toward more expressive planning languages and the
balance needed in planning research between semantics and computation.
McDermott, D. (2003)
"PDDL2.1 -- The Art of the Possible? Commentary on Fox and Long",
Volume 20, pages 145-148.
Abstract:
PDDL2.1 was designed to push the envelope of what planning algorithms
can do, and it has succeeded. It adds two important features: durative
actions, which take time (and may have continuous effects); and
objective functions for measuring the quality of plans. The concept of
durative actions is flawed; and the treatment of their semantics
reveals too strong an attachment to the way many contemporary planners
work. Future PDDL innovators should focus on producing a clean
semantics for additions to the language, and let planner implementers
worry about coupling their algorithms to problems expressed in the
latest version of the language.
Smith, D.E. (2003)
"The Case for Durative Actions: A Commentary on PDDL2.1",
Volume 20, pages 149-154.
Abstract:
The addition of durative actions to PDDL2.1 sparked some
controversy. Fox and Long argued that actions should be considered as
instantaneous, but can start and stop processes. Ultimately, a
limited notion of durative actions was incorporated into the
language. I argue that this notion is still impoverished, and that the
underlying philosophical position of regarding durative actions as
being a shorthand for a start action, process, and stop action ignores
the realities of modelling and execution for complex systems.
Do, M. and Kambhampati, S. (2003)
"SAPA: A Multi-objective Metric Temporal Planner",
Volume 20, pages 155-194.
Abstract:
SAPA is a domain-independent heuristic forward chaining planner
that can handle durative actions, metric resource constraints,
and deadline goals. It is designed to be capable of handling the
multi-objective nature of metric temporal planning. Our technical
contributions include (i) planning-graph based methods for deriving
heuristics that are sensitive to both cost and makespan, (ii)
techniques for adjusting the heuristic estimates to take action
interactions and metric resource limitations into account, and
(iii) a linear time greedy post-processing technique to improve
execution flexibility of the solution plans. An implementation of SAPA
using many of the techniques presented in this paper was one of
the best domain independent planners for domains with metric and
temporal constraints in the third International Planning Competition,
held at AIPS-02. We describe the technical details of extracting the
heuristics and present an empirical evaluation of the current
implementation of SAPA.
Edelkamp, S. (2003)
"Taming Numbers and Durations in the Model Checking Integrated Planning
System",
Volume 20, pages 195-238.
Abstract:
The Model Checking Integrated Planning System (MIPS) is a temporal
least commitment heuristic search planner based on a flexible
object-oriented workbench architecture. Its design clearly separates
explicit and symbolic directed exploration algorithms from the set of
on-line and off-line computed estimates and associated data
structures.
MIPS has shown distinguished performance in the last two international
planning competitions. In the last event the description language was
extended from pure propositional planning to include numerical state
variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules.
As a participant of the fully automated track of the competition, MIPS
has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This
article introduces and analyzes the most important algorithmic
novelties that were necessary to tackle the new layers of
expressiveness in the benchmark problems and to achieve a high level
of performance.
The extensions include critical path analysis of sequentially
generated plans to generate corresponding optimal parallel plans. The
linear-time algorithm to compute the parallel plan bypasses known
NP-hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations.
The efficiency of this algorithm also allows us to improve the
exploration guidance: for each encountered planning state the
corresponding approximate sequential plan is scheduled.
One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that
infers knowledge to minimize the state description length, and that
detects domain object symmetries. The latter aspect is analyzed in
detail.
MIPS has been developed to serve as a complete and optimal state space
planner, with admissible estimates, exploration engines and branching
cuts. In the competition version, however, certain performance
compromises had to be made, including floating point arithmetic,
weighted heuristic search exploration according to an inadmissible
estimate and parameterized optimization.
Gerevini, A., Saetti, A. and Serina, I. (2003)
"Planning Through Stochastic Local Search and Temporal Action Graphs in LPG",
Volume 20, pages 239-290.
Abstract:
We present some techniques for planning in domains specified with the
recent standard language PDDL2.1, supporting 'durative actions' and
numerical quantities. These techniques are implemented in LPG, a
domain-independent planner that took part in the 3rd International
Planning Competition (IPC). LPG is an incremental, anytime system
producing multi-criteria quality plans. The core of the system is
based on a stochastic local search method and on a graph-based
representation called 'Temporal Action Graphs' (TA-graphs).
This paper focuses on temporal planning, introducing TA-graphs and
proposing some techniques to guide the search in LPG using this
representation. The experimental results of the 3rd IPC, as well as
further results presented in this paper, show that our techniques
can be very effective. Often LPG outperforms all other fully-automated
planners of the 3rd IPC in terms of speed to derive a solution, or
quality of the solutions that can be produced.
Hoffmann, J. (2003)
"The Metric-FF Planning System: Translating ``Ignoring Delete Lists'' to
Numeric State Variables",
Volume 20, pages 291-341.
An online appendix (C code for the Metric-FF system) is also available.
Abstract:
Planning with numeric state variables has been a challenge for many
years, and was a part of the 3rd International Planning Competition
(IPC-3). Currently one of the most popular and successful algorithmic
techniques in STRIPS planning is to guide search by a heuristic
function, where the heuristic is based on relaxing the planning task
by ignoring the delete lists of the available actions.
We present a natural extension of ``ignoring delete lists'' to numeric
state variables, preserving the relevant theoretical properties of the
STRIPS relaxation under the condition that the numeric task at hand is
``monotonic''. We then identify a subset of the numeric IPC-3
competition language, ``linear tasks'', where monotonicity can be
achieved by pre-processing. Based on that, we extend the algorithms
used in the heuristic planning system FF to linear tasks. The
resulting system Metric-FF is, according to the IPC-3 results which we
discuss, one of the two currently most efficient numeric planners.
Kvarnström, J. and Magnusson, M. (2003)
"TALplanner in IPC-2002: Extensions and Control Rules",
Volume 20, pages 343-377.
Abstract:
TALplanner is a forward-chaining planner that relies on domain
knowledge in the shape of temporal logic formulas in order to prune
irrelevant parts of the search space. TALplanner recently participated
in the third International Planning Competition, which had a clear
emphasis on increasing the complexity of the problem domains being
used as benchmark tests and the expressivity required to represent
these domains in a planning system. Like many other planners,
TALplanner had support for some but not all aspects of this increase
in expressivity, and a number of changes to the planner were required.
After a short introduction to TALplanner, this article describes some
of the changes that were made before and during the competition. We
also describe the process of introducing suitable domain knowledge for
several of the competition domains.
Nau, D.S., Au, T.C., Ilghami, O., Kuter, U., Murdock, J.W., Wu, D. and Yaman, F. (2003)
"SHOP2: An HTN Planning System",
Volume 20, pages 379-404.
Abstract:
The SHOP2 planning system received one of the awards for distinguished
performance in the 2002 International Planning Competition. This paper
describes the features of SHOP2 which enabled it to excel in the
competition, especially those aspects of SHOP2 that deal with temporal
and metric planning domains.
Younes, H.L.S. and Simmons, R.G. (2003)
"VHPOP: Versatile Heuristic Partial Order Planner",
Volume 20, pages 405-430.
An online appendix (Source code for VHPOP version 2.2) is also available.
Abstract:
VHPOP is a partial order causal link (POCL) planner loosely based on
UCPOP. It draws from the experience gained in the early to mid 1990's
on flaw selection strategies for POCL planning, and combines this with
more recent developments in the field of domain independent planning
such as distance based heuristics and reachability analysis. We
present an adaptation of the additive heuristic for plan space
planning, and modify it to account for possible reuse of existing
actions in a plan. We also propose a large set of novel flaw
selection strategies, and show how these can help us solve more
problems than previously possible by POCL planners. VHPOP also
supports planning with durative actions by incorporating standard
techniques for temporal constraint reasoning. We demonstrate that the
same heuristic techniques used to boost the performance of classical
POCL planning can be effective in domains with durative actions as
well. The result is a versatile heuristic POCL planner competitive
with established CSP-based and heuristic state space planners.
----------------------------------------------------------------
II. INSTRUCTIONS FOR OBTAINING ARTICLES
When an article is published in JAIR, it is immediately available from
our distribution sites. The article is also immediately announced and
posted on the USENET newsgroups comp.ai.jair.announce and
comp.ai.jair.papers. Periodically, we send out announcements like
this to our mailing list announcing recent papers, for those of you
who do not read the JAIR newsgroups. (Note that if you read the JAIR
newsgroups, you have the advantage of immediately being notified about
new papers.)
To obtain an article, there are a variety of access methods:
-- WWW: You can access our server through the web. The URL is:
http://www.jair.org/
-- Anonymous FTP from the site below:
CMU Machine: ftp.cs.cmu.edu directory: project/jair
-- Newsgroups: read comp.ai.jair.announce and comp.ai.jair.papers
To Remove Yourself from this Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed@isi.edu.
From jairmail at ISI.EDU Fri Apr 2 11:39:30 2004
From: jairmail at ISI.EDU (JAIRMAIL)
Date: Fri Apr 2 11:39:33 2004
Subject: [Jairsubscribers] JAIR Update
Message-ID: <200404021939.i32JdUn02918@nitro.isi.edu>
CONTENTS:
I. New JAIR Articles
II. Instructions for Obtaining Articles
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR
and describes how to access them. (If you wish to remove yourself from
this mailing list, see section II.)
Zhang, N.L. and Kocka, T. (2004)
"Effective Dimensions of Hierarchical Latent Class Models"
Wellman, M.P., Reeves, D.M., Lochner, K.M. and Vorobeychik, Y. (2004)
"Price Prediction in a Trading Agent Competition"
Monderer, D. and Tennenholtz, M. (2004)
"K-Implementation"
Stanley, K.O. and Miikkulainen, R. (2004)
"Competitive Coevolution through Evolutionary Complexification"
Park, J.D. and Darwiche, A. (2004)
"Complexity Results and Approximation Strategies for MAP Explanations"
Boutilier, C., Brafman, R.I., Domshlak, C., Hoos, H.H. and Poole, D. (2004)
"CP-nets: A Tool for Representing and Reasoning with
Conditional Ceteris Paribus Preference Statements"
Dixon, H.E., Ginsberg, M.L. and Parkes, A.J. (2004)
"Generalizing Boolean Satisfiability I: Background and Survey of
Existing Work"
Arieli, O., Denecker, M., Van Nuffelen, B. and Bruynooghe, M. (2004)
"Coherent Integration of Databases by Abductive Logic Programming"
Nederhof, M.J. and Satta, G. (2004)
"IDL-Expressions: A Formalism for Representing and Parsing Finite
Languages in Natural Language Processing"
Hnich, B., Smith, B.M. and Walsh, T. (2004)
"Dual Modelling of Permutation and Injection Problems"
Halpern, J.Y. and Koller, D. (2004)
"Representation Dependence in Probabilistic Inference"
Thompson, C.A., Goker, M.H. and Langley, P. (2004)
"A Personalized System for Conversational Recommendations"
----------------------------------------------------------------
I. New JAIR Articles
Zhang, N.L. and Kocka, T. (2004)
"Effective Dimensions of Hierarchical Latent Class Models",
Volume 21, pages 1-17.
Abstract:
Hierarchical latent class (HLC) models are tree-structured Bayesian
networks where leaf nodes are observed while internal nodes are
latent. There are no theoretically well justified model selection
criteria for HLC models in particular and Bayesian networks with
latent nodes in general. Nonetheless, empirical studies suggest that
the BIC score is a reasonable criterion to use in practice for
learning HLC models. Empirical studies also suggest that sometimes
model selection can be improved if standard model dimension is replaced
with effective model dimension in the penalty term of the BIC score.
Effective dimensions are difficult to compute. In this paper, we prove
a theorem that relates the effective dimension of an HLC model to the
effective dimensions of a number of latent class models. The theorem
makes it computationally feasible to compute the effective dimensions
of large HLC models. The theorem can also be used to compute the
effective dimensions of general tree models.
Wellman, M.P., Reeves, D.M., Lochner, K.M. and Vorobeychik, Y. (2004)
"Price Prediction in a Trading Agent Competition",
Volume 21, pages 19-36.
Abstract:
The 2002 Trading Agent Competition (TAC) presented a challenging
market game in the domain of travel shopping. One of the pivotal
issues in this domain is uncertainty about hotel prices, which have a
significant influence on the relative cost of alternative trip
schedules. Thus, virtually all participants employ some method for
predicting hotel prices. We survey approaches employed in the
tournament, finding that agents apply an interesting diversity of
techniques, taking into account differing sources of evidence bearing
on prices. Based on data provided by entrants on their agents' actual
predictions in the TAC-02 finals and semifinals, we analyze the
relative efficacy of these approaches. The results show that taking
into account game-specific information about flight prices is a major
distinguishing factor. Machine learning methods effectively induce
the relationship between flight and hotel prices from game data, and a
purely analytical approach based on competitive equilibrium analysis
achieves equal accuracy with no historical data. Employing a new
measure of prediction quality, we relate absolute accuracy to
bottom-line performance in the game.
Monderer, D. and Tennenholtz, M. (2004)
"K-Implementation",
Volume 21, pages 37-62.
Abstract:
This paper discusses an interested party who wishes to influence the
behavior of agents in a game (multi-agent interaction), which is not
under his control. The interested party cannot design a new game,
cannot enforce agents' behavior, cannot enforce payments by the
agents, and cannot prohibit strategies available to the agents.
However, he can influence the outcome of the game by committing to
non-negative monetary transfers for the different strategy profiles
that may be selected by the agents. The interested party assumes that
agents are rational in the commonly agreed sense that they do not use
dominated strategies. Hence, a certain subset of outcomes is
implemented in a given game if by adding non-negative payments,
rational players will necessarily produce an outcome in this
subset. Obviously, by making sufficiently big payments one can
implement any desirable outcome. The question is what is the cost of
implementation? In this paper we introduce the notion of
k-implementation of a desired set of strategy profiles, where k stands
for the amount of payment that actually needs to be made in order to
implement desirable outcomes. A major point in k-implementation is
that monetary offers need not necessarily materialize when following
desired behaviors. We define and study k-implementation in the
contexts of games with complete and incomplete information. In the
latter case we mainly focus on the VCG games. Our setting is later
extended to deal with mixed strategies using correlation devices.
Together, the paper introduces and studies the implementation of
desirable outcomes by a reliable party who cannot modify game rules
(i.e. provide protocols), complementing previous work in mechanism
design, while making it more applicable to many realistic CS settings.
Stanley, K.O. and Miikkulainen, R. (2004)
"Competitive Coevolution through Evolutionary Complexification",
Volume 21, pages 63-100.
Abstract:
Two major goals in machine learning are the discovery and improvement
of solutions to complex problems. In this paper, we argue that
complexification, i.e. the incremental elaboration of solutions
through adding new structure, achieves both these goals. We
demonstrate the power of complexification through the NeuroEvolution
of Augmenting Topologies (NEAT) method, which evolves increasingly
complex neural network architectures. NEAT is applied to an
open-ended coevolutionary robot duel domain where robot controllers
compete head to head. Because the robot duel domain supports a wide
range of strategies, and because coevolution benefits from an
escalating arms race, it serves as a suitable testbed for studying
complexification. When compared to the evolution of networks with
fixed structure, complexifying evolution discovers significantly more
sophisticated strategies. The results suggest that in order to
discover and improve complex solutions, evolution, and search in
general, should be allowed to complexify as well as optimize.
Park, J.D. and Darwiche, A. (2004)
"Complexity Results and Approximation Strategies for MAP Explanations",
Volume 21, pages 101-133.
Abstract:
MAP is the problem of finding a most probable instantiation of a
set of variables given evidence. MAP has always been perceived to
be significantly harder than the related problems of computing the
probability of a variable instantiation Pr, or the problem of
computing the most probable explanation (MPE). This paper
investigates the complexity of MAP in Bayesian networks.
Specifically, we show that MAP is complete for NP^PP and
provide further negative complexity results for algorithms based
on variable elimination. We also show that MAP remains hard even
when MPE and Pr become easy. For example, we show that MAP is
NP-complete when the networks are restricted to polytrees, and
even then cannot be effectively approximated.
Given the difficulty of computing MAP exactly, and the difficulty
of approximating MAP while providing useful guarantees on the
resulting approximation, we investigate best effort
approximations. We introduce a generic MAP approximation framework.
We provide two instantiations of the framework; one for networks which
are amenable to exact inference Pr, and one for networks for which
even exact inference is too hard. This allows MAP approximation on
networks that are too complex to even exactly solve the easier problems,
Pr and MPE. Experimental results indicate that using these
approximation algorithms provides much better solutions than standard
techniques, and provide accurate MAP estimates in many cases.
Boutilier, C., Brafman, R.I., Domshlak, C., Hoos, H.H. and Poole, D. (2004)
"CP-nets: A Tool for Representing and Reasoning with
Conditional Ceteris Paribus Preference Statements",
Volume 21, pages 135-191.
Abstract:
Information about user preferences plays a key role in automated
decision making. In many domains it is desirable to assess such
preferences in a qualitative rather than quantitative way. In this
paper, we propose a qualitative graphical representation of
preferences that reflects conditional dependence and independence of
preference statements under a ceteris paribus (all else being equal)
interpretation. Such a representation is often compact and arguably
quite natural in many circumstances. We provide a formal semantics for
this model, and describe how the structure of the network can be
exploited in several inference tasks, such as determining whether one
outcome dominates (is preferred to) another, ordering a set of outcomes
according to the preference relation, and constructing the best
outcome subject to available evidence.
Dixon, H.E., Ginsberg, M.L. and Parkes, A.J. (2004)
"Generalizing Boolean Satisfiability I: Background and Survey of Existing Work",
Volume 21, pages 193-243.
Abstract:
This is the first of three planned papers describing ZAP, a
satisfiability engine that substantially generalizes existing tools
while retaining the performance characteristics of modern
high-performance solvers. The fundamental idea underlying ZAP is that
many problems passed to such engines contain rich internal structure
that is obscured by the Boolean representation used; our goal is to
define a representation in which this structure is apparent and can
easily be exploited to improve computational performance. This paper
is a survey of the work underlying ZAP, and discusses previous
attempts to improve the performance of the
Davis-Putnam-Logemann-Loveland algorithm by exploiting the structure
of the problem being solved. We examine existing ideas including
extensions of the Boolean language to allow cardinality constraints,
pseudo-Boolean representations, symmetry, and a limited form of
quantification. While this paper is intended as a survey, our
research results are contained in the two subsequent articles, with
the theoretical structure of ZAP described in the second paper in this
series, and ZAP's implementation described in the third.
Arieli, O., Denecker, M., Van Nuffelen, B. and Bruynooghe, M. (2004)
"Coherent Integration of Databases by Abductive Logic Programming",
Volume 21, pages 245-286.
Abstract:
We introduce an abductive method for a coherent integration of
independent data-sources. The idea is to compute a list of data-facts
that should be inserted to the amalgamated database or retracted from
it in order to restore its consistency. This method is implemented by
an abductive solver, called Asystem, that applies SLDNFA-resolution on
a meta-theory that relates different, possibly contradicting, input
databases. We also give a pure model-theoretic analysis of the possible
ways to `recover' consistent data from an inconsistent database in terms
of those models of the database that exhibit as minimal inconsistent
information as reasonably possible. This allows us to characterize the
`recovered databases' in terms of the `preferred' (i.e., most consistent)
models of the theory. The outcome is an abductive-based application that
is sound and complete with respect to a corresponding model-based,
preferential semantics, and -- to the best of our knowledge -- is more
expressive (thus more general) than any other implementation of coherent
integration of databases.
Nederhof, M.J. and Satta, G. (2004)
"IDL-Expressions: A Formalism for Representing and Parsing Finite Languages in Natural Language Processing",
Volume 21, pages 287-317.
Abstract:
We propose a formalism for representation of finite languages,
referred to as the class of IDL-expressions, which combines concepts
that were only considered in isolation in existing formalisms. The
suggested applications are in natural language processing, more
specifically in surface natural language generation and in machine
translation, where a sentence is obtained by first generating a large
set of candidate sentences, represented in a compact way, and then by
filtering such a set through a parser. We study several formal
properties of IDL-expressions and compare this new formalism with more
standard ones. We also present a novel parsing algorithm for
IDL-expressions and prove a non-trivial upper bound on its time
complexity.
Hnich, B., Smith, B.M. and Walsh, T. (2004)
"Dual Modelling of Permutation and Injection Problems",
Volume 21, pages 357-391.
Abstract:
When writing a constraint program, we have to choose which variables
should be the decision variables, and how to represent the constraints
on these variables. In many cases, there is considerable choice for
the decision variables. Consider, for example, permutation problems
in which we have as many values as variables, and each variable takes
a unique value. In such problems, we can choose between a primal
and a dual viewpoint. In the dual viewpoint, each dual variable
represents one of the primal values, whilst each dual value represents
one of the primal variables. Alternatively, by means of channelling
constraints to link the primal and dual variables, we can have a
combined model with both sets of variables. In this paper, we perform
an extensive theoretical and empirical study of such primal, dual and
combined models for two classes of problems: permutation problems and
injection problems. Our results show that it can often be advantageous
use multiple viewpoints, and to have constraints which channel
between them to maintain consistency. They also illustrate a general
methodology for comparing different constraint models.
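The primal/dual channelling idea described above can be sketched in a few
lines of Python (a toy illustration only, not the constraint models the
authors benchmark; the helper names `channelled` and `consistent` are ours):

```python
def channelled(primal):
    """Given a primal assignment (variable i -> value primal[i]) of a
    permutation problem, build the dual assignment via the channelling
    constraint x[i] = j  <=>  d[j] = i (the inverse permutation)."""
    dual = [None] * len(primal)
    for var, val in enumerate(primal):
        dual[val] = var
    return dual

def consistent(primal, dual):
    """Check that the channelling constraints hold between viewpoints."""
    return all(dual[val] == var for var, val in enumerate(primal))

primal = [2, 0, 3, 1]      # variable 0 takes value 2, etc.
dual = channelled(primal)  # value 2 is taken by variable 0, etc.
assert consistent(primal, dual)
print(dual)                # the dual viewpoint is the inverse permutation
```

In a combined model, a solver would maintain both variable sets and
propagate through the channelling constraints to keep them consistent.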
Halpern, J.Y. and Koller, D. (2004)
"Representation Dependence in Probabilistic Inference",
Volume 21, pages 319-356.
For quick access go to
Abstract:
Non-deductive reasoning systems are often representation dependent:
representing the same situation in two different ways may cause such a
system to return two different answers. Some have viewed this as a
significant problem. For example, the principle of maximum entropy has
been subjected to much criticism due to its representation dependence.
There has, however, been almost no work investigating representation
dependence. In this paper, we formalize this notion and show that it
is not a problem specific to maximum entropy. In fact, we show that
any representation-independent probabilistic inference procedure that
ignores irrelevant information is essentially entailment, in a precise
sense. Moreover, we show that representation independence is
incompatible with even a weak default assumption of independence. We
then show that invariance under a restricted class of representation
changes can form a reasonable compromise between representation
independence and other desiderata, and provide a construction of a
family of inference procedures that provides such restricted
representation independence, using relative entropy.
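The representation dependence of maximum entropy can be seen in a tiny
worked example (ours, not from the paper): with no constraints, maximum
entropy makes the outcome space uniform, so how finely we carve up the
space changes the probability assigned to the same proposition.

```python
def max_entropy_uniform(outcomes):
    """With no constraints, the maximum-entropy distribution over a
    finite outcome space is the uniform distribution."""
    p = 1.0 / len(outcomes)
    return {o: p for o in outcomes}

# Two representations of the same state of ignorance about "red":
coarse = max_entropy_uniform(["red", "not-red"])
fine = max_entropy_uniform(["red", "blue", "green"])

print(coarse["red"])   # 1/2 under the coarse representation
print(fine["red"])     # 1/3 once "not-red" is split into two colours
```

The inference about "red" changes merely because the language was
refined, which is exactly the kind of dependence the paper formalizes.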
Thompson, C.A., Goker, M.H. and Langley, P. (2004)
"A Personalized System for Conversational Recommendations",
Volume 21, pages 393-428.
For quick access go to
Abstract:
Searching for and making decisions about information is becoming
increasingly difficult as the amount of information and number of
choices increases. Recommendation systems help users find items of
interest of a particular type, such as movies or restaurants, but are
still somewhat awkward to use. Our solution is to take advantage of
the complementary strengths of personalized recommendation systems and
dialogue systems, creating personalized aides. We present a
system -- the Adaptive Place Advisor -- that treats item selection as an
interactive, conversational process, with the program inquiring about
item attributes and the user responding. Individual, long-term user
preferences are unobtrusively obtained in the course of normal
recommendation dialogues and used to direct future conversations with
the same user. We present a novel user model that influences both
item search and the questions asked during a conversation. We
demonstrate the effectiveness of our system in significantly reducing
the time and number of interactions required to find a satisfactory
item, as compared to a control group of users interacting with a
non-adaptive version of the system.
----------------------------------------------------------------
II. INSTRUCTIONS FOR OBTAINING ARTICLES
When an article is published in JAIR, it is immediately available from
our distribution sites. The article is also immediately announced and
posted on the USENET newsgroups comp.ai.jair.announce and
comp.ai.jair.papers. Periodically, we send out announcements like
this to our mailing list announcing recent papers, for those of you
who do not read the JAIR newsgroups. (Note that if you read the JAIR
newsgroups, you have the advantage of immediately being notified about
new papers.)
To obtain an article, there are a variety of access methods:
-- WWW: You can access our server through the web. The URL is:
http://www.jair.org/
-- Anonymous FTP from the site below:
CMU Machine: ftp.cs.cmu.edu directory: project/jair
-- Newsgroups: read comp.ai.jair.announce and comp.ai.jair.papers
To Remove Yourself from this Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed@isi.edu.
From jairmail at ISI.EDU Sat Jul 24 14:24:28 2004
From: jairmail at ISI.EDU (JAIRMAIL)
Date: Sat Jul 24 14:25:08 2004
Subject: [Jairsubscribers] JAIR Update
Message-ID: <200407242124.i6OLOSM10804@nitro.isi.edu>
CONTENTS:
I. New JAIR Articles
II. Instructions for Obtaining Articles and Unsubscribing
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR
and describes how to access them. (If you wish to remove yourself from
this mailing list, see section II.)
Zhang, W.
"Phase Transitions and Backbones of the Asymmetric Traveling Salesman
Problem"
Keppens, J. and Shen, Q.
"Compositional Model Repositories via Dynamic Constraint Satisfaction
with Order-of-Magnitude Preferences"
Liberatore, P.
"On Polynomial Sized MDP Succinct Policies"
Borodin, A., El-Yaniv, R. and Gogan, V.
"Can We Learn to Beat the Best Stock"
Babaioff, M. and Nisan, N.
"Concurrent Auctions Across The Supply Chain"
Felner, A., Stern, R., Ben-Yair, A., Kraus, S. and Netanyahu, N.
"PHA*: Finding the Shortest Path with A* in An Unknown Physical
Environment"
----------------------------------------------------------------
I. New JAIR Articles
Zhang, W. (2004)
"Phase Transitions and Backbones of the Asymmetric Traveling Salesman
Problem", Volume 21, pages 471-497.
For quick access go to
Abstract:
In recent years, there has been much interest in phase transitions of
combinatorial problems. Phase transitions have been successfully used
to analyze combinatorial optimization problems, characterize their
typical-case features and locate the hardest problem instances. In
this paper, we study phase transitions of the asymmetric Traveling
Salesman Problem (ATSP), an NP-hard combinatorial optimization problem
that has many real-world applications. Using random instances of up
to 1,500 cities in which intercity distances are uniformly
distributed, we empirically show that many properties of the problem,
including the optimal tour cost and backbone size, experience sharp
transitions as the precision of intercity distances increases across a
critical value. Our experimental results on the costs of the ATSP
tours and assignment problem agree with the theoretical result that
the asymptotic cost of the assignment problem is pi^2/6 as the number
of cities goes to infinity. In addition, we show that the average
computational cost of the well-known branch-and-bound subtour
elimination algorithm for the problem also exhibits a thrashing
behavior, transitioning from easy to difficult as the distance
precision increases. These results answer positively an open question
regarding the existence of phase transitions in the ATSP, and provide
guidance on how difficult ATSP problem instances should be generated.
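The pi^2/6 (about 1.645) limit for the assignment problem mentioned above
can be probed with a small Monte Carlo sketch (ours; brute force stands in
for the branch-and-bound machinery the paper actually uses):

```python
import random
from itertools import permutations

def optimal_assignment_cost(cost):
    """Brute-force minimum-cost assignment (feasible only for small n)."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

random.seed(0)
n, trials = 6, 300
avg = sum(
    optimal_assignment_cost([[random.random() for _ in range(n)]
                             for _ in range(n)])
    for _ in range(trials)) / trials
# pi^2/6 ~ 1.645 is the n -> infinity limit; small n and a modest number
# of trials give only a rough approximation of it.
print(round(avg, 3))
```

The empirical average for uniform intercity costs already sits in the
right neighbourhood at n = 6, consistent with the asymptotic result.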
Keppens, J. and Shen, Q. (2004)
"Compositional Model Repositories via Dynamic Constraint Satisfaction
with Order-of-Magnitude Preferences", Volume 21, pages 499-550.
For quick access go to
Abstract:
The predominant knowledge-based approach to automated model
construction, compositional modelling, employs a set of models of
particular functional components. Its inference mechanism takes a
scenario describing the constituent interacting components of a system
and translates it into a useful mathematical model. This paper
presents a novel compositional modelling approach aimed at building
model repositories. It furthers the field in two respects. Firstly,
it expands the application domain of compositional modelling to
systems that cannot be easily described in terms of interacting
functional components, such as ecological systems. Secondly, it
enables the incorporation of user preferences into the model selection
process. These features are achieved by casting the compositional
modelling problem as an activity-based dynamic preference constraint
satisfaction problem, where the dynamic constraints describe the
restrictions imposed over the composition of partial models and the
preferences correspond to those of the user of the automated modeller.
In addition, the preference levels are represented through the use of
symbolic values that differ in orders of magnitude.
Liberatore, P. (2004)
"On Polynomial Sized MDP Succinct Policies", Volume 21, pages 551-577.
For quick access go to
Abstract:
Policies of Markov Decision Processes (MDPs) determine the
next action to execute from the current state and, possibly,
the history (the past states). When the number of states is
large, succinct representations are often used to compactly
represent both the MDPs and the policies in a reduced amount
of space. In this paper, some problems related to the size
of succinctly represented policies are analyzed. Namely, it
is shown that some MDPs have policies that can only be
represented in space super-polynomial in the size of the
MDP, unless the polynomial hierarchy collapses. This fact
motivates the study of the problem of deciding whether a
given MDP has a policy of a given size and reward. Since
some algorithms for MDPs work by finding a succinct
representation of the value function, the problem of
deciding the existence of a succinct representation of a
value function of a given size and reward is also
considered.
Borodin, A., El-Yaniv, R. and Gogan, V. (2004)
"Can We Learn to Beat the Best Stock", Volume 21, pages 579-594.
For quick access go to
Abstract:
A novel algorithm for actively trading stocks is presented. While
traditional expert advice and ``universal'' algorithms
(as well as standard technical trading heuristics) attempt to predict
winners or trends, our approach relies on predictable statistical
relations between all pairs of stocks in the market. Our empirical
results on historical markets provide strong evidence that this type
of technical trading can ``beat the market'' and moreover, can beat
the best stock in the market. In doing so we utilize a new idea for
smoothing critical parameters in the context of expert learning.
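The "best stock in the market" benchmark referred to above is the wealth
of the single best buy-and-hold strategy in hindsight; it can be computed
from per-period price relatives as in this sketch (hypothetical data, and
not the paper's trading algorithm):

```python
def best_stock_wealth(price_relatives):
    """Wealth of buying and holding each single stock, given a sequence
    of per-period price relatives (price_t / price_{t-1}) per stock;
    the 'best stock' benchmark is the maximum over stocks."""
    n_stocks = len(price_relatives[0])
    wealth = [1.0] * n_stocks
    for period in price_relatives:
        for j, r in enumerate(period):
            wealth[j] *= r
    return max(wealth)

# Hypothetical two-stock market over three periods.
x = [[1.10, 0.95],
     [1.05, 1.20],
     [0.90, 1.10]]
print(best_stock_wealth(x))   # wealth of the better buy-and-hold strategy
```

An active algorithm "beats the best stock" when its final wealth exceeds
this quantity on the same sequence.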
Babaioff, M. and Nisan, N. (2004)
"Concurrent Auctions Across The Supply Chain",
Volume 21, pages 595-629.
For quick access go to
Abstract:
With the recent technological feasibility of electronic commerce over
the Internet, much attention has been given to the design of
electronic markets for various types of electronically-tradable goods.
Such markets, however, will normally need to function in some
relationship with markets for other related goods, usually those
downstream or upstream in the supply chain. Thus, for example, an
electronic market for rubber tires for trucks will likely need to be
strongly influenced by the rubber market as well as by the truck
market.
In this paper we design protocols for exchange of information
between a sequence of markets along a single supply chain. These
protocols allow each of these markets to function separately, while
the information exchanged ensures efficient global behavior across the
supply chain. Each market that forms a link in the supply chain
operates as a double auction, where the bids on one side of the double
auction come from bidders in the corresponding segment of the
industry, and the bids on the other side are synthetically generated
by the protocol to express the combined information from all other
links in the chain. The double auctions in each of the markets can be
of several types, and we study several variants of incentive
compatible double auctions, comparing them in terms of their
efficiency and of the market revenue.
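The efficient clearing baseline for a single double auction can be
sketched as follows (our toy version of one standard rule; the paper's
contribution is the incentive-compatible variants and the synthetic bids
that link markets along the chain, which are not shown here):

```python
def clear_double_auction(buy_bids, sell_bids):
    """Sort buyers descending and sellers ascending, then trade the
    largest k for which the k-th buy bid still meets the k-th sell bid."""
    buys = sorted(buy_bids, reverse=True)
    sells = sorted(sell_bids)
    k = 0
    while k < min(len(buys), len(sells)) and buys[k] >= sells[k]:
        k += 1
    return k, buys[:k], sells[:k]

k, winning_buys, winning_sells = clear_double_auction(
    buy_bids=[10, 8, 6, 2], sell_bids=[1, 3, 7, 9])
print(k)   # two trades clear: buys 10, 8 against sells 1, 3
```

In the supply-chain setting, one side of each such auction would be
replaced by bids synthesized from the other markets' information.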
Felner, A., Stern, R., Ben-Yair, A., Kraus, S. and Netanyahu, N. (2004)
"PHA*: Finding the Shortest Path with A* in An Unknown Physical
Environment", Volume 21, pages 631-670.
For quick access go to
Abstract:
We address the problem of finding the shortest path between two points
in an unknown real physical environment, where a traveling agent must
move around in the environment to explore unknown territory. We
introduce the Physical-A* algorithm (PHA*) for solving this
problem. PHA* expands all the mandatory nodes that A* would expand and
returns the shortest path between the two points. However, due to the
physical nature of the problem, the complexity of the algorithm is
measured by the traveling effort of the moving agent and not by the
number of generated nodes, as in standard A*. PHA* is presented as a
two-level algorithm, such that its high level, A*, chooses the next
node to be expanded and its low level directs the agent to that node
in order to explore it. We present a number of variations for both
the high-level and low-level procedures and evaluate their performance
theoretically and experimentally. We show that the travel cost of our
best variation is fairly close to the optimal travel cost, assuming
that the mandatory nodes of A* are known in advance. We then
generalize our algorithm to the multi-agent case, where a number of
cooperative agents are designed to solve the problem. Specifically,
we provide an experimental implementation for such a system. It
should be noted that the problem addressed here is not a navigation
problem, but rather a problem of finding the shortest path between two
points for future usage.
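PHA*'s high level is standard A*; a minimal grid A* sketch is below (ours,
showing only the search layer; PHA*'s low level, which physically moves
the agent to each node before it can be expanded, is not modelled here):

```python
import heapq

def astar(start, goal, passable):
    """Minimal A* on a 4-connected grid with the Manhattan-distance
    heuristic; returns the shortest path length, or None."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if passable(nxt) and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

# 5x5 open grid: the corner-to-corner shortest path has length 8.
inside = lambda p: 0 <= p[0] < 5 and 0 <= p[1] < 5
print(astar((0, 0), (4, 4), inside))   # -> 8
```

In PHA* the cost that matters is the agent's travel while gathering the
information this search consumes, not the number of nodes generated.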
----------------------------------------------------------------
II. INSTRUCTIONS FOR OBTAINING ARTICLES
When an article is published in JAIR, it is immediately available from
our distribution sites. The article is also immediately announced and
posted on the USENET newsgroups comp.ai.jair.announce and
comp.ai.jair.papers. Periodically, we send out announcements like
this to our mailing list announcing recent papers, for those of you
who do not read the JAIR newsgroups. (Note that if you read the JAIR
newsgroups, you have the advantage of immediately being notified about
new papers.)
To obtain an article, there are a variety of access methods:
-- WWW: You can access our server through the web. The URL is:
http://www.jair.org/
-- Anonymous FTP from the site below:
CMU Machine: ftp.cs.cmu.edu directory: project/jair
-- Newsgroups: read comp.ai.jair.announce and comp.ai.jair.papers
To Remove Yourself from this Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed@isi.edu.
From jairmail at ISI.EDU Mon Nov 8 10:13:34 2004
From: jairmail at ISI.EDU (JAIRMAIL)
Date: Mon Nov 8 10:14:33 2004
Subject: [Jairsubscribers] JAIR Update
Message-ID: <200411081813.iA8IDY120434@nitro.isi.edu>
CONTENTS:
I. New JAIR Articles
II. Instructions for Obtaining Articles
III. News: 2004 IJCAII-JAIR Best Paper Prize awarded to Ygge and Akkermans
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR
and describes how to access them. (If you wish to remove yourself from
this mailing list, see section II.)
----------------------------------------------------------------
I. New JAIR Articles
Cohen, D., Cooper, M., Jeavons, P. and Krokhin, A. (2004)
"A Maximal Tractable Class of Soft Constraints",
Volume 22, pages 1-22.
For quick access go to
Abstract:
Many researchers in artificial intelligence are beginning to explore the
use of soft constraints to express a set of (possibly conflicting) problem
requirements. A soft constraint is a function defined on a collection of
variables which associates some measure of desirability with each possible
combination of values for those variables. However, the crucial question of
the computational complexity of finding the optimal solution to a collection
of soft constraints has so far received very little attention. In this paper
we identify a class of soft binary constraints for which the problem of
finding the optimal solution is tractable. In other words, we show that for
any given set of such constraints, there exists a polynomial time algorithm
to determine the assignment having the best overall combined measure of
desirability. This tractable class includes many commonly-occurring soft
constraints, such as 'as near as possible' or 'as soon as possible after',
as well as crisp constraints such as 'greater than'. Finally, we show that
this tractable class is maximal, in the sense that adding any other form of
soft binary constraint which is not in the class gives rise to a class of
problems which is NP-hard.
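The soft-constraint setting above can be made concrete with a brute-force
sketch (ours): each constraint scores pairs of values, and the general
problem is to find the assignment with the best combined score. The
paper's point is that for its tractable class this exponential search can
be replaced by a polynomial-time algorithm.

```python
from itertools import product

def optimal(domains, penalties):
    """Exhaustively find the assignment minimizing total penalty.
    penalties: {(i, j): f} where f(vi, vj) scores the values of
    variables i and j (lower is more desirable)."""
    best_cost, best_asg = float("inf"), None
    for asg in product(*domains):
        cost = sum(f(asg[i], asg[j]) for (i, j), f in penalties.items())
        if cost < best_cost:
            best_cost, best_asg = cost, asg
    return best_asg, best_cost

# 'As near as possible' between x0 and x1; x1 'greater than' x2 (crisp
# constraints are modelled with an infinite penalty when violated).
domains = [range(5), range(5), range(5)]
penalties = {
    (0, 1): lambda a, b: abs(a - b),                   # as near as possible
    (1, 2): lambda a, b: 0 if a > b else float("inf"),  # greater than
}
asg, cost = optimal(domains, penalties)
print(asg, cost)
```

Both example constraints fall inside the tractable class the paper
identifies, so they admit far better algorithms than this enumeration.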
Dubois, D., Fargier, H. and Prade, H. (2004)
"Ordinal and Probabilistic Representations of Acceptance",
Volume 22, pages 23-56.
For quick access go to
Abstract:
An accepted belief is a proposition considered likely enough by an
agent, to be inferred from as if it were true. This paper bridges the
gap between probabilistic and logical representations of accepted
beliefs. To this end, natural properties of relations on propositions,
describing relative strength of belief are augmented with some
conditions ensuring that accepted beliefs form a deductively closed
set. This requirement turns out to be very restrictive. In particular,
it is shown that the sets of accepted beliefs of an agent can always be
derived from a family of possibility rankings of states. An agent
accepts a proposition in a given context if this proposition is
considered more possible than its negation in this context, for all
possibility rankings in the family. These results are closely
connected to the non-monotonic 'preferential' inference system of
Kraus, Lehmann and Magidor and the so-called plausibility functions of
Friedman and Halpern. The extent to which probability theory is
compatible with acceptance relations is laid bare. A solution to the
lottery paradox, which is considered as a major impediment to the use
of non-monotonic inference is proposed using a special kind of
probabilities (called lexicographic, or big-stepped). The setting of
acceptance relations also proposes another way of approaching the
theory of belief change after the works of Gärdenfors and
colleagues. Our view considers the acceptance relation as a primitive
object from which belief sets are derived in various contexts.
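The lottery paradox and the big-stepped remedy mentioned above can be
illustrated numerically (our sketch, using the acceptance rule "accept A
iff P(A) > P(not A)" over a small outcome space):

```python
from itertools import chain, combinations

def accepted(p):
    """All events A with P(A) > P(not A), i.e. P(A) > 1/2,
    over the powerset of a finite outcome space p."""
    outcomes = list(p)
    events = chain.from_iterable(
        combinations(outcomes, r) for r in range(len(outcomes) + 1))
    return [set(e) for e in events if sum(p[o] for o in e) > 0.5]

# Uniform 5-ticket lottery: each "ticket i loses" event is accepted,
# yet the conjunction (intersection) of all accepted events is empty.
uniform = {i: 1 / 5 for i in range(5)}
acc = accepted(uniform)
assert all(set(uniform) - {i} in acc for i in uniform)
meet = set(uniform)
for a in acc:
    meet &= a
print(meet)   # empty set: the lottery paradox

# Big-stepped probabilities: each outcome outweighs all less probable
# ones combined, so an event is accepted iff it contains the top
# outcome, and the accepted set is deductively closed.
big = {0: 16 / 31, 1: 8 / 31, 2: 4 / 31, 3: 2 / 31, 4: 1 / 31}
assert all(0 in a for a in accepted(big))
```

With the big-stepped distribution the accepted beliefs are exactly the
supersets of the top outcome, so closure under conjunction holds and the
paradox dissolves.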
Meek, C.J. and Birmingham, W.P. (2004)
"A Comprehensive Trainable Error Model for Sung Music Queries",
Volume 22, pages 57-91.
For quick access go to
Abstract:
We propose a model for errors in sung queries, a variant of the
hidden Markov model (HMM). This is a solution to the problem of
identifying the degree of similarity between a (typically
error-laden) sung query and a potential target in a
database of musical works, an important problem in the field of
music information retrieval. Similarity metrics are a critical
component of `query-by-humming' (QBH) applications which search
audio and multimedia databases for strong matches to oral queries.
Our model comprehensively expresses the types of {m error} or
variation between target and query: cumulative and non-cumulative
local errors, transposition, tempo and tempo changes, insertions,
deletions and modulation. The model is not only expressive, but
automatically trainable, or able to learn and generalize from
query examples. We present results of simulations, designed to
assess the discriminatory potential of the model, and tests with
real sung queries, to demonstrate relevance to real-world
applications.
Chockler, H. and Halpern, J.Y. (2004)
"Responsibility and Blame: A Structural-Model Approach",
Volume 22, pages 93-115.
For quick access go to
Abstract:
Causality is typically treated as an all-or-nothing concept; either A is a
cause of B or it is not. We extend the definition of causality
introduced by Halpern and Pearl [2004a] to take into account the
degree of responsibility of A for B. For example, if someone wins an
election 11-0, then each person who votes for him is less responsible
for the victory than if he had won 6-5. We then define a notion of
degree of blame, which takes into account an agent's epistemic state.
Roughly speaking, the degree of blame of A for B is the expected
degree of responsibility of A for B, taken over the epistemic state of
an agent.
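For the voting example above, the paper's degree of responsibility is
1/(k+1), where k is the minimal number of changes needed to make the
vote critical. For a simple majority this reduces to a one-line
computation (our sketch of that special case):

```python
def voter_responsibility(votes_for, votes_against):
    """Degree of responsibility 1/(k+1) of one winning voter under
    simple majority, where k is the minimal number of other votes that
    must flip before that voter's own vote becomes critical."""
    # The voter is critical once the margin votes_for - votes_against
    # equals 1; each flipped vote shrinks the margin by 2.
    k = max(0, (votes_for - votes_against - 1) // 2)
    return 1 / (k + 1)

print(voter_responsibility(11, 0))   # 11-0 victory: five flips needed
print(voter_responsibility(6, 5))    # 6-5 victory: already critical
```

In the 11-0 election each supporter bears responsibility 1/6, versus
full responsibility 1 in the 6-5 case, matching the intuition the
abstract describes.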
Derbeko, P., El-Yaniv, R. and Meir, R. (2004)
"Explicit Learning Curves for Transduction and Application to Clustering
and Compression Algorithms", Volume 22, pages 117-142.
For quick access go to
Abstract:
Inductive learning is based on inferring a general rule from a finite
data set and using it to label new data. In transduction one attempts
to solve the problem of using a labeled training set to label a set of
unlabeled points, which are given to the learner prior to
learning. Although transduction seems at the outset to be an easier
task than induction, there have not been many provably useful
algorithms for transduction. Moreover, the precise relation between
induction and transduction has not yet been determined. The main
theoretical developments related to transduction were presented by
Vapnik more than twenty years ago. One of Vapnik's basic results is a
rather tight error bound for transductive classification based on an
exact computation of the hypergeometric tail. While tight, this bound
is given implicitly via a computational routine. Our first
contribution is a somewhat looser but explicit characterization of a
slightly extended PAC-Bayesian version of Vapnik's transductive
bound. This characterization is obtained using concentration
inequalities for the tail of sums of random variables obtained by
sampling without replacement. We then derive error bounds for
compression schemes such as (transductive) support vector machines and
for transduction algorithms based on clustering. The main observation
used for deriving these new error bounds and algorithms is that the
unlabeled test points, which in the transductive setting are known in
advance, can be used in order to construct useful data dependent prior
distributions over the hypothesis space.
Goldman, C.V. and Zilberstein, S. (2004)
"Decentralized Control of Cooperative Systems: Categorization and
Complexity Analysis", Volume 22, pages 143-174.
For quick access go to
Abstract:
Decentralized control of cooperative systems captures the operation of
a group of decision makers that share a single global objective. The
difficulty in solving such problems optimally arises when the agents
lack full observability of the global state of the system when they
operate. The general problem has been shown to be NEXP-complete. In
this paper, we identify classes of decentralized control problems
whose complexity ranges between NEXP and P. In particular, we study
problems characterized by independent transitions, independent
observations, and goal-oriented objective functions. Two algorithms
are shown to solve optimally useful classes of goal-oriented
decentralized processes in polynomial time. This paper also studies
information sharing among the decision-makers, which can improve their
performance. We distinguish between three ways in which agents can
exchange information: indirect communication, direct communication and
sharing state features that are not controlled by the agents. Our
analysis shows that for every class of problems we consider,
introducing direct or indirect communication does not change the
worst-case complexity. The results provide a better understanding of
the complexity of decentralized control problems that arise in
practice and facilitate the development of planning algorithms for
these problems.
----------------------------------------------------------------
II. INSTRUCTIONS FOR OBTAINING ARTICLES
When an article is published in JAIR, it is immediately available from
our distribution sites. The article is also immediately announced and
posted on the USENET newsgroups comp.ai.jair.announce and
comp.ai.jair.papers. Periodically, we send out announcements like
this to our mailing list announcing recent papers, for those of you
who do not read the JAIR newsgroups. (Note that if you read the JAIR
newsgroups, you have the advantage of immediately being notified about
new papers.)
To obtain an article, there are a variety of access methods:
-- WWW: You can access our server through the web. The URL is:
http://www.jair.org/
-- Anonymous FTP from the site below:
CMU Machine: ftp.cs.cmu.edu directory: project/jair
-- Newsgroups: read comp.ai.jair.announce and comp.ai.jair.papers
To Remove Yourself from this Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed@isi.edu.
----------------------------------------------------------------
III. News: IJCAII-JAIR Best Paper Award
The winner of the 2004 IJCAII-JAIR Best Paper Prize is:
Ygge, F. and Akkermans, H. (1999)
"Decentralized Markets versus Central Control: A Comparative Study",
Volume 11, pages 301-333.
The following paper was selected for Honorable Mention:
Fox, D., Burgard, W. and Thrun, S. (1999)
"Markov Localization for Mobile Robots in Dynamic Environments",
Volume 11, pages 391-427.
The Annual IJCAII-JAIR Best Paper Prize is awarded to an outstanding
paper published in JAIR in the preceding five calendar years. For
2004, papers published between 1999 and 2003 (inclusive) were
eligible. The prize committee is comprised of associate editors and
members of the JAIR Advisory Board; their decision is based on both
the significance of the paper and the quality of presentation. The
recipient(s) of the award receives a prize of US$500 (to be split
amongst the authors of a co-authored paper). Funding for this award
was provided by the International Joint Conference on Artificial
Intelligence, Inc.