From jair-ed at isi.edu Wed May 30 16:35:07 2012
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Wed, 30 May 2012 18:35:07 -0500
Subject: [Jairsubscribers] 17 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been published in JAIR Volume 43 and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
F. Stulp, A. Fedrizzi, L. Mösenlechner and M. Beetz (2012)
"Learning and Reasoning with Action-Related Places for Robust Mobile Manipulation",
Volume 43, pages 1-42
For quick access go to
Abstract:
We propose the concept of Action-Related Place (ARPlace) as a powerful and flexible representation of task-related place in the context of mobile manipulation. ARPlace represents robot base locations not as a single position, but rather as a collection of positions, each with an associated probability that the manipulation action will succeed when located there. ARPlaces are generated using a predictive model that is acquired through experience-based learning, and take into account the uncertainty the robot has about its own location and the location of the object to be manipulated.
When executing the task, rather than choosing one specific goal position based only on the initial knowledge about the task context, the robot instantiates an ARPlace, and bases its decisions on this ARPlace, which is updated as new information about the task becomes available. To show the advantages of this least-commitment approach, we present a transformational planner that reasons about ARPlaces in order to optimize symbolic plans. Our empirical evaluation demonstrates that using ARPlaces leads to more robust and efficient mobile manipulation in the face of state estimation uncertainty on our simulated robot.
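The core ARPlace idea, representing a base location not as one position but as a set of candidate positions each carrying a success probability, and revising the choice as new information arrives, can be sketched as follows. This is a toy model of ours, not the authors' implementation; all positions and probabilities are illustrative.

```python
# Toy sketch of an ARPlace-style representation: candidate (x, y) base
# positions each carry a probability that the manipulation succeeds there.

def best_base_position(arplace):
    """Pick the candidate position with the highest success probability."""
    return max(arplace, key=arplace.get)

# Candidate base positions with estimated success probabilities.
arplace = {(0.0, 0.5): 0.6, (0.2, 0.4): 0.8, (0.4, 0.3): 0.7}
assert best_base_position(arplace) == (0.2, 0.4)

# New information (e.g. a better object-pose estimate) updates the
# probabilities; the decision is revised rather than fixed up front,
# which is the least-commitment behaviour the abstract describes.
arplace[(0.4, 0.3)] = 0.9
assert best_base_position(arplace) == (0.4, 0.3)
```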
N. Fu, H.C. Lau, P. Varakantham and F. Xiao (2012)
"Robust Local Search for Solving RCPSP/max with Durational Uncertainty",
Volume 43, pages 43-86
For quick access go to
Abstract:
Scheduling problems in manufacturing, logistics and project management have frequently been modeled using the framework of Resource Constrained Project Scheduling Problems with minimum and maximum time lags (RCPSP/max). Due to the importance of these problems, providing scalable solution schedules for RCPSP/max problems is a topic of extensive research. However, all existing methods for solving RCPSP/max assume that durations of activities are known with certainty, an assumption that does not hold in real world scheduling problems where unexpected external events such as manpower availability, weather changes, etc. lead to delays or advances in completion of activities. Thus, in this paper, our focus is on providing a scalable method for solving RCPSP/max problems with durational uncertainty. To that end, we introduce the robust local search method consisting of three key ideas: (a) Introducing and studying the properties of two decision rule approximations used to compute start times of activities with respect to dynamic realizations of the durational uncertainty; (b) Deriving the expression for robust makespan of an execution strategy based on decision rule approximations; and (c) A robust local search mechanism to efficiently compute activity execution strategies that are robust against durational uncertainty. Furthermore, we also provide enhancements to local search that exploit temporal dependencies between activities. Our experimental results illustrate that robust local search is able to provide robust execution strategies efficiently.
A. Sadilek and H. Kautz (2012)
"Location-Based Reasoning about Complex Multi-Agent Behavior",
Volume 43, pages 87-133
For quick access go to
Abstract:
Recent research has shown that surprisingly rich models of human activity can be learned from GPS (positional) data. However, most effort to date has concentrated on modeling single individuals or statistical properties of groups of people. Moreover, prior work focused solely on modeling actual successful executions (and not failed or attempted executions) of the activities of interest. We, in contrast, take on the task of understanding human interactions, attempted interactions, and intentions from noisy sensor data in a fully relational multi-agent setting. We use a real-world game of capture the flag to illustrate our approach in a well-defined domain that involves many distinct cooperative and competitive joint activities. We model the domain using Markov logic, a statistical-relational language, and learn a theory that jointly denoises the data and infers occurrences of high-level activities, such as a player capturing an enemy. Our unified model combines constraints imposed by the geometry of the game area, the motion model of the players, and by the rules and dynamics of the game in a probabilistically and logically sound fashion. We show that while it may be impossible to directly detect a multi-agent activity due to sensor noise or malfunction, the occurrence of the activity can still be inferred by considering both its impact on the future behaviors of the people involved as well as the events that could have preceded it. Further, we show that given a model of successfully performed multi-agent activities, along with a set of examples of failed attempts at the same activities, our system automatically learns an augmented model that is capable of recognizing success and failure, as well as goals of people's actions with high accuracy. 
We compare our approach with other alternatives and show that our unified model, which takes into account not only relationships among individual players, but also relationships among activities over the entire length of a game, although more computationally costly, is significantly more accurate. Finally, we demonstrate that explicitly modeling unsuccessful attempts boosts performance on other important recognition tasks.
T. Flati and R. Navigli (2012)
"The CQC Algorithm: Cycling in Graphs to Semantically Enrich and Enhance a Bilingual Dictionary",
Volume 43, pages 135-171
For quick access go to
Abstract:
Bilingual machine-readable dictionaries are knowledge resources useful in many automatic tasks. However, compared to monolingual computational lexicons like WordNet, bilingual dictionaries typically provide a lower amount of structured information, such as lexical and semantic relations, and often do not cover the entire range of possible translations for a word of interest. In this paper we present Cycles and Quasi-Cycles (CQC), a novel algorithm for the automated disambiguation of ambiguous translations in the lexical entries of a bilingual machine-readable dictionary. The dictionary is represented as a graph, and cyclic patterns are sought in the graph to assign an appropriate sense tag to each translation in a lexical entry. Further, we use the algorithm's output to improve the quality of the dictionary itself, by suggesting accurate solutions to structural problems such as misalignments, partial alignments and missing entries. Finally, we successfully apply CQC to the task of synonym extraction.
G. Pesant, C. Quimper and A. Zanarini (2012)
"Counting-Based Search: Branching Heuristics for Constraint Satisfaction Problems",
Volume 43, pages 173-210
For quick access go to
Abstract:
Designing a search heuristic for constraint programming that is reliable across problem domains has been an important research topic in recent years. This paper concentrates on one family of candidates: counting-based search. Such heuristics seek to make branching decisions that preserve most of the solutions by determining what proportion of solutions to each individual constraint agree with that decision. Whereas most generic search heuristics in constraint programming rely on local information at the level of the individual variable, our search heuristics are based on more global information at the constraint level. We design several algorithms that are used to count the number of solutions to specific families of constraints and propose some search heuristics exploiting such information. The experimental part of the paper considers eight problem domains ranging from well-established benchmark puzzles to rostering and sport scheduling. An initial empirical analysis identifies heuristic maxSD as a robust candidate among our proposals. We then evaluate the latter against the state of the art, including the latest generic search heuristics, restarts, and discrepancy-based tree traversals. Experimental results show that counting-based search generally outperforms other generic heuristics.
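The maxSD idea can be illustrated with a brute-force sketch: for one constraint, estimate what fraction of its solutions assign a given value to a given variable (the solution density), and branch on the variable-value pair with the highest density. This enumeration is ours for illustration only; the paper's contribution is dedicated counting algorithms per constraint family that avoid enumeration.

```python
from itertools import product

def solution_density(constraint, domains):
    """Return {(var, val): density} for one constraint, by enumeration.
    constraint is a predicate over a {var: val} assignment dict."""
    vars_ = sorted(domains)
    sols = [dict(zip(vars_, t))
            for t in product(*(domains[v] for v in vars_))
            if constraint(dict(zip(vars_, t)))]
    dens = {}
    for s in sols:
        for v, val in s.items():
            dens[(v, val)] = dens.get((v, val), 0) + 1
    return {k: c / len(sols) for k, c in dens.items()}

# A small alldifferent constraint over three variables.
domains = {'x': [1, 2], 'y': [1, 2, 3], 'z': [3]}
alldiff = lambda a: len(set(a.values())) == len(a)
dens = solution_density(alldiff, domains)
branch = max(dens, key=dens.get)  # maxSD-style branching decision
assert branch == ('z', 3)         # every solution assigns z = 3
```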
Y. Zeng and P. Doshi (2012)
"Exploiting Model Equivalences for Solving Interactive Dynamic Influence Diagrams",
Volume 43, pages 211-255
For quick access go to
Abstract:
We focus on the problem of sequential decision making in partially observable environments shared with other agents of uncertain types having similar or conflicting objectives. This problem has been previously formalized by multiple frameworks, one of which is the interactive dynamic influence diagram (I-DID), which generalizes the well-known influence diagram to the multiagent setting. I-DIDs are graphical models and may be used to compute the policy of an agent given its belief over the physical state and others' models, which changes as the agent acts and observes in the multiagent setting.
As we may expect, solving I-DIDs is computationally hard. This is predominantly due to the large space of candidate models ascribed to the other agents and its exponential growth over time. We present two methods for reducing the size of the model space and stemming its exponential growth. Both these methods involve aggregating individual models into equivalence classes. Our first method groups together behaviorally equivalent models and selects only those models for updating which will result in predictive behaviors that are distinct from others in the updated model space. The second method further compacts the model space by focusing on portions of the behavioral predictions. Specifically, we cluster actionally equivalent models that prescribe identical actions at a single time step. Exactly identifying the equivalences would require us to solve all models in the initial set. We avoid this by selectively solving some of the models, thereby introducing an approximation. We discuss the error introduced by the approximation, and empirically demonstrate the improved efficiency in solving I-DIDs due to the equivalences.
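The aggregation step described above, grouping candidate models of the other agent into classes by the behaviour they predict and keeping one representative per class, can be sketched in miniature. The model ids and policies below are toy stand-ins of ours, not the paper's I-DID machinery.

```python
# Sketch of behavioural-equivalence grouping: two models belong to the
# same class when solving them yields the same predicted behaviour.

def behavioural_classes(models, solve):
    """models: iterable of model ids; solve(m) -> hashable policy.
    Returns {policy: [model ids predicting that policy]}."""
    classes = {}
    for m in models:
        classes.setdefault(solve(m), []).append(m)
    return classes

# Five candidate models, but only two distinct predicted behaviours,
# so the model space shrinks from five models to two representatives.
policies = {1: 'attack', 2: 'defend', 3: 'attack', 4: 'attack', 5: 'defend'}
cls = behavioural_classes(policies, policies.get)
assert sorted(len(v) for v in cls.values()) == [2, 3]
```

The paper's approximation comes from solving only some models to identify these classes; the sketch assumes all policies are already known.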
J.H.M. Lee and K. L. Leung (2012)
"Consistency Techniques for Flow-Based Projection-Safe Global Cost Functions in Weighted Constraint Satisfaction",
Volume 43, pages 257-292
For quick access go to
Abstract:
Many combinatorial problems deal with preferences and violations, the goal of which is to find solutions with the minimum cost. Weighted constraint satisfaction is a framework for modeling such problems, which consists of a set of cost functions to measure the degree of violation or preferences of different combinations of variable assignments. Typical solution methods for weighted constraint satisfaction problems (WCSPs) are based on branch-and-bound search, which are made practical through the use of powerful consistency techniques such as AC*, FDAC*, EDAC* to deduce hidden cost information and value pruning during search. These techniques, however, are designed to be efficient only on binary and ternary cost functions which are represented in table form. In tackling many real-life problems, high arity (or global) cost functions are required. We investigate efficient representation schemes and algorithms to bring the benefits of the consistency techniques also to high arity cost functions, which are often derived from hard global constraints from classical constraint satisfaction.
The literature suggests some global cost functions can be represented as flow networks, and the minimum cost flow algorithm can be used to compute the minimum costs of such networks in polynomial time. We show that naive adoption of this flow-based algorithmic method for global cost functions can result in a stronger form of null-inverse consistency. We further show how the method can be modified to handle cost projections and extensions to maintain generalized versions of AC* and FDAC* for cost functions with more than two variables. Similar generalization for the stronger EDAC* is less straightforward. We reveal the oscillation problem when enforcing EDAC* on cost functions sharing more than one variable. To avoid oscillation, we propose a weak version of EDAC* and generalize it to weak EDGAC* for non-binary cost functions. Using various benchmarks involving the soft variants of hard global constraints ALLDIFFERENT, GCC, SAME, and REGULAR, empirical results demonstrate that our proposal gives improvements of up to an order of magnitude when compared with the traditional constraint optimization approach, both in terms of time and pruning.
R. Huang, Y. Chen and W. Zhang (2012)
"SAS+ Planning as Satisfiability",
Volume 43, pages 293-328
For quick access go to
Abstract:
Planning as satisfiability is a principal approach to planning with many eminent advantages. The existing planning as satisfiability techniques usually use encodings compiled from STRIPS. We introduce a novel SAT encoding scheme (SASE) based on the SAS+ formalism. The new scheme exploits the structural information in SAS+, resulting in an encoding that is both more compact and efficient for planning. We prove the correctness of the new encoding by establishing an isomorphism between the solution plans of SASE and that of STRIPS based encodings. We further analyze the transition variables newly introduced in SASE to explain why it accommodates modern SAT solving algorithms and improves performance. We give empirical statistical results to support our analysis. We also develop a number of techniques to further reduce the encoding size of SASE, and conduct experimental studies to show the strength of each individual technique. Finally, we report extensive experimental results to demonstrate significant improvements of SASE over the state-of-the-art STRIPS based encoding schemes in terms of both time and memory efficiency.
P. Jeavons and J. Petke (2012)
"Local Consistency and SAT-Solvers",
Volume 43, pages 329-351
For quick access go to
Abstract:
Local consistency techniques such as k-consistency are a key component of specialised solvers for constraint satisfaction problems. In this paper we show that the power of using k-consistency techniques on a constraint satisfaction problem is precisely captured by using a particular inference rule, which we call negative-hyper-resolution, on the standard direct encoding of the problem into Boolean clauses. We also show that current clause-learning SAT-solvers will discover in expected polynomial time any inconsistency that can be deduced from a given set of clauses using negative-hyper-resolvents of a fixed size. We combine these two results to show that, without being explicitly designed to do so, current clause-learning SAT-solvers efficiently simulate k-consistency techniques, for all fixed values of k. We then give some experimental results to show that this feature allows clause-learning SAT-solvers to efficiently solve certain families of constraint problems which are challenging for conventional constraint-programming solvers.
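The "direct encoding" the result refers to is the standard translation of a CSP into Boolean clauses: one Boolean variable per (CSP variable, value) pair, an at-least-one clause per CSP variable, and a clause forbidding each conflicting partial assignment. The sketch below uses tagged tuples as literals and a toy not-equal constraint; the representation choices are ours.

```python
# Direct encoding of a CSP into clauses. A positive literal is
# ('+', var, val) meaning "var takes val"; ('-', var, val) negates it.

def direct_encoding(domains, conflicts):
    """domains: {var: [values]}; conflicts: iterable of {var: val}
    dicts that are forbidden together. Returns a list of clauses."""
    clauses = []
    for v, dom in domains.items():
        clauses.append([('+', v, d) for d in dom])            # at-least-one
    for nogood in conflicts:
        clauses.append([('-', v, d) for v, d in nogood.items()])  # conflict
    return clauses

domains = {'a': [0, 1], 'b': [0, 1]}
# Encode a != b by forbidding the two equal assignments.
conflicts = [{'a': 0, 'b': 0}, {'a': 1, 'b': 1}]
clauses = direct_encoding(domains, conflicts)
assert len(clauses) == 4  # two at-least-one clauses, two conflict clauses
```

On this encoding, the paper shows that clause-learning SAT-solvers implicitly perform the work of k-consistency via negative-hyper-resolution.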
L. R. Planken, M. M. de Weerdt and R. P.J. van der Krogt (2012)
"Computing All-Pairs Shortest Paths by Leveraging Low Treewidth",
Volume 43, pages 353-388
For quick access go to
Abstract:
We present two new and efficient algorithms for computing all-pairs shortest paths. The algorithms operate on directed graphs with real (possibly negative) weights. They make use of directed path consistency along a vertex ordering d. Both algorithms run in O(n^2 w_d) time, where w_d is the graph width induced by this vertex ordering. For graphs of constant treewidth, this yields O(n^2) time, which is optimal. On chordal graphs, the algorithms run in O(nm) time. In addition, we present a variant that exploits graph separators to arrive at a run time of O(n w_d^2 + n^2 s_d) on general graphs, where s_d <= w_d is the size of the largest minimal separator induced by the vertex ordering d. We show empirically that on both constructed and realistic benchmarks, in many cases the algorithms outperform Floyd-Warshall's as well as Johnson's algorithm, which represent the current state of the art with a run time of O(n^3) and O(nm + n^2 log n), respectively. Our algorithms can be used for spatial and temporal reasoning, such as for the Simple Temporal Problem, which underlines their relevance to the planning and scheduling community.
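For reference, the O(n^3) Floyd-Warshall baseline that the abstract's O(n^2 w_d) algorithms are compared against is easily stated; the graph below is an illustrative three-vertex example with a negative edge, which Floyd-Warshall handles as long as there are no negative cycles.

```python
# Floyd-Warshall all-pairs shortest paths: the O(n^3) baseline.

def floyd_warshall(n, edges):
    """edges: {(u, v): weight} on vertices 0..n-1. Returns dist matrix."""
    INF = float('inf')
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):                # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = floyd_warshall(3, {(0, 1): 2, (1, 2): -1, (0, 2): 5})
assert d[0][2] == 1   # the path 0 -> 1 -> 2 beats the direct edge
```

The paper's algorithms instead exploit a vertex ordering of low induced width, so on constant-treewidth graphs the cubic loop nest above is replaced by O(n^2) work.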
F. Sánchez-Martínez, R. C. Carrasco, M. A. Martínez-Prieto and J. Adiego (2012)
"Generalized Biwords for Bitext Compression and Translation Spotting",
Volume 43, pages 389-418
For quick access go to
Abstract:
Large bilingual parallel texts (also known as bitexts) are usually stored in a compressed form, and previous work has shown that they can be more efficiently compressed if the fact that the two texts are mutual translations is exploited. For example, a bitext can be seen as a sequence of biwords ---pairs of parallel words with a high probability of co-occurrence--- that can be used as an intermediate representation in the compression process. However, the simple biword approach described in the literature can only exploit one-to-one word alignments and cannot tackle the reordering of words. We therefore introduce a generalization of biwords which can describe multi-word expressions and reorderings. We also describe some methods for the binary compression of generalized biword sequences, and compare their performance when different schemes are applied to the extraction of the biword sequence. In addition, we show that this generalization of biwords allows for the implementation of an efficient algorithm to search the compressed bitext for words or text segments in one of the texts and retrieve their counterpart translations in the other text ---an application usually referred to as translation spotting--- with only some minor modifications in the compression algorithm.
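The simple (one-to-one) biword representation that the paper generalises can be sketched as pairing each source word with its aligned target word, so the bitext becomes a single sequence of pairs. The example sentence and alignment below are ours, for illustration only.

```python
# Toy sketch of the simple biword idea: one-to-one word alignments
# turn a sentence pair into a single sequence of (source, target) pairs,
# which a compressor can then treat as symbols from one alphabet.

def biword_sequence(src, tgt, alignment):
    """alignment: list of (i, j) pairs with src[i] aligned to tgt[j]."""
    return [(src[i], tgt[j]) for i, j in alignment]

src = ['la', 'casa', 'blanca']
tgt = ['the', 'white', 'house']
seq = biword_sequence(src, tgt, [(0, 0), (1, 2), (2, 1)])
assert seq == [('la', 'the'), ('casa', 'house'), ('blanca', 'white')]
```

The generalized biwords of the paper extend this to multi-word expressions and explicit reorderings, which the plain pairing above cannot represent.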
B. Cuenca Grau, B. Motik, G. Stoilos and I. Horrocks (2012)
"Completeness Guarantees for Incomplete Ontology Reasoners: Theory and Practice",
Volume 43, pages 419-476
For quick access go to
Abstract:
To achieve scalability of query answering, the developers of Semantic Web applications are often forced to use incomplete OWL 2 reasoners, which fail to derive all answers for at least one query, ontology, and data set. The lack of completeness guarantees, however, may be unacceptable for applications in areas such as health care and defence, where missing answers can adversely affect the application's functionality. Furthermore, even if an application can tolerate some level of incompleteness, it is often advantageous to estimate how many and what kind of answers are being lost.
In this paper, we present a novel logic-based framework that allows one to check whether a reasoner is complete for a given query Q and ontology T---that is, whether the reasoner is guaranteed to compute all answers to Q w.r.t. T and an arbitrary data set A. Since ontologies and typical queries are often fixed at application design time, our approach allows application developers to check whether a reasoner known to be incomplete in general is actually complete for the kinds of input relevant for the application.
We also present a technique that, given a query Q, an ontology T, and reasoners R_1 and R_2 that satisfy certain assumptions, can be used to determine whether, for each data set A, reasoner R_1 computes more answers to Q w.r.t. T and A than reasoner R_2. This allows application developers to select the reasoner that provides the highest degree of completeness for Q and T that is compatible with the application's scalability requirements.
Our results thus provide a theoretical and practical foundation for the design of future ontology-based information systems that maximise scalability while minimising or even eliminating incompleteness of query answers.
J. Baum, A. E. Nicholson and T. I. Dix (2012)
"Proximity-Based Non-uniform Abstractions for Approximate Planning",
Volume 43, pages 477-522
For quick access go to
Abstract:
In a deterministic world, a planning agent can be certain of the consequences of its planned sequence of actions. Not so, however, in dynamic, stochastic domains where Markov decision processes are commonly used. Unfortunately these suffer from the `curse of dimensionality': if the state space is a Cartesian product of many small sets (`dimensions'), planning is exponential in the number of those dimensions.
Our new technique exploits the intuitive strategy of selectively ignoring various dimensions in different parts of the state space. The resulting non-uniformity has strong implications, since the approximation is no longer Markovian, requiring the use of a modified planner. We also use a spatial and temporal proximity measure, which responds to continued planning as well as movement of the agent through the state space, to dynamically adapt the abstraction as planning progresses.
We present qualitative and quantitative results across a range of experimental domains showing that an agent exploiting this novel approximation method successfully finds solutions to the planning problem using much less than the full state space. We assess and analyse the features of domains which our method can exploit.
C. Hernandez and J. A. Baier (2012)
"Avoiding and Escaping Depressions in Real-Time Heuristic Search",
Volume 43, pages 523-570
For quick access go to
Abstract:
Heuristics used for solving hard real-time search problems have regions with depressions. Such regions are bounded areas of the search space in which the heuristic function is inaccurate compared to the actual cost to reach a solution. Early real-time search algorithms, like LRTA*, easily become trapped in those regions since the heuristic values of their states may need to be updated multiple times, which results in costly solutions. State-of-the-art real-time search algorithms, like LSS-LRTA* or LRTA*(k), improve LRTA*'s mechanism to update the heuristic, resulting in improved performance. Those algorithms, however, do not guide search towards avoiding depressed regions. This paper presents depression avoidance, a simple real-time search principle to guide search towards avoiding states that have been marked as part of a heuristic depression. We propose two ways in which depression avoidance can be implemented: mark-and-avoid and move-to-border. We implement these strategies on top of LSS-LRTA* and RTAA*, producing 4 new real-time heuristic search algorithms: aLSS-LRTA*, daLSS-LRTA*, aRTAA*, and daRTAA*. When the objective is to find a single solution by running the real-time search algorithm once, we show that daLSS-LRTA* and daRTAA* outperform their predecessors sometimes by one order of magnitude. Of the four new algorithms, daRTAA* produces the best solutions given a fixed deadline on the average time allowed per planning episode. We prove all our algorithms have good theoretical properties: in finite search spaces, they find a solution if one exists, and converge to an optimal solution after a number of trials.
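To make concrete the LRTA* heuristic update that the depression-avoidance strategies build on ("raise h(s) to the best successor's cost-plus-h"), here is a minimal agent on a one-dimensional corridor with unit action costs. The corridor, heuristic, and step bound are illustrative choices of ours.

```python
# Minimal LRTA*-style agent: move to the successor minimising
# cost + h, raising h(s) along the way. Unit action costs.

def lrta_star(start, goal, neighbors, h, max_steps=1000):
    """Returns the path taken; h is updated in place."""
    s, path = start, [start]
    for _ in range(max_steps):
        if s == goal:
            return path
        best = min(neighbors(s), key=lambda t: 1 + h.get(t, 0))
        h[s] = max(h.get(s, 0), 1 + h.get(best, 0))   # heuristic update
        s = best
        path.append(s)
    return path

# 1-D corridor 0..5 with goal 5 and an admissible initial heuristic.
h = {i: 5 - i for i in range(6)}
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 5]
assert lrta_star(0, 5, nbrs, h)[-1] == 5
```

A heuristic depression in this setting is a region where h is too low everywhere, forcing the update line to fire many times before the agent escapes; the paper's contribution is steering the agent around such regions instead.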
J. Lee and R. Palla (2012)
"Reformulating the Situation Calculus and the Event Calculus in the General Theory of Stable Models and in Answer Set Programming",
Volume 43, pages 571-620
For quick access go to
Abstract:
Circumscription and logic programs under the stable model semantics are two well-known nonmonotonic formalisms. The former has served as a basis of classical logic based action formalisms, such as the situation calculus, the event calculus and temporal action logics; the latter has served as a basis of a family of action languages, such as language A and several of its descendants. Based on the discovery that circumscription and the stable model semantics coincide on a class of canonical formulas, we reformulate the situation calculus and the event calculus in the general theory of stable models. We also present a translation that turns the reformulations further into answer set programs, so that efficient answer set solvers can be applied to compute the situation calculus and the event calculus.
M. Vasirani and S. Ossowski (2012)
"A Market-Inspired Approach for Intersection Management in Urban Road Traffic Networks",
Volume 43, pages 621-659
For quick access go to
Abstract:
Traffic congestion in urban road networks is a costly problem that affects all major cities in developed countries. To tackle this problem, it is possible (i) to act on the supply side, increasing the number of roads or lanes in a network, (ii) to reduce the demand, restricting the access to urban areas at specific hours or to specific vehicles, or (iii) to improve the efficiency of the existing network, by means of a widespread use of so-called Intelligent Transportation Systems (ITS). In line with the recent advances in smart transportation management infrastructures, ITS has turned out to be a promising field of application for artificial intelligence techniques. In particular, multiagent systems seem to be the ideal candidates for the design and implementation of ITS. In fact, drivers can be naturally modelled as autonomous agents that interact with the transportation management infrastructure, thereby generating a large-scale, open, agent-based system. To regulate such a system and maintain a smooth and efficient flow of traffic, decentralised mechanisms for the management of the transportation infrastructure are needed.
In this article we propose a distributed, market-inspired, mechanism for the management of a future urban road network, where intelligent autonomous vehicles, operated by software agents on behalf of their human owners, interact with the infrastructure in order to travel safely and efficiently through the road network. Building on the reservation-based intersection control model proposed by Dresner and Stone, we consider two different scenarios: one with a single intersection and one with a network of intersections. In the former, we analyse the performance of a novel policy based on combinatorial auctions for the allocation of reservations. In the latter, we analyse the impact that a traffic assignment strategy inspired by competitive markets has on the drivers' route choices. Finally we propose an adaptive management mechanism that integrates the auction-based traffic control policy with the competitive traffic assignment strategy.
S.R.K. Branavan, D. Silver and R. Barzilay (2012)
"Learning to Win by Reading Manuals in a Monte-Carlo Framework",
Volume 43, pages 661-704
For quick access go to
Abstract:
Domain knowledge is crucial for effective performance in autonomous control systems. Typically, human effort is required to encode this knowledge into a control algorithm. In this paper, we present an approach to language grounding which automatically interprets text in the context of a complex control application, such as a game, and uses domain knowledge extracted from the text to improve control performance. Both text analysis and control strategies are learned jointly using only a feedback signal inherent to the application. To effectively leverage textual information, our method automatically extracts the text segment most relevant to the current game state, and labels it with a task-centric predicate structure. This labeled text is then used to bias an action selection policy for the game, guiding it towards promising regions of the action space. We encode our model for text analysis and game playing in a multi-layer neural network, representing linguistic decisions via latent variables in the hidden layers, and game action quality via the output layer. Operating within the Monte-Carlo Search framework, we estimate model parameters using feedback from simulated games. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 34% absolute improvement and winning over 65% of games when playing against the built-in AI of Civilization.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Thu Sep 6 23:21:35 2012
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Fri, 07 Sep 2012 01:21:35 -0500
Subject: [Jairsubscribers] 16 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been published recently in JAIR Volume 44 and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
A. J. Coles, A. I. Coles, M. Fox and D. Long (2012)
"COLIN: Planning with Continuous Linear Numeric Change",
Volume 44, pages 1-96
For quick access go to
Abstract:
In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.
D. D. Maua, C. P. de Campos and M. Zaffalon (2012)
"Solving Limited Memory Influence Diagrams",
Volume 44, pages 97-140
For quick access go to
Abstract:
We present a new algorithm for exactly solving decision making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that these problems are NP-hard even if the underlying graph structure of the problem has low treewidth and the variables take on a bounded number of states, and that they admit no provably good approximation if variables can take on an arbitrary number of states.
C. Bäckström and P. Jonsson (2012)
"Algorithms and Limits for Compact Plan Representations",
Volume 44, pages 141-177
For quick access go to
Abstract:
Compact representation of objects is a common concept in computer science. Automated planning can be viewed as a case of this concept: a planning instance is a compact implicit representation of a graph, and the problem is to find a path (a plan) in this graph. While the graphs themselves are represented compactly as planning instances, the paths are usually represented explicitly as sequences of actions. Some cases are known where the plans always have compact representations, for example, using macros. We show that these results do not extend to the general case, by proving a number of bounds for compact representations of plans under various criteria, such as efficient sequential or random access of actions. In addition to this, we show that our results have consequences for what can be gained from reformulating planning into some other problem. As a contrast to this, we also prove a number of positive results, demonstrating restricted cases where plans do have useful compact representations, as well as proving that macro plans have favourable access properties. Our results are finally discussed in relation to other relevant contexts.
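The macro idea mentioned in the abstract can be illustrated with a minimal sketch (not the authors' construction; the macro names and actions are hypothetical): a plan is stored as references to macros, each expanding to a fixed action sequence, so a short compact plan can denote a much longer explicit one.

```python
# Illustrative sketch of macro-compressed plans (hypothetical macros).
MACROS = {
    "M1": ["pick", "move", "place"],
    "M2": ["move", "move"],
}

def expand(compact_plan):
    """Expand a macro-compressed plan into the explicit action sequence.
    Steps that are not macro names pass through unchanged."""
    plan = []
    for step in compact_plan:
        plan.extend(MACROS.get(step, [step]))
    return plan

# A 3-symbol compact plan expands to 8 explicit actions.
explicit = expand(["M1", "M2", "M1"])
```

Sequential access to the expanded plan is cheap here, but note that random access (the i-th action without expanding a prefix) already needs extra bookkeeping, which is the kind of access criterion the paper's bounds distinguish.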
P. Nakov and H. T. Ng (2012)
"Improving Statistical Machine Translation for a Resource-Poor Language Using Related Resource-Rich Languages",
Volume 44, pages 179-222
For quick access go to
Abstract:
We propose a novel language-independent approach for improving machine translation for resource-poor languages by exploiting their similarity to resource-rich ones. More precisely, we improve the translation from a resource-poor source language X_1 into a resource-rich language Y given a bi-text containing a limited number of parallel sentences for X_1-Y and a larger bi-text for X_2-Y for some resource-rich language X_2 that is closely related to X_1. This is achieved by taking advantage of the opportunities that vocabulary overlap and similarities between the languages X_1 and X_2 in spelling, word order, and syntax offer: (1) we improve the word alignments for the resource-poor language, (2) we further augment it with additional translation options, and (3) we take care of potential spelling differences through appropriate transliteration. The evaluation for Indonesian->English using Malay, and for Spanish->English using Portuguese (pretending that Spanish is resource-poor), shows an absolute gain of up to 1.35 and 3.37 BLEU points, respectively, which is an improvement over the best rivaling approaches, while using much less additional data. Overall, our method cuts the amount of necessary "real" training data by a factor of 2--5.
W. Mao and J. Gratch (2012)
"Modeling Social Causality and Responsibility Judgment in Multi-Agent Interactions",
Volume 44, pages 223-273
For quick access go to
Abstract:
Social causality is the inference an entity makes about the social behavior of other entities and self. Besides physical cause and effect, social causality involves reasoning about epistemic states of agents and coercive circumstances. Based on such inference, responsibility judgment is the process whereby one singles out individuals to assign responsibility, credit or blame for multi-agent activities. Social causality and responsibility judgment are a key aspect of social intelligence, and a model for them facilitates the design and development of a variety of multi-agent interactive systems. Based on psychological attribution theory, this paper presents a domain-independent computational model to automate the social inference and judgment process according to an agent's causal knowledge and observations of interaction. We conduct experimental studies to empirically validate the computational model. The experimental results show that our model predicts human judgments of social attributions and makes inferences consistent with what most people do in their judgments. Therefore, the proposed model can be generically incorporated into an intelligent system to augment its social and cognitive functionality.
P. Ghosh, A. Sharma, P.P. Chakrabarti and P. Dasgupta (2012)
"Algorithms for Generating Ordered Solutions for Explicit AND/OR Structures",
Volume 44, pages 275-333
For quick access go to
Abstract:
We present algorithms for generating alternative solutions for explicit acyclic AND/OR structures in non-decreasing order of cost. The proposed algorithms use a best-first search technique and report the solutions using an implicit representation ordered by cost. In this paper, we present two versions of the search algorithm -- (a) an initial version of the best-first search algorithm, ASG, which may present one solution more than once while generating the ordered solutions, and (b) another version, LASG, which avoids the construction of duplicate solutions. The actual solutions can be reconstructed quickly from the implicit compact representation used. We have applied the methods to a few test domains, some of which are synthetic, while others are based on well-known problems including the search space of the 5-peg Tower of Hanoi problem, the matrix-chain multiplication problem and the problem of finding the secondary structure of RNA. Experimental results show the efficacy of the proposed algorithms over the existing approach. Our proposed algorithms have potential use in various domains ranging from knowledge-based frameworks to service composition, where the AND/OR structure is widely used for representing problems.
M. Fox, D. Long and D. Magazzeni (2012)
"Plan-based Policies for Efficient Multiple Battery Load Management",
Volume 44, pages 335-382
For quick access go to
Abstract:
Efficient use of multiple batteries is a practical problem with wide and growing application. The problem can be cast as a planning problem under uncertainty. We describe the approach we have adopted to modelling and solving this problem, seen as a Markov Decision Problem, building effective policies for battery switching in the face of stochastic load profiles.
Our solution exploits and adapts several existing techniques: planning for deterministic mixed discrete-continuous problems and Monte Carlo sampling for policy learning. The paper describes the development of planning techniques to allow solution of the non-linear continuous dynamic models capturing the battery behaviours. This approach depends on carefully handled discretisation of the temporal dimension. The construction of policies is performed using a classification approach and this idea offers opportunities for wider exploitation in other problems. The approach and its generality are described in the paper.
Application of the approach leads to construction of policies that, in simulation, significantly outperform those that are currently in use and the best published solutions to the battery management problem. We obtain solutions that achieve more than 99% efficiency in simulation compared with the theoretical limit, and do so with far fewer battery switches than existing policies. The behaviour of physical batteries does not exactly match the simulated models for many reasons, so to confirm that our theoretical results can lead to real measured improvements in performance we also conduct and report experiments using a physical test system. These results demonstrate that we can obtain a 5%-15% improvement in lifetimes in the case of a two-battery system.
P. Haslum (2012)
"Narrative Planning: Compilations to Classical Planning",
Volume 44, pages 383-395
For quick access go to
Abstract:
A model of story generation recently proposed by Riedl and Young
casts it as planning, with the additional condition that story
characters behave intentionally. This means that characters have
perceivable motivation for the actions they take. I show that this
condition can be compiled away (in more ways than one) to produce a
classical planning problem that can be solved by an off-the-shelf
classical planner, more efficiently than by Riedl and Young's
specialised planner.
E. Albacete, J. Calle, E. Castro and D. Cuadra (2012)
"Semantic Similarity Measures Applied to an Ontology for Human-Like Interaction",
Volume 44, pages 397-421
For quick access go to
Abstract:
The focus of this paper is the calculation of similarity between two concepts from an ontology for a Human-Like Interaction system. In order to facilitate this calculation, a similarity function is proposed based on five dimensions (sort, compositional, essential, restrictive and descriptive) constituting the structure of ontological knowledge. The paper includes a proposal for computing a similarity function for each dimension of knowledge. Later on, the similarity values obtained are weighted and aggregated to obtain a global similarity measure. In order to calculate the weights associated with each dimension, four training methods have been proposed. The training methods differ in the element to fit: the user, concepts or pairs of concepts, and a hybrid approach. For evaluating the proposal, the knowledge base was fed from WordNet and extended by using a knowledge editing toolkit (Cognos). The evaluation of the proposal is carried out through the comparison of system responses with those given by human test subjects, both providing a measure of the soundness of the procedure and revealing ways in which the proposal may be improved.
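The weighted aggregation step described above can be sketched as follows (an illustrative toy, not the authors' trained weighting scheme): the five per-dimension similarity scores are combined as a normalised weighted sum.

```python
def global_similarity(dim_sims, weights):
    """Aggregate per-dimension similarity scores (e.g. for the sort,
    compositional, essential, restrictive and descriptive dimensions)
    into one global score via a normalised weighted sum."""
    total_w = sum(weights)
    return sum(w * s for w, s in zip(weights, dim_sims)) / total_w

# Hypothetical scores and weights for a pair of concepts.
score = global_similarity([1.0, 0.5, 0.0, 1.0, 0.5], [2, 1, 1, 1, 1])
```

The training methods in the paper would be responsible for fitting the weight vector; here it is simply given.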
J. Velez, G. Hemann, A. S. Huang, I. Posner and N. Roy (2012)
"Modelling Observation Correlations for Active Exploration and Robust Object Detection ",
Volume 44, pages 423-453
For quick access go to
Abstract:
Today, mobile robots are expected to carry out increasingly complex tasks in multifarious, real-world environments. Often, the tasks require a certain semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest; the robot must be able to accurately detect these objects to correctly understand the instructions. However, existing object detection, while competent, is not perfect. In particular, the performance of detection algorithms is commonly sensitive to the position of the sensor relative to the objects in the scene.
This paper presents an online planning algorithm which learns an explicit model of the spatial dependence of object detection and generates plans which maximize the expected performance of the detection, and by extension the overall plan performance. Crucially, the learned sensor model incorporates spatial correlations between measurements, capturing the fact that successive measurements taken at the same or nearby locations are not independent. We show how this sensor model can be incorporated into an efficient forward search algorithm in the information space of detected objects, allowing the robot to generate motion plans efficiently. We investigate the performance of our approach by addressing the tasks of door and text detection in indoor environments and demonstrate significant improvement in detection performance during task execution over alternative methods in simulated and real robot experiments.
M. C. Cooper and S. Zivny (2012)
"Tractable Triangles and Cross-Free Convexity in Discrete Optimisation",
Volume 44, pages 455-490
For quick access go to
Abstract:
Minimising a sum of unary and pairwise functions of discrete variables is a general NP-hard problem with wide applications, such as computing MAP configurations in Markov Random Fields (MRFs), minimising Gibbs energy, or solving binary Valued Constraint Satisfaction Problems (VCSPs).
We study the computational complexity of classes of discrete optimisation problems given by allowing only certain types of costs in every triangle of variable-value assignments to three distinct variables. We show that for several computational problems, the only non-trivial tractable classes are the well-known maximum matching problem and the recently discovered joint-winner property. Our results, apart from giving complete classifications in the studied cases, provide guidance in the search for hybrid tractable classes; that is, classes of problems that are not captured by restrictions on the functions (such as submodularity) or the structure of the problem graph (such as bounded treewidth).
Furthermore, we introduce a class of problems with convex cardinality functions on cross-free sets of assignments. We prove that while imposing only one of the two conditions renders the problem NP-hard, the conjunction of the two gives rise to a novel tractable class satisfying the cross-free convexity property, which generalises the joint-winner property to problems of unbounded arity.
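The problem being classified can be made concrete with a brute-force sketch (for intuition only; the paper is about identifying tractable subclasses, not exhaustive search, and the instance below is hypothetical):

```python
import itertools

def minimise(domains, unary, pairwise):
    """Exhaustive minimisation of a sum of unary and pairwise costs over
    assignments of discrete variables (exponential time; tiny instances
    only). `pairwise` maps a variable pair (i, j) to a cost table."""
    best_cost, best_assignment = None, None
    for assignment in itertools.product(*domains):
        cost = sum(u[v] for u, v in zip(unary, assignment))
        cost += sum(f[assignment[i]][assignment[j]]
                    for (i, j), f in pairwise.items())
        if best_cost is None or cost < best_cost:
            best_cost, best_assignment = cost, assignment
    return best_cost, best_assignment

# Two Boolean variables with unary costs and one pairwise table that
# penalises disagreement -- a tiny binary VCSP.
domains = [(0, 1), (0, 1)]
unary = [[0, 1], [1, 0]]
pairwise = {(0, 1): [[0, 5], [5, 0]]}
```

The tractable classes studied in the paper are exactly those where restrictions on such cost tables (here, on triangles of assignments) let this minimum be found in polynomial time instead.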
J. Huang, A. Kapoor and C. Guestrin (2012)
"Riffled Independence for Efficient Inference with Partial Rankings",
Volume 44, pages 491-532
For quick access go to
Abstract:
Distributions over rankings are used to model data in a multitude of real world settings such as preference analysis and political elections. Modeling such distributions presents several computational challenges, however, due to the factorial size of the set of rankings over an item set. Some of these challenges are quite familiar to the artificial intelligence community, such as how to compactly represent a distribution over a combinatorially large space, and how to efficiently perform probabilistic inference with these representations. With respect to ranking, however, there is the additional challenge of what we refer to as human task complexity -- users are rarely willing to provide a full ranking over a long list of candidates, instead often preferring to provide partial ranking information.
Simultaneously addressing all of these challenges -- i.e., designing a compactly representable model which is amenable to efficient inference and can be learned using partial ranking data -- is a difficult task, but is necessary if we would like to scale to problems with nontrivial size. In this paper, we show that the recently proposed riffled independence assumptions cleanly and efficiently address each of the above challenges. In particular, we establish a tight mathematical connection between the concepts of riffled independence and of partial rankings. This correspondence not only allows us to then develop efficient and exact algorithms for performing inference tasks using riffled independence based representations with partial rankings, but somewhat surprisingly, also shows that efficient inference is not possible for riffle independent models (in a certain sense) with observations which do not take the form of partial rankings. Finally, using our inference algorithm, we introduce the first method for learning riffled independence based models from partially ranked data.
P. D. Turney (2012)
"Domain and Function: A Dual-Space Model of Semantic Relations and Compositions",
Volume 44, pages 533-585
For quick access go to
Abstract:
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
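A rough illustration of the dual-space idea, with hypothetical toy vectors (not the paper's corpus-derived spaces): each word gets a domain-space and a function-space vector, and an analogy a:b :: c:d is scored by combining cosine similarities from both spaces.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical 2-D toy vectors: dimension meanings are invented here
# purely to make the similarities come out as the abstract describes.
domain = {"carpenter": [1, 0], "wood": [0.9, 0.1],
          "mason": [0, 1], "stone": [0.1, 0.9]}
function = {"carpenter": [1, 0], "mason": [0.95, 0.05],
            "wood": [0, 1], "stone": [0.05, 0.95]}

def relational_sim(a, b, c, d):
    """Score the analogy a:b :: c:d: a,c and b,d should share functions
    (artisans, materials); a,b and c,d should share domains."""
    return (cosine(function[a], function[c]) + cosine(function[b], function[d])
            + cosine(domain[a], domain[b]) + cosine(domain[c], domain[d])) / 4
```

With these toy vectors, "carpenter is to wood as mason is to stone" scores near 1, while the crossed analogy "carpenter is to stone as mason is to wood" scores much lower, because the domain similarities fail.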
J. Hoffmann, I. Weber and F. M. Kraft (2012)
"SAP Speaks PDDL: Exploiting a Software-Engineering Model for Planning in Business Process Management",
Volume 44, pages 587-632
For quick access go to
Abstract:
Planning is concerned with the automated solution of action sequencing problems described in declarative languages giving the action preconditions and effects. One important application area for such technology is the creation of new processes in Business Process Management (BPM), which is essential in an ever more dynamic business environment. A major obstacle for the application of Planning in this area lies in the modeling. Obtaining a suitable model to plan with -- ideally a description in PDDL, the most commonly used planning language -- is often prohibitively complicated and/or costly. Our core observation in this work is that this problem can be ameliorated by leveraging synergies with model-based software development. Our application at SAP, one of the leading vendors of enterprise software, demonstrates that even one-to-one model re-use is possible.
The model in question is called Status and Action Management (SAM). It describes the behavior of Business Objects (BO), i.e., large-scale data structures, at a level of abstraction corresponding to the language of business experts. SAM covers more than 400 kinds of BOs, each of which is described in terms of a set of status variables and how their values are required for, and affected by, processing steps (actions) that are atomic from a business perspective. SAM was developed by SAP as part of a major model-based software engineering effort. We show herein that one can use this same model for planning, thus obtaining a BPM planning application that incurs no modeling overhead at all.
We compile SAM into a variant of PDDL, and adapt an off-the-shelf planner to solve this kind of problem. Thanks to the resulting technology, business experts may create new processes simply by specifying the desired behavior in terms of status variable value changes: effectively, by describing the process in their own language.
B. Konev, M. Ludwig, D. Walther and F. Wolter (2012)
"The Logical Difference for the Lightweight Description Logic EL",
Volume 44, pages 633-708
For quick access go to
Abstract:
We study a logic-based approach to versioning of ontologies. Under this view, ontologies provide answers to queries about some vocabulary of interest. The difference between two versions of an ontology is given by the set of queries that receive different answers.
We investigate this approach for terminologies given in the description logic EL extended with role inclusions and domain and range restrictions for three distinct types of queries: subsumption, instance, and conjunctive queries. In all three cases, we present polynomial-time algorithms that decide whether two terminologies give the same answers to queries over a given vocabulary and compute a succinct representation of the difference if it is non-empty. We present an implementation, CEX2, of the developed algorithms for subsumption and instance queries and apply it to distinct versions of Snomed CT and the NCI ontology.
C. Domshlak, E. Karpas and S. Markovitch (2012)
"Online Speedup Learning for Optimal Planning",
Volume 44, pages 709-755
For quick access go to
Abstract:
Domain-independent planning is one of the foundational areas in the field of Artificial Intelligence. A description of a planning task consists of an initial world state, a goal, and a set of actions for modifying the world state. The objective is to find a sequence of actions, that is, a plan, that transforms the initial world state into a goal state. In optimal planning, we are interested in finding not just a plan, but one of the cheapest plans. A prominent approach to optimal planning these days is heuristic
state-space search, guided by admissible heuristic functions. Numerous admissible heuristics have been developed, each with its own strengths and weaknesses, and it is well known that there is no single "best" heuristic for optimal planning in general. Thus, which heuristic to choose for a given planning task is a difficult question. This difficulty can be avoided by combining several heuristics, but that requires computing numerous heuristic estimates at each state, and the tradeoff between the time spent doing so and the time saved by the combined advantages of the different heuristics might be high. We present a novel method that reduces the cost of combining admissible heuristics for optimal planning, while maintaining its benefits. Using an idealized search space model, we formulate a decision rule for choosing the best heuristic to compute at each state. We then present an active online learning approach for learning a classifier with that decision rule as the target concept, and employ the learned classifier to decide which heuristic to compute at each state. We evaluate this technique empirically, and show that it substantially outperforms the standard method for combining several heuristics via their pointwise maximum.
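The baseline the authors improve on -- the pointwise maximum of several admissible heuristics -- can be sketched in a few lines (the toy heuristics below are hypothetical, not from the paper):

```python
def max_heuristic(heuristics):
    """Pointwise maximum of admissible heuristics: still admissible and
    at least as informative as each component, but every heuristic must
    be evaluated at every state -- the cost a selective approach tries
    to avoid."""
    return lambda state: max(h(state) for h in heuristics)

# Toy 1-D search space with goal state 10; both heuristics lower-bound
# the true distance |10 - s|, so both are admissible.
h_exact = lambda s: abs(10 - s)
h_weak = lambda s: abs(10 - s) // 2
h_max = max_heuristic([h_weak, h_exact])
```

The paper's contribution is, roughly, to learn which single component heuristic to evaluate at each state rather than paying for all of them.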
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Sun Oct 7 23:39:07 2012
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Mon, 08 Oct 2012 01:39:07 -0500
Subject: [Jairsubscribers] 4 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (The editorial staff recently decided to
send out these messages more frequently, on a monthly basis. If you
wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
F. Belardinelli and A. Lomuscio (2012)
"Interactions between Knowledge and Time in a First-Order Logic for Multi-Agent Systems: Completeness Results",
Volume 45, pages 1-45
For quick access go to
Abstract:
We investigate a class of first-order temporal-epistemic logics for reasoning about multi-agent systems. We encode typical properties of systems including perfect recall, synchronicity, no learning, and having a unique initial state in terms of variants of quantified interpreted systems, a first-order extension of interpreted systems. We identify several monodic fragments of first-order temporal-epistemic logic and show their completeness with respect to their corresponding classes of quantified interpreted systems.
D. A. Cohen, M. C. Cooper, P. Creed, D. Marx and A. Z. Salamon (2012)
"The Tractability of CSP Classes Defined by Forbidden Patterns",
Volume 45, pages 47-78
For quick access go to
Abstract:
The constraint satisfaction problem (CSP) is a general problem central to computer science and artificial intelligence. Although the CSP is NP-hard in general, considerable effort has been spent on identifying tractable subclasses. The main two approaches consider structural properties (restrictions on the hypergraph of constraint scopes) and relational properties (restrictions on the language of constraint relations). Recently, some authors have considered hybrid properties that restrict the constraint hypergraph and the relations simultaneously.
Our key contribution is the novel concept of a CSP pattern and classes of problems defined by forbidden patterns (which can be viewed as forbidding generic sub-problems). We describe the theoretical framework which can be used to reason about classes of problems defined by forbidden patterns. We show that this framework generalises certain known hybrid tractable classes.
Although we are not close to obtaining a complete characterisation concerning the tractability of general forbidden patterns, we prove a dichotomy in a special case: classes of problems that arise when we can only forbid binary negative patterns (generic sub-problems in which only disallowed tuples are specified). In this case we show that all (finite sets of) forbidden patterns define either polynomial-time solvable or NP-complete classes of instances.
H. Vlaeminck, J. Vennekens, M. Denecker and M. Bruynooghe (2012)
"An approximative inference method for solving ∃∀SO satisfiability problems",
Volume 45, pages 79-124
For quick access go to
Abstract:
This paper considers the fragment ∃∀SO of second-order logic. Many interesting problems, such as conformant planning, can be naturally expressed as finite domain satisfiability problems of this logic. Such satisfiability problems are computationally hard (Σ^p_2) and many of these problems are often solved approximately. In this paper, we develop a general approximative method, i.e., a sound but incomplete method, for solving ∃∀SO satisfiability problems. We use a syntactic representation of a constraint propagation method for first-order logic to transform such an ∃∀SO satisfiability problem to an ∃SO(ID) satisfiability problem (second-order logic, extended with inductive definitions). The finite domain satisfiability problem for the latter language is in NP and can be handled by several existing solvers. Inductive definitions are a powerful knowledge representation tool, and this motivates us to also approximate ∃∀SO(ID) problems. In order to do this, we first show how to perform propagation on such inductive definitions. Next, we use this to approximate ∃∀SO(ID) satisfiability problems. All this provides a general theoretical framework for a number of approximative methods in the literature. Moreover, we also show how we can use this framework for solving practically useful problems, such as conformant planning, in an effective way.
S.A. Mirroshandel and G. Ghassem-Sani (2012)
"Towards Unsupervised Learning of Temporal Relations between Events",
Volume 45, pages 125-163
For quick access go to
Abstract:
Automatic extraction of temporal relations between event pairs is an important task for several natural language processing applications such as Question Answering, Information Extraction, and Summarization. Since most existing methods are supervised and require large corpora, which for many languages do not exist, we have concentrated our efforts on reducing the need for annotated data as much as possible. This paper presents two different algorithms towards this goal. The first algorithm is a weakly supervised machine learning approach for classification of temporal relations between events. In the first stage, the algorithm learns a general classifier from an annotated corpus. Then, inspired by the hypothesis of "one type of temporal relation per discourse", it extracts useful information from a cluster of topically related documents. We show that by combining the global information of such a cluster with local decisions of a general classifier, a bootstrapping cross-document classifier can be built to extract temporal relations between events. Our experiments show that without any additional annotated data, the accuracy of the proposed algorithm is higher than that of several previous successful systems. The second proposed method for temporal relation extraction is based on the expectation maximization (EM) algorithm. Within EM, we used different techniques such as a greedy best-first search and integer linear programming for temporal inconsistency removal. We think that the experimental results of our EM-based algorithm, as a first step toward a fully unsupervised temporal relation extraction method, are encouraging.
----------------------------------------------------------------
From jair-ed at isi.edu Tue Oct 30 17:04:49 2012
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Tue, 30 Oct 2012 19:04:49 -0500
Subject: [Jairsubscribers] 6 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been published in JAIR in October, and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
T. Voice, M. Polukarov and N. R. Jennings (2012)
"Coalition Structure Generation over Graphs",
Volume 45, pages 165-196
For quick access go to
Abstract:
We give an analysis of the computational complexity of coalition structure generation over graphs. Given an undirected graph G = (N,E) and a valuation function v : P(N) → R over the subsets of nodes, the problem is to find a partition of N into connected subsets that maximises the sum of the components' values. This problem is generally NP-complete; in particular, it is hard for a defined class of valuation functions which are independent of disconnected members -- that is, two nodes have no effect on each other's marginal contribution to their vertex separator. Nonetheless, for all such functions we provide bounds on the complexity of coalition structure generation over general and minor-free graphs. Our proof is constructive and yields algorithms for solving corresponding instances of the problem. Furthermore, we derive linear time bounds for graphs of bounded treewidth. However, as we show, the problem remains NP-complete for planar graphs, and hence, for any K_k minor-free graphs where k ≥ 5. Moreover, a 3-SAT problem with m clauses can be represented by a coalition structure generation problem over a planar graph with O(m^2) nodes. Importantly, our hardness result holds for a particular subclass of valuation functions, termed edge sum, where the value of each subset of nodes is simply determined by the sum of given weights of the edges in the induced subgraph.
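The edge-sum valuation mentioned at the end of the abstract can be sketched directly (an illustrative toy; the graph and weights are hypothetical, and connectivity of coalitions, which the problem requires, is not checked here):

```python
def edge_sum_value(partition, edge_weights):
    """Value of a coalition structure under an edge-sum valuation: each
    coalition is worth the summed weight of the graph edges whose two
    endpoints both belong to it (its induced subgraph)."""
    total = 0
    for coalition in partition:
        members = set(coalition)
        total += sum(w for (u, v), w in edge_weights.items()
                     if u in members and v in members)
    return total

# Hypothetical weighted graph on four nodes: a negative-weight edge
# (2, 3) makes splitting the grand coalition profitable.
edges = {(1, 2): 3, (2, 3): -1, (3, 4): 2}
```

Here the partition {{1,2},{3,4}} is worth 5, beating the grand coalition's 4, which shows why the search over connected partitions is non-trivial even on small graphs.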
B. Cuenca Grau and B. Motik (2012)
"Reasoning over Ontologies with Hidden Content: The Import-by-Query Approach",
Volume 45, pages 197-255
For quick access go to
Abstract:
There is currently a growing interest in techniques for hiding parts of the signature of an ontology Kh that is being reused by another ontology Kv. Towards this goal, in this paper we propose the import-by-query framework, which makes the content of Kh accessible through a limited query interface. If Kv reuses the symbols from Kh in a certain restricted way, one can reason over Kv U Kh by accessing only Kv and the query interface. We map out the landscape of the import-by-query problem. In particular, we outline the limitations of our framework and prove that certain restrictions on the expressivity of Kh and the way in which Kv reuses symbols from Kh are strictly necessary to enable reasoning in our setting. We also identify cases in which reasoning is possible and we present suitable import-by-query reasoning algorithms.
R. Hoshino and K. Kawarabayashi (2012)
"Generating Approximate Solutions to the TTP using a Linear Distance Relaxation",
Volume 45, pages 257-286
For quick access go to
Abstract:
In some domestic professional sports leagues, the home stadiums are located in cities connected by a common train line running in one direction. For these instances, we can incorporate this geographical information to determine optimal or nearly-optimal solutions to the n-team Traveling Tournament Problem (TTP), an NP-hard sports scheduling problem whose solution is a double round-robin tournament schedule that minimizes the sum total of distances traveled by all n teams.
We introduce the Linear Distance Traveling Tournament Problem (LD-TTP), and solve it for n=4 and n=6, generating the complete set of possible solutions through elementary combinatorial techniques. For larger n, we propose a novel "expander construction" that generates an approximate solution to the LD-TTP. For n congruent to 4 modulo 6, we show that our expander construction produces a feasible double round-robin tournament schedule whose total distance is guaranteed to be no worse than 4/3 times the optimal solution, regardless of where the n teams are located. This 4/3-approximation for the LD-TTP is stronger than the currently best-known ratio of 5/3 + epsilon for the general TTP.
We conclude the paper by applying this linear distance relaxation to general (non-linear) n-team TTP instances, where we develop fast approximate solutions by simply "assuming" the n teams lie on a straight line and solving the modified problem. We show that this technique surprisingly generates the distance-optimal tournament on all benchmark sets on 6 teams, as well as close-to-optimal schedules for larger n, even when the teams are located around a circle or positioned in three-dimensional space.
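The travel model underlying the TTP can be sketched in a few lines (an illustrative sketch only, not the paper's expander construction): each team starts at home, moves venue to venue between rounds, and returns home after the final round.

```python
def total_travel(positions, schedule):
    """Total distance travelled under a double round-robin schedule.

    positions: home-venue coordinate of each team (e.g. on a linear train
    line, as in the LD-TTP setting).
    schedule: list of rounds; each round is a list of (home, away) pairs.
    """
    loc = list(positions)              # current location of each team
    dist = 0.0
    for rnd in schedule:
        for home, away in rnd:
            venue = positions[home]    # both teams play at the home venue
            for t in (home, away):
                dist += abs(loc[t] - venue)
                loc[t] = venue
    for t in range(len(positions)):    # everyone returns home at the end
        dist += abs(loc[t] - positions[t])
    return dist
```

For example, with two teams at coordinates 0 and 5 and the two-round schedule [[(0, 1)], [(1, 0)]], the total travel is 20: the visitor travels 5 to each game and the teams travel 5 and 0 respectively to get home. A hypothetical n=4 schedule would need six rounds, with each pair meeting once at each venue; checking the TTP's additional at-most-three-consecutive-games and no-repeater constraints is outside this sketch.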
P. Gutierrez and P. Meseguer (2012)
"Removing Redundant Messages in N-ary BnB-ADOPT",
Volume 45, pages 287-304
For quick access go to
Abstract:
This note considers how to modify BnB-ADOPT, a well-known algorithm for optimally solving distributed constraint optimization problems, with a double aim: (i) to avoid sending most of the redundant messages and (ii) to handle cost functions of any arity. Some of the messages exchanged by BnB-ADOPT turn out to be redundant. Removing most of them substantially increases communication efficiency: in most cases the number of exchanged messages drops by a factor of at least three (with the other measures almost unchanged), while termination and optimality are maintained. On the other hand, handling n-ary cost functions was addressed in the original work, but the presence of thresholds makes their practical usage more complex. Both issues, removing most of the redundant messages and efficiently handling n-ary cost functions, can be combined, producing the new version BnB-ADOPT+. Experimentally, we show the benefits of this version over the original one.
A. M. Rush and M. J. Collins (2012)
"A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing",
Volume 45, pages 305-362
For quick access go to
Abstract:
Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the technique. We describe example algorithms, describe formal guarantees for the method, and describe practical issues in implementing the algorithms. While our examples are predominantly drawn from the NLP literature, the material should be of general relevance to inference problems in machine learning. A central theme of this tutorial is that Lagrangian relaxation is naturally applied in conjunction with a broad class of combinatorial algorithms, allowing inference in models that go significantly beyond previous work on Lagrangian relaxation for inference in graphical models.
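The core idea, two tractable subproblems coupled by Lagrange multipliers and coordinated by subgradient updates, can be shown on a toy problem (an illustrative reduction of the tutorial's setting, not code from it): maximise f[y] + g[z] subject to the agreement constraint y = z.

```python
def dual_decompose(f, g, steps=50, rate=1.0):
    """Toy dual decomposition: maximise f[y] + g[z] subject to y == z.

    f and g are score lists over the same label set. The coupled problem
    splits into two independent argmaxes tied together by Lagrange
    multipliers u, updated by subgradient steps until the parts agree:
        y_k = argmax_y f[y] + u[y]      z_k = argmax_z g[z] - u[z]
    """
    u = [0.0] * len(f)
    for k in range(steps):
        y = max(range(len(f)), key=lambda i: f[i] + u[i])
        z = max(range(len(g)), key=lambda i: g[i] - u[i])
        if y == z:             # agreement: the common label is provably optimal
            return y
        step = rate / (k + 1)  # decreasing step size (standard subgradient rule)
        u[y] -= step           # make y less attractive to the first subproblem
        u[z] += step           # make z less attractive to the second
    return None                # no agreement within the iteration budget
```

The key formal guarantee illustrated here is the one the tutorial develops in general: if the subproblems ever agree, the agreed solution is optimal for the original constrained problem, because the dual value then matches the value of a feasible primal solution.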
R. A. Rossi, L. K. McDowell, D. W. Aha and J. Neville (2012)
"Transforming Graph Data for Statistical Relational Learning",
Volume 45, pages 363-441
For quick access go to
Abstract:
Relational data representations have become an increasingly important topic due to the recent proliferation of network datasets (e.g., social, biological, information networks) and a corresponding increase in the application of Statistical Relational Learning (SRL) algorithms to these domains. In this article, we examine and categorize techniques for transforming graph-based relational data to improve SRL algorithms. In particular, appropriate transformations of the nodes, links, and/or features of the data can dramatically affect the capabilities and results of SRL algorithms. We introduce an intuitive taxonomy for data representation transformations in relational domains that incorporates link transformation and node transformation as symmetric representation tasks. More specifically, the transformation tasks for both nodes and links include (i) predicting their existence, (ii) predicting their label or type, (iii) estimating their weight or importance, and (iv) systematically constructing their relevant features. We motivate our taxonomy through detailed examples and use it to survey competing approaches for each of these tasks. We also discuss general conditions for transforming links, nodes, and features. Finally, we highlight challenges that remain to be addressed.
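As a concrete instance of task (i), link-existence prediction, a common-neighbours count is a simple baseline that ranks candidate links between currently unlinked nodes (an illustrative sketch, not any specific method from the survey):

```python
from collections import defaultdict

def common_neighbors_scores(edges):
    """Score every non-edge (u, v) by |N(u) & N(v)|, a standard baseline
    for predicting which links are likely to exist in the graph."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    nodes = sorted(nbrs)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in nbrs[u]:                     # only unlinked pairs
                scores[(u, v)] = len(nbrs[u] & nbrs[v])
    # highest-scoring candidate links first
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

On a 4-cycle a-b-c-d plus an edge a-e, the diagonals (a, c) and (b, d) share two neighbours each and rank first, which matches the intuition that transformations adding such high-scoring links can enrich the representation seen by an SRL algorithm.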
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Fri Nov 30 10:40:21 2012
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Fri, 30 Nov 2012 12:40:21 -0600
Subject: [Jairsubscribers] 2 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
I. Abio, R. Nieuwenhuis, A. Oliveras, E. Rodriguez-Carbonell and V. Mayer-Eichberger (2012)
"A New Look at BDDs for Pseudo-Boolean Constraints",
Volume 45, pages 443-480
For quick access go to
Abstract:
Pseudo-Boolean constraints are omnipresent in practical applications, and thus a significant effort has been devoted to the development of good SAT encoding techniques for them. Some of these encodings first construct a Binary Decision Diagram (BDD) for the constraint, and then encode the BDD into a propositional formula. These BDD-based approaches have some important advantages, such as not being dependent on the size of the coefficients, or being able to share the same BDD for representing many constraints.
We first focus on the size of the resulting BDDs, which was considered to be an open problem in our research community. We report on previous work where it was proved that there are Pseudo-Boolean constraints for which no polynomial BDD exists. We also give an alternative and simpler proof assuming that NP is different from Co-NP. More interestingly, here we also show how to overcome the possible exponential blowup of BDDs by coefficient decomposition. This allows us to give the first polynomial generalized arc-consistent ROBDD-based encoding for Pseudo-Boolean constraints.
Finally, we focus on practical issues: we show how to efficiently construct such ROBDDs, how to encode them into SAT with only 2 clauses per node, and present experimental results that confirm that our approach is competitive with other encodings and state-of-the-art Pseudo-Boolean solvers.
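A dynamic-programming sketch of such a construction (illustrative only: it assumes positive coefficients and memoises on raw slack with child-merging, whereas the paper's construction additionally merges states interval-wise) builds a reduced decision diagram for a constraint of the form sum(c_i * x_i) <= bound:

```python
def build_bdd(coeffs, bound):
    """Decision diagram for the Pseudo-Boolean constraint sum(c_i*x_i) <= bound.

    Assumes positive coefficients. States are memoised on (level, slack);
    a node whose two children coincide is collapsed, so identical
    sub-functions share structure. Terminals are "T" (satisfied) and
    "F" (violated); internal nodes are tuples (level, lo_child, hi_child).
    """
    tail = [0] * (len(coeffs) + 1)          # tail[i] = sum of coeffs[i:]
    for i in range(len(coeffs) - 1, -1, -1):
        tail[i] = tail[i + 1] + coeffs[i]

    memo = {}

    def rec(i, slack):
        if slack < 0:
            return "F"                      # bound already exceeded
        if slack >= tail[i]:
            return "T"                      # satisfied however x[i:] are set
        if (i, slack) not in memo:
            lo = rec(i + 1, slack)               # branch x[i] = 0
            hi = rec(i + 1, slack - coeffs[i])   # branch x[i] = 1
            memo[(i, slack)] = (i, lo, hi) if lo != hi else lo
        return memo[(i, slack)]

    return rec(0, bound)
```

For 2*x0 + 3*x1 + 4*x2 <= 5, the diagram collapses the x1 test on the x0 = 1 branch (the constraint there reduces to x2 = 0 regardless of x1), which is the kind of sharing a SAT encoding with a couple of clauses per node then exploits.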
U. Endriss, U. Grandi and D. Porello (2012)
"Complexity of Judgment Aggregation",
Volume 45, pages 481-514
For quick access go to
Abstract:
We analyse the computational complexity of three problems in judgment aggregation: (1) computing a collective judgment from a profile of individual judgments (the winner determination problem); (2) deciding whether a given agent can influence the outcome of a judgment aggregation procedure in her favour by reporting insincere judgments (the strategic manipulation problem); and (3) deciding whether a given judgment aggregation scenario is guaranteed to result in a logically consistent outcome, independently from what the judgments supplied by the individuals are (the problem of the safety of the agenda). We provide results both for specific aggregation procedures (the quota rules, the premise-based procedure, and a distance-based procedure) and for classes of aggregation procedures characterised in terms of fundamental axioms.
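The classic doctrinal paradox illustrates why problem (3) is non-trivial: propositionwise majority can yield an inconsistent collective judgment even when every individual is consistent, while the premise-based procedure avoids this by construction. A small sketch (illustrative only; a conjunctive agenda with premises and one conclusion is hardcoded):

```python
def majority(profile):
    """Propositionwise majority over a profile of individual judgment sets.

    profile: list of dicts mapping each agenda proposition to True/False.
    """
    n = len(profile)
    return {p: 2 * sum(j[p] for j in profile) > n for p in profile[0]}

def premise_based(profile, premises, conclusion):
    """Premise-based procedure for a conjunctive agenda: majority-vote the
    premises only, then derive the conclusion logically from that vote."""
    maj = majority(profile)
    out = {p: maj[p] for p in premises}
    out[conclusion] = all(out[p] for p in premises)
    return out
```

With three judges over the agenda {p, q, p&q} judging (T,T,T), (T,F,F) and (F,T,F), majority accepts p and q but rejects p&q, a logically inconsistent outcome; the premise-based procedure instead accepts all three.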
----------------------------------------------------------------