From jair-ed at isi.edu Fri Jan 2 06:52:21 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Fri, 02 Jan 2015 08:52:21 -0600
Subject: [Jairsubscribers] 6 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
G. Stefanoni, B. Motik, M. Kroetzsch and S. Rudolph (2014)
"The Complexity of Answering Conjunctive and Navigational Queries over OWL 2 EL Knowledge Bases",
Volume 51, pages 645-705
For quick access go to
Abstract:
OWL 2 EL is a popular ontology language that supports role inclusions---that is, axioms that capture compositional properties of roles. Role inclusions closely correspond to context-free grammars, a correspondence that was used to show that answering conjunctive queries (CQs) over OWL 2 EL knowledge bases with unrestricted role inclusions is undecidable. However, OWL 2 EL inherits from OWL 2 DL the syntactic regularity restriction on role inclusions, which ensures that role chains implying a particular role can be described using a finite automaton (FA). This is sufficient to ensure decidability of CQ answering; however, the FAs can be worst-case exponential in size, so the known approaches do not provide a tight upper complexity bound.
In this paper, we solve this open problem and show that answering CQs over OWL 2 EL knowledge bases is PSPACE-complete in combined complexity (i.e., the complexity measured in the total size of the input). To this end, we use a novel encoding of regular role inclusions using bounded-stack pushdown automata---that is, FAs extended with a stack of bounded size. Apart from theoretical interest, our encoding can be used in practical tableau algorithms to avoid the exponential blowup due to role inclusions. In addition, we sharpen the lower complexity bound and show that the problem is PSPACE-hard even if we consider only role inclusions as part of the input (i.e., the query and all other parts of the knowledge base are fixed). Finally, we turn our attention to navigational queries over OWL 2 EL knowledge bases, and we show that answering positive, converse-free conjunctive graph XPath queries is PSPACE-complete as well; this is interesting since allowing the converse operator in queries is known to make the problem EXPTIME-hard. Thus, in this paper we present several important contributions to the landscape of the complexity of answering expressive queries over description logic knowledge bases.
O. Cepek, S. Gursky and P. Kucera (2014)
"On Minimum Representations of Matched Formulas",
Volume 51, pages 707-723
For quick access go to
Abstract:
A Boolean formula in conjunctive normal form (CNF) is called matched if the system of sets of variables which appear in individual clauses has a system of distinct representatives. Each matched CNF is trivially satisfiable (each clause can be satisfied by its representative variable). Another property, which is easy to see, is that the class of matched CNFs is not closed under partial assignment of truth values to variables. This latter property leads to the fact (proved here) that, given two matched CNFs, it is coNP-complete to decide whether they are logically equivalent. The construction in this proof leads to another result: a much shorter and simpler proof of the fact that the Boolean minimization problem for matched CNFs is a complete problem for the second level of the polynomial hierarchy. The main result of this paper deals with the structure of clause-minimum CNFs. We prove here that if a Boolean function f admits a representation by a matched CNF, then every clause-minimum CNF representation of f is matched.
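Whether a CNF is matched reduces to finding a matching that assigns each clause a distinct representative variable it contains. The following sketch (our own illustration, not taken from the paper) checks this with a standard augmenting-path bipartite matching:

```python
# Check whether a CNF is "matched": each clause must receive a distinct
# representative variable occurring in it (a system of distinct
# representatives), found here via Kuhn's augmenting-path algorithm.

def is_matched(cnf):
    """cnf: list of clauses, each a set of variable names (signs ignored)."""
    match = {}  # variable -> index of the clause it currently represents

    def try_assign(i, seen):
        for v in cnf[i]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or the clause holding v can be re-matched elsewhere
            if v not in match or try_assign(match[v], seen):
                match[v] = i
                return True
        return False

    return all(try_assign(i, set()) for i in range(len(cnf)))
```

For example, the clauses {x}, {x, y}, {y, z} can take x, y, z as distinct representatives, so that CNF is matched; three clauses over the single variable x cannot.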
C. R. Shelton and G. Ciardo (2014)
"Tutorial on Structured Continuous-Time Markov Processes",
Volume 51, pages 725-778
For quick access go to
Abstract:
A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing "flat" or unstructured Markov processes and then move to structured Markov processes (those arising from state spaces consisting of assignments to variables) including Kronecker, decision-diagram, and continuous-time Bayesian network representations. We provide the first connection between decision-diagrams and continuous-time Bayesian networks.
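As a concrete illustration of "flat" transient analysis (our own sketch, not code from the tutorial), uniformization converts the rate matrix Q of a CTMP into a discrete-time chain P = I + Q/q and sums Poisson-weighted powers of P:

```python
# Transient distribution of a flat CTMC via uniformization, assuming the
# generator Q has rows summing to zero and at least one nonzero rate.
import math

def transient(Q, p0, t, tol=1e-12):
    """Distribution at time t, given generator Q and initial distribution p0."""
    n = len(Q)
    q = 1.1 * max(-Q[i][i] for i in range(n))        # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]                           # DTMC: P = I + Q/q
    term = list(p0)                                   # running vector p0 * P^k
    out = [0.0] * n
    weight = math.exp(-q * t)                         # Poisson(q*t) weight, k = 0
    k = 0
    while (weight > tol or k < q * t) and k < 100000:
        for j in range(n):
            out[j] += weight * term[j]
        term = [sum(term[i] * P[i][j] for i in range(n)) for j in range(n)]
        k += 1
        weight *= q * t / k                           # next Poisson weight
    return out
```

For a two-state on/off process with rates 1 and 2, the computed distribution converges to the stationary distribution (2/3, 1/3) for large t.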
P. Kissmann and J. Hoffmann (2014)
"BDD Ordering Heuristics for Classical Planning",
Volume 51, pages 779-804
For quick access go to
Abstract:
Symbolic search using binary decision diagrams (BDDs) can often save large amounts of memory due to its concise representation of state sets. A decisive factor for this method's success is the chosen variable ordering. Generally speaking, it is plausible that dependent variables should be brought close together in order to reduce BDD sizes. In planning, variable dependencies are typically captured by means of causal graphs, and in preceding work these were taken as the basis for finding BDD variable orderings. Starting from the observation that the two concepts of "dependency" are actually quite different, we introduce a framework for assessing the strength of variable ordering heuristics in sub-classes of planning. It turns out that, even for extremely simple planning tasks, causal graph based variable orders may be exponentially worse than optimal.
Experimental results on a wide range of variable ordering variants corroborate our theoretical findings. Furthermore, we show that dynamic reordering is much more effective at reducing BDD size, but it is not cost-effective due to a prohibitive runtime overhead. We exhibit the potential of middle-ground techniques, running dynamic reordering until simple stopping criteria hold.
D. Zilli, O. Parson, G. V. Merrett and A. Rogers (2014)
"A Hidden Markov Model-Based Acoustic Cicada Detector for Crowdsourced Smartphone Biodiversity Monitoring",
Volume 51, pages 805-827
For quick access go to
Abstract:
In recent years, the field of computational sustainability has striven to apply artificial intelligence techniques to solve ecological and environmental problems. In ecology, a key issue for the safeguarding of our planet is the monitoring of biodiversity. Automated acoustic recognition of species aims to provide a cost-effective method for biodiversity monitoring. This is particularly appealing for detecting endangered animals with a distinctive call, such as the New Forest cicada. To this end, we pursue a crowdsourcing approach, whereby the millions of visitors to the New Forest, where this insect was historically found, will help to monitor its presence by means of a smartphone app that can detect its mating call. Existing research in the field of acoustic insect detection has typically focused upon the classification of recordings collected from fixed field microphones. Such approaches segment a lengthy audio recording into individual segments of insect activity, which are independently classified using cepstral coefficients extracted from the recording as features. This paper reports on a contrasting approach, whereby we use crowdsourcing to collect recordings via a smartphone app, and present an immediate feedback to the users as to whether an insect has been found. Our classification approach does not remove silent parts of the recording via segmentation, but instead uses the temporal patterns throughout each recording to classify the insects present. We show that our approach can successfully discriminate between the call of the New Forest cicada and similar insects found in the New Forest, and is robust to common types of environment noise. A large scale trial deployment of our smartphone app collected over 6000 reports of insect activity from over 1000 users. 
Although the cicada has not yet been rediscovered in the New Forest, the effectiveness of this approach was confirmed both for the detection algorithm, which successfully identified the same cicada species through the app in countries where it is still present, and for the crowdsourcing methodology, which collected a vast number of recordings and involved thousands of contributors.
B. Bosansky, C. Kiekintveld, V. Lisy and M. Pechoucek (2014)
"An Exact Double-Oracle Algorithm for Zero-Sum Extensive-Form Games with Imperfect Information",
Volume 51, pages 829-866
For quick access go to
Abstract:
Developing scalable solution algorithms is one of the central problems in computational game theory. We present an iterative algorithm for computing an exact Nash equilibrium for two-player zero-sum extensive-form games with imperfect information. Our approach combines two key elements: (1) the compact sequence-form representation of extensive-form games and (2) the algorithmic framework of double-oracle methods. The main idea of our algorithm is to restrict the game by allowing the players to play only selected sequences of available actions. After solving the restricted game, new sequences are added by finding best responses to the current solution using fast algorithms.
We experimentally evaluate our algorithm on a set of games inspired by patrolling scenarios, board games, and card games. The results show significant runtime improvements in games admitting an equilibrium with small support, and substantial improvement in memory use even on games with large support. The improvement in memory use is particularly important because it allows our algorithm to solve much larger game instances than existing linear programming methods.
Our main contributions include (1) a generic sequence-form double-oracle algorithm for solving zero-sum extensive-form games; (2) fast methods for maintaining a valid restricted game model when adding new sequences; (3) a search algorithm and pruning methods for computing best-response sequences; (4) theoretical guarantees about the convergence of the algorithm to a Nash equilibrium; (5) experimental analysis of our algorithm on several games, including an approximate version of the algorithm.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Sun Feb 1 23:00:14 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Mon, 02 Feb 2015 01:00:14 -0600
Subject: [Jairsubscribers] 4 new articles published by JAIR
Message-ID:
----------------------------------------------------------------
I. New JAIR Articles
Gert De Cooman, Jasper De Bock and Márcio Alves Diniz (2015)
"Coherent Predictive Inference under Exchangeability with Imprecise Probabilities",
Volume 52, pages 1-95
For quick access go to
Abstract:
Coherent reasoning under uncertainty can be represented in a very general manner by coherent sets of desirable gambles. In a context that does not allow for indecision, this leads to an approach that is mathematically equivalent to working with coherent conditional probabilities. If we do allow for indecision, this leads to a more general foundation for coherent (imprecise-)probabilistic inference. In this framework, and for a given finite category set, coherent predictive inference under exchangeability can be represented using Bernstein coherent cones of multivariate polynomials on the simplex generated by this category set. This is a powerful generalisation of de Finetti's Representation Theorem allowing for both imprecision and indecision.
We define an inference system as a map that associates a Bernstein coherent cone of polynomials with every finite category set. Many inference principles encountered in the literature can then be interpreted, and represented mathematically, as restrictions on such maps. We discuss, as particular examples, two important inference principles: representation insensitivity---a strengthened version of Walley's representation invariance---and specificity. We show that there is an infinity of inference systems that satisfy these two principles, amongst which we discuss in particular the skeptically cautious inference system, the inference systems corresponding to (a modified version of) Walley and Bernard's Imprecise Dirichlet Multinomial Models (IDMM), the skeptical IDMM inference systems, and the Haldane inference system. We also prove that the latter produces the same posterior inferences as would be obtained using Haldane's improper prior, implying that there is an infinity of proper priors that produce the same coherent posterior inferences as Haldane's improper one. Finally, we impose an additional inference principle that allows us to characterise uniquely the immediate predictions for the IDMM inference systems.
Carmel Domshlak and Vitaly Mirkis (2015)
"Deterministic Oversubscription Planning as Heuristic Search: Abstractions and Reformulations",
Volume 52, pages 97-169
For quick access go to
Abstract:
While in classical planning the objective is to achieve one of the equally attractive goal states at as low total action cost as possible, the objective in deterministic oversubscription planning (OSP) is to achieve an as valuable as possible subset of goals within a fixed allowance of the total action cost. Although numerous applications in various fields share the latter objective, no substantial algorithmic advances have been made in deterministic OSP. Tracing the key sources of progress in classical planning, we identify a severe lack of effective domain-independent approximations for OSP.
With our focus here on optimal planning, our goal is to bridge this gap. Two classes of approximation techniques have been found especially useful in the context of optimal classical planning: those based on state-space abstractions and those based on logical landmarks for goal reachability. The question we study here is whether some similar-in-spirit, yet possibly mathematically different, approximation techniques can be developed for OSP. In the context of abstractions, we define the notion of additive abstractions for OSP, study the complexity of deriving effective abstractions from a rich space of hypotheses, and reveal some substantial, empirically relevant islands of tractability. In the context of landmarks, we show how standard goal-reachability landmarks of certain classical planning tasks can be compiled into the OSP task of interest, resulting in an equivalent OSP task with a lower cost allowance, and thus with a smaller search space. Our empirical evaluation confirms the effectiveness of the proposed techniques, and opens a wide gate for further developments in oversubscription planning.
Yair Wiener and Ran El-Yaniv (2015)
"Agnostic Pointwise-Competitive Selective Classification",
Volume 52, pages 171-201
For quick access go to
Abstract:
A pointwise-competitive classifier from a class F is required to classify identically to the best classifier in hindsight from F. For noisy, agnostic settings we present a strategy for learning pointwise-competitive classifiers from a finite training sample, provided that the classifier can abstain from prediction in a certain region of its choice. For some interesting hypothesis classes and families of distributions, the measure of this rejected region is shown to diminish at a fast rate, with high probability. Exact implementation of the proposed learning strategy depends on an ERM oracle that can be hard to compute in the agnostic case. We thus consider a heuristic approximation procedure based on SVMs, and show empirically that this algorithm consistently outperforms a traditional rejection mechanism based on distance from the decision boundary.
Ronald de Haan, Iyad Kanj and Stefan Szeider (2015)
"On the Subexponential-Time Complexity of CSP",
Volume 52, pages 203-234
For quick access go to
Abstract:
Not all NP-complete problems share the same practical hardness with respect to exact computation. Whereas some NP-complete problems are amenable to efficient computational methods, others are yet to show any such sign. It becomes a major challenge to develop a theoretical framework that is more fine-grained than the theory of NP-completeness, and that can explain the distinction between the exact complexities of various NP-complete problems. This distinction is highly relevant for constraint satisfaction problems under natural restrictions, where various shades of hardness can be observed in practice.
Acknowledging the NP-hardness of such problems, one has to look beyond polynomial time computation. The theory of subexponential-time complexity provides such a framework, and has been enjoying increasing popularity in complexity theory. An instance of the constraint satisfaction problem with n variables over a domain of d values can be solved by brute-force in d^{n} steps (omitting a polynomial factor). In this paper we study the existence of subexponential-time algorithms, that is, algorithms running in d^{o(n)} steps, for various natural restrictions of the constraint satisfaction problem. We consider both the constraint satisfaction problem in which all the constraints are given extensionally as tables, and that in which all the constraints are given intensionally in the form of global constraints. We provide tight characterizations of the subexponential-time complexity of the aforementioned problems with respect to several natural structural parameters, which allows us to draw a detailed landscape of the subexponential-time complexity of the constraint satisfaction problem. Our analysis provides fundamental results indicating whether and when one can significantly improve on the brute-force search approach for solving the constraint satisfaction problem.
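The d^n brute-force baseline mentioned above is straightforward to state; the paper asks when it can be beaten by d^{o(n)} algorithms. A minimal sketch of that baseline (our own illustration, with extensionally given constraints encoded as predicates over variable scopes):

```python
# Brute-force CSP search: enumerate all d^n assignments (up to polynomial
# factors) and return the first one satisfying every constraint.
from itertools import product

def solve_csp(num_vars, domain, constraints):
    """constraints: list of (scope, predicate) pairs; scope indexes variables."""
    for assignment in product(domain, repeat=num_vars):   # d^n candidates
        if all(pred(*(assignment[v] for v in scope))
               for scope, pred in constraints):
            return assignment
    return None
```

For example, 3-coloring a triangle (three variables, domain {0, 1, 2}, pairwise disequality constraints) finds a solution after examining only a few of the 27 candidate assignments.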
----------------------------------------------------------------
From jair-ed at isi.edu Wed Apr 1 22:37:38 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Thu, 02 Apr 2015 00:37:38 -0500
Subject: [Jairsubscribers] 6 new articles published by JAIR
Message-ID:
----------------------------------------------------------------
I. New JAIR Articles
Broes De Cat, Marc Denecker, Maurice Bruynooghe and Peter Stuckey (2015)
"Lazy Model Expansion: Interleaving Grounding with Search",
Volume 52, pages 235-286
For quick access go to
Abstract:
Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach to bounded model generation for rich knowledge representation languages such as ASP and FO(.), and for CSP modeling languages such as Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory.
An important bottleneck is the blow-up of the size of the theory caused by the grounding phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. Experimental results illustrate the power and generality of this approach.
Paolo Liberatore (2015)
"Revision by History",
Volume 52, pages 287-329
For quick access go to
Abstract:
This article proposes a solution to the problem of obtaining plausibility information, which is necessary to perform belief revision: given a sequence of revisions, together with their results, derive a possible initial order that has generated them; this differs from the usual assumption of starting from an all-equal initial order and modifying it by a sequence of revisions. Four semantics for iterated revision are considered: natural, restrained, lexicographic and reinforcement. For each, a necessary and sufficient condition for the existence of an order generating a given history of revisions and results is proved. The complexity is proved to be coNP-complete in all cases but one (reinforcement revision with unbounded sequence length).
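For concreteness, natural revision (one of the four semantics listed) can be sketched over an explicit ranking of worlds. This is our own minimal formulation, assuming ranks are given as a dictionary with 0 as most plausible:

```python
# Natural revision: the minimally-ranked models of the new formula become
# the unique most-plausible worlds; the relative order of all other worlds
# is preserved by shifting their ranks up by one.

def natural_revision(rank, models_of_formula):
    """rank: dict world -> int (0 = most plausible); returns the revised rank."""
    best = min(rank[w] for w in models_of_formula)
    minimal = {w for w in models_of_formula if rank[w] == best}
    return {w: 0 if w in minimal else rank[w] + 1 for w in rank}
```

Revising the ranking {00: 0, 01: 1, 10: 1, 11: 2} by a formula whose models are {10, 11} promotes 10 (the cheapest model) to rank 0 and shifts everything else up.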
Shan Xue, Alan Fern and Daniel Sheldon (2015)
"Scheduling Conservation Designs for Maximum Flexibility via Network Cascade Optimization",
Volume 52, pages 331-360
For quick access go to
Abstract:
One approach to conserving endangered species is to purchase and protect a set of land parcels in a way that maximizes the expected future population spread. Unfortunately, an ideal set of parcels may have a cost that is beyond the immediate budget constraints and must thus be purchased incrementally. This raises the challenge of deciding how to schedule the parcel purchases in a way that maximizes the flexibility of budget usage while keeping the loss in population spread under control. In this paper, we introduce a formulation of this scheduling problem that does not rely on knowing the future budgets of an organization. In particular, we consider scheduling purchases in a way that achieves a population spread no less than desired but delays purchases as long as possible. Such schedules offer conservation planners maximum flexibility and use available budgets in the most efficient way. We develop the problem formally as a stochastic optimization problem over a network cascade model describing a commonly used model of population spread. Our solution approach is based on reducing the stochastic problem to a novel variant of the directed Steiner tree problem, which we call the set-weighted directed Steiner graph problem. We show that this problem is computationally hard, motivating the development of a primal-dual algorithm for the problem that computes both a feasible solution and a bound on the quality of an optimal solution. We evaluate the approach on both real and synthetic conservation data with a standard population spread model. The algorithm is shown to produce near optimal results and is much more scalable than more generic off-the-shelf optimizers. Finally, we evaluate a variant of the algorithm to explore the trade-offs between budget savings and population growth.
Been Kim, Caleb M. Chacha and Julie A. Shah (2015)
"Inferring Team Task Plans from Human Meetings: A Generative Modeling Approach with Logic-Based Prior",
Volume 52, pages 361-398
For quick access go to
Abstract:
We aim to reduce the burden of programming and deploying autonomous systems to work in concert with people in time-critical domains such as military field operations and disaster response. Deployment plans for these operations are frequently negotiated on-the-fly by teams of human planners. A human operator then translates the agreed-upon plan into machine instructions for the robots. We present an algorithm that reduces this translation burden by inferring the final plan from a processed form of the human team's planning conversation. Our hybrid approach combines probabilistic generative modeling with logical plan validation used to compute a highly structured prior over possible plans, enabling us to overcome the challenge of performing inference over a large solution space with only a small amount of noisy data from the team planning session. We validate the algorithm through human subject experiments and show that it is able to infer a human team's final plan with 86% accuracy on average. We also describe a robot demonstration in which two people plan and execute a first-response collaborative task with a PR2 robot. To the best of our knowledge, this is the first work to integrate a logical planning technique within a generative model to perform plan inference.
Diederik Marijn Roijers, Shimon Whiteson and Frans A. Oliehoek (2015)
"Computing Convex Coverage Sets for Faster Multi-objective Coordination",
Volume 52, pages 399-443
For quick access go to
Abstract:
In this article, we propose new algorithms for multi-objective coordination graphs (MO-CoGs). Key to the efficiency of these algorithms is that they compute a convex coverage set (CCS) instead of a Pareto coverage set (PCS). Not only is a CCS a sufficient solution set for a large class of problems, it also has important characteristics that facilitate more efficient solutions. We propose two main algorithms for computing a CCS in MO-CoGs. Convex multi-objective variable elimination (CMOVE) computes a CCS by performing a series of agent eliminations, which can be seen as solving a series of local multi-objective subproblems. Variable elimination linear support (VELS) iteratively identifies the single weight vector, w, that can lead to the maximal possible improvement on a partial CCS and calls variable elimination to solve a scalarized instance of the problem for w. VELS is faster than CMOVE for small and medium numbers of objectives and can compute an ε-approximate CCS in a fraction of the runtime. In addition, we propose variants of these methods that employ AND/OR tree search instead of variable elimination to achieve memory efficiency. We analyze the runtime and space complexities of these methods, prove their correctness, and compare them empirically against a naive baseline and an existing PCS method, both in terms of memory-usage and runtime. Our results show that, by focusing on the CCS, these methods achieve much better scalability in the number of agents than the current state of the art.
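For two objectives, CCS membership has a particularly simple characterization: a value vector belongs to the CCS iff some scalarizing weight w in [0, 1] makes it maximal, and each competing vector constrains w to a half-line. The sketch below (our own illustration, not CMOVE or VELS) decides membership by intersecting these intervals:

```python
# Convex coverage set (CCS) for 2-objective value vectors: keep v iff the
# scalarized value w*v[0] + (1-w)*v[1] is maximal for some w in [0, 1].

def ccs(vectors, eps=1e-12):
    result = []
    for v in vectors:
        lo, hi = 0.0, 1.0           # feasible interval for w
        ok = True
        for u in vectors:
            if u == v:
                continue
            a = (v[0] - u[0]) - (v[1] - u[1])   # coefficient of w
            b = u[1] - v[1]                     # constraint: a * w >= b
            if abs(a) < eps:
                if b > eps:                     # u beats v for every w
                    ok = False
                    break
            elif a > 0:
                lo = max(lo, b / a)
            else:
                hi = min(hi, b / a)
        if ok and lo <= hi + eps:
            result.append(v)
    return result
```

For instance, among the vectors (3, 0), (0, 3), (1, 1), (2, 2), only (1, 1) is excluded: for every weight, one of the other three yields a higher scalarized value.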
Marie-Catherine de Marneffe, Marta Recasens and Christopher Potts (2015)
"Modeling the Lifespan of Discourse Entities with Application to Coreference Resolution",
Volume 52, pages 445-475
For quick access go to
Abstract:
A discourse typically involves numerous entities, but few are mentioned more than once. Distinguishing those that die out after just one mention (singleton) from those that lead longer lives (coreferent) would dramatically simplify the hypothesis space for coreference resolution models, leading to increased performance. To realize these gains, we build a classifier for predicting the singleton/coreferent distinction. The model's feature representations synthesize linguistic insights about the factors affecting discourse entity lifespans (especially negation, modality, and attitude predication) with existing results about the benefits of "surface" (part-of-speech and n-gram-based) features for coreference resolution. The model is effective in its own right, and the feature representations help to identify the anchor phrases in bridging anaphora as well. Furthermore, incorporating the model into two very different state-of-the-art coreference resolution systems, one rule-based and the other learning-based, yields significant performance improvements.
----------------------------------------------------------------
From jair-ed at isi.edu Fri May 1 22:55:30 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Sat, 02 May 2015 00:55:30 -0500
Subject: [Jairsubscribers] 4 new articles published by JAIR
Message-ID:
----------------------------------------------------------------
I. New JAIR Articles
Athirai A. Irissappane and Jie Zhang (2015)
"A Case-Based Reasoning Framework to Choose Trust Models for Different E-Marketplace Environments",
Volume 52, pages 477-505
For quick access go to
Abstract:
The performance of trust models depends heavily on the characteristics of the environments in which they are applied. Thus, it becomes challenging to choose a suitable trust model for a given e-marketplace environment, especially when ground truth about the agent (buyer and seller) behavior is unknown (called an unknown environment). We propose a case-based reasoning framework to choose suitable trust models for unknown environments, based on the intuition that if a trust model performs well in one environment, it will do so in another similar environment. Firstly, we build a case base with a number of simulated environments (with known ground truth), along with the trust models most suitable for each of them. Given an unknown environment, case-based retrieval algorithms retrieve the most similar case(s), whose trust model is chosen as the most suitable model for the unknown environment. Evaluation results confirm the effectiveness of our framework in choosing suitable trust models for different e-marketplace environments.
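The retrieval step the abstract describes can be sketched as nearest-neighbor lookup over environment features; the feature encoding and model names below are hypothetical placeholders, not the paper's actual case representation:

```python
# Case-based retrieval sketch: each case pairs an environment feature
# vector with the trust model that performed best there; for an unknown
# environment, return the majority model among the k most similar cases.
import math

def retrieve(case_base, env, k=1):
    """case_base: list of (feature_vector, best_model_name) pairs."""
    ranked = sorted(case_base, key=lambda case: math.dist(case[0], env))
    votes = [model for _, model in ranked[:k]]   # k nearest cases
    return max(set(votes), key=votes.count)      # majority vote
```

With Euclidean distance over the (hypothetical) features, an unknown environment is simply assigned the model of whichever simulated environment it most resembles.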
Piotr Faliszewski, Edith Hemaspaandra and Lane A. Hemaspaandra (2015)
"Weighted Electoral Control",
Volume 52, pages 507-542
For quick access go to
Abstract:
Although manipulation and bribery have been extensively studied under weighted voting, there has been almost no work done on election control under weighted voting. This is unfortunate, since weighted voting appears in many important natural settings. In this paper, we study the complexity of controlling the outcome of weighted elections through adding and deleting voters. We obtain polynomial-time algorithms, NP-completeness results, and for many NP-complete cases, approximation algorithms. In particular, for scoring rules we completely characterize the complexity of weighted voter control. Our work shows that for quite a few important cases, either polynomial-time exact algorithms or polynomial-time approximation algorithms exist.
Minh Dao-Tran, Thomas Eiter, Michael Fink and Thomas Krennwallner (2015)
"Distributed Evaluation of Nonmonotonic Multi-context Systems",
Volume 52, pages 543-600
For quick access go to
Abstract:
Multi-context Systems (MCSs) are a formalism for systems consisting of knowledge bases (possibly heterogeneous and non-monotonic) that are interlinked via bridge rules, where the global system semantics emerges from the local semantics of the knowledge bases (also called "contexts") in an equilibrium. While MCSs and related formalisms are inherently targeted for distributed settings, no truly distributed algorithms for their evaluation were available. We address this shortcoming and present a suite of such algorithms which includes a basic algorithm DMCS, an advanced version DMCSOPT that exploits topology-based optimizations, and a streaming algorithm DMCS-STREAMING that computes equilibria in packages of bounded size. The algorithms behave quite differently in several respects, as experienced in a thorough experimental evaluation of a system prototype. From the experimental results, we derive a guideline for choosing the appropriate algorithm and running mode in particular situations, determined by the parameter settings.
Haonan Yu, N. Siddharth, Andrei Barbu and Jeffrey Mark Siskind (2015)
"A Compositional Framework for Grounding Language Inference, Generation, and Acquisition in Video",
Volume 52, pages 601-713
For quick access go to
Abstract:
We present an approach to simultaneously reasoning about a video clip and an entire natural-language sentence. The compositional nature of language is exploited to construct models which represent the meanings of entire sentences composed out of the meanings of the words in those sentences mediated by a grammar that encodes the predicate-argument relations. We demonstrate that these models faithfully represent the meanings of sentences and are sensitive to how the roles played by participants (nouns), their characteristics (adjectives), the actions performed (verbs), the manner of such actions (adverbs), and changing spatial relations between participants (prepositions) affect the meaning of a sentence and how it is grounded in video. We exploit this methodology in three ways. In the first, a video clip along with a sentence are taken as input and the participants in the event described by the sentence are highlighted, even when the clip depicts multiple similar simultaneous events. In the second, a video clip is taken as input without a sentence and a sentence is generated that describes an event in that clip. In the third, a corpus of video clips is paired with sentences which describe some of the events in those clips and the meanings of the words in those sentences are learned. We learn these meanings without needing to specify which attribute of the video clips each word in a given sentence refers to. The learned meaning representations are shown to be intelligible to humans.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Tue Jun 2 08:53:29 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Tue, 02 Jun 2015 10:53:29 -0500
Subject: [Jairsubscribers] 3 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
Pannaga Shivaswamy and Thorsten Joachims (2015)
"Coactive Learning",
Volume 53, pages 1-40
For quick access go to
Abstract:
We propose Coactive Learning as a model of interaction between a learning system and a human user, where both have the common goal of providing results of maximum utility to the user. Interactions in the Coactive Learning model take the following form: at each step, the system (e.g. search engine) receives a context (e.g. query) and predicts an object (e.g. ranking); the user responds by correcting the system if necessary, providing a slightly improved but not necessarily optimal object as feedback. We argue that such preference feedback can be inferred in large quantity from observable user behavior (e.g., clicks in web search), unlike the optimal feedback required in the expert model or the cardinal valuations required for bandit learning. Despite the relaxed requirements for the feedback, we show that it is possible to adapt many existing online learning algorithms to the coactive framework. In particular, we provide algorithms that achieve square root regret in terms of cardinal utility, even though the learning algorithm never observes cardinal utility values directly. We also provide an algorithm with logarithmic regret in the case of strongly convex loss functions. An extensive empirical study demonstrates the applicability of our model and algorithms on a movie recommendation task, as well as ranking for web search.
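The interaction loop described above can be sketched with a preference-perceptron-style update: predict the highest-utility object under a linear model, then shift the weights toward the user's improved feedback object. This is a simplified reading of the coactive setting; the feature encoding and candidate set below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def predict(w, candidates):
    """Return the candidate (feature vector) with highest linear utility."""
    feats = np.array(candidates)
    return candidates[int(np.argmax(feats @ w))]

def update(w, x_feedback, x_predicted):
    """Perceptron-style step: move weights toward the user-improved object."""
    return w + np.array(x_feedback) - np.array(x_predicted)

w = np.zeros(2)
candidates = [[1.0, 0.0], [0.0, 1.0]]
pred = predict(w, candidates)    # initial guess (ties broken by order)
w = update(w, [0.0, 1.0], pred)  # user returns a slightly better object
print(predict(w, candidates))    # -> [0.0, 1.0]
```

Note the feedback need not be the optimal object, only an improvement over the prediction; that relaxation is what makes the regret analysis in the paper interesting.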
Krishna S. R. Dubba, Anthony G. Cohn, David C. Hogg, Mehul Bhatt and Frank Dylla (2015)
"Learning Relational Event Models from Video",
Volume 53, pages 41-90
For quick access go to
Abstract:
Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content-based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) holds promise, but such techniques have not been successfully applied to the very large datasets that result from video data. In this paper, we present a novel framework REMIND (Relational Event Model INDuction) for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning-from-interpretations setting and a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent overgeneralization. Furthermore, we present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events from previously unseen videos. We also present an extension to the framework that integrates an abduction step, improving learning performance when there is noise in the input data. The experimental results on several hours of video data from two challenging real-world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suited to real-world scenarios.
Davide Grossi, Emiliano Lorini and Francois Schwarzentruber (2015)
"The Ceteris Paribus Structure of Logics of Game Forms",
Volume 53, pages 91-126
For quick access go to
Abstract:
The article introduces a ceteris paribus modal logic, called CP, interpreted on the equivalence classes induced by finite sets of propositional atoms. This logic is studied and then used to embed three logics of strategic interaction, namely atemporal STIT, the coalition logic of propositional control (CL−PC) and the starless fragment of the dynamic logic of propositional assignments (DL−PA). The embeddings highlight a common ceteris paribus structure underpinning the key operators of all these apparently very different logics and show, we argue, remarkable similarities behind some of the most influential formalisms for reasoning about strategic interaction.
----------------------------------------------------------------
From jair-ed at isi.edu Wed Jul 1 21:38:24 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Wed, 01 Jul 2015 23:38:24 -0500
Subject: [Jairsubscribers] 3 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
Marijn Heule, Matti Järvisalo, Florian Lonsing, Martina Seidl and Armin Biere (2015)
"Clause Elimination for SAT and QSAT",
Volume 53, pages 127-168
For quick access go to
Abstract:
The famous archetypical NP-complete problem of Boolean satisfiability (SAT) and its PSPACE-complete generalization of quantified Boolean satisfiability (QSAT) have become central declarative programming paradigms through which real-world instances of various computationally hard problems can be efficiently solved. This success has been achieved through several breakthroughs in practical implementations of decision procedures for SAT and QSAT, that is, in SAT and QSAT solvers. Here, simplification techniques for conjunctive normal form (CNF) for SAT and for prenex conjunctive normal form (PCNF) for QSAT---the standard input formats of SAT and QSAT solvers---have recently proven very effective in increasing solver efficiency when applied before (i.e., in preprocessing) or during (i.e., in inprocessing) satisfiability search.
In this article, we develop and analyze clause elimination procedures for pre- and inprocessing. Clause elimination procedures form a family of (P)CNF formula simplification techniques which remove clauses that have specific (in practice polynomial-time) redundancy properties while maintaining the satisfiability status of the formulas. Extending known procedures such as tautology, subsumption, and blocked clause elimination, we introduce novel elimination procedures based on asymmetric variants of these techniques, and also develop a novel family of so-called covered clause elimination procedures, as well as natural liftings of the CNF-level procedures to PCNF. We analyze the considered clause elimination procedures from various perspectives. Furthermore, for the variants not preserving logical equivalence under clause elimination, we show how to reconstruct solutions to original CNFs from satisfying assignments to simplified CNFs, which is important for practical applications of the procedures. Complementing the more theoretical analysis, we present results from an empirical evaluation of the practical importance of the clause elimination procedures, measured by their effect on solver runtimes on standard real-world application benchmarks. It turns out that the importance of applying the clause elimination procedures developed in this work is empirically emphasized in the context of state-of-the-art QSAT solving.
Miquel Esplà-Gomis, Felipe Sánchez-Martínez and Mikel L. Forcada (2015)
"Using Machine Translation to Provide Target-Language Edit Hints in Computer Aided Translation Based on Translation Memories",
Volume 53, pages 169-222
For quick access go to
Abstract:
This paper explores the use of general-purpose machine translation (MT) in assisting the users of computer-aided translation (CAT) systems based on translation memory (TM) to identify the target words in the translation proposals that need to be changed (either replaced or removed) or kept unedited, a task we term "word-keeping recommendation". MT is used as a black box to align source and target sub-segments on the fly in the translation units (TUs) suggested to the user. Source-language (SL) and target-language (TL) segments in the matching TUs are segmented into overlapping sub-segments of variable length and machine-translated into the TL and the SL, respectively. The bilingual sub-segments obtained and the matching between the SL segment in the TU and the segment to be translated are employed to build the features that are then used by a binary classifier to determine the target words to be changed and those to be kept unedited. In this approach, MT results are never presented to the translator. Two approaches are presented in this work: one using a word-keeping recommendation system which can be trained on the TM used with the CAT system, and a more basic approach which does not require any training.
Experiments are conducted by simulating the translation of texts in several language pairs with corpora belonging to different domains and using three different MT systems. We compare the performance obtained to that of previous works that have used statistical word alignment for word-keeping recommendation, and show that the MT-based approaches presented in this paper are more accurate in most scenarios. In particular, our results confirm that the MT-based approaches are better than the alignment-based approach when using models trained on out-of-domain TMs. Additional experiments were performed to check how dependent the MT-based recommender is on the language pair and MT system used for training. These experiments confirm a high degree of reusability of the recommendation models across various MT systems, but a low level of reusability across language pairs.
Akshat Kumar, Shlomo Zilberstein and Marc Toussaint (2015)
"Probabilistic Inference Techniques for Scalable Multiagent Decision Making",
Volume 53, pages 223-270
For quick access go to
Abstract:
Decentralized POMDPs provide an expressive framework for multiagent sequential decision making. However, the complexity of these models---NEXP-Complete even for two agents---has limited their scalability. We present a promising new class of approximation algorithms by developing novel connections between multiagent planning and machine learning. We show how the multiagent planning problem can be reformulated as inference in a mixture of dynamic Bayesian networks (DBNs). This planning-as-inference approach paves the way for the application of efficient inference techniques in DBNs to multiagent decision making. To further improve scalability, we identify certain conditions that are sufficient to extend the approach to multiagent systems with dozens of agents. Specifically, we show that the necessary inference within the expectation-maximization framework can be decomposed into processes that often involve a small subset of agents, thereby facilitating scalability. We further show that a number of existing multiagent planning models satisfy these conditions. Experiments on large planning benchmarks confirm the benefits of our approach in terms of runtime and scalability with respect to existing techniques.
----------------------------------------------------------------
From jair-ed at isi.edu Wed Sep 2 05:21:54 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Wed, 02 Sep 2015 07:21:54 -0500
Subject: [Jairsubscribers] 6 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
Stefan Funke, Andre Nusser and Sabine Storandt (2015)
"Placement of Loading Stations for Electric Vehicles: No Detours Necessary!",
Volume 53, pages 633-658
For quick access go to
Abstract:
Compared to conventional cars, electric vehicles (EVs) still suffer from considerably shorter cruising ranges. Combined with the sparsity of battery loading stations, the complete transition to E-mobility still seems a long way off. In this paper, we consider the problem of placing as few loading stations as possible such that, along any shortest path, an EV passes enough of them never to run out of energy. We show how to model this problem and introduce heuristics which provide close-to-optimal solutions even in large road networks.
Daan Bloembergen, Karl Tuyls, Daniel Hennes and Michael Kaisers (2015)
"Evolutionary Dynamics of Multi-Agent Learning: A Survey",
Volume 53, pages 659-697
For quick access go to
Abstract:
The interaction of multiple autonomous agents gives rise to highly dynamic and nondeterministic environments, contributing to the complexity in applications such as automated financial markets, smart grids, or robotics. Due to the sheer number of situations that may arise, it is not possible to foresee and program the optimal behaviour for all agents beforehand. Consequently, it becomes essential for the success of the system that the agents can learn their optimal behaviour and adapt to new situations or circumstances. The past two decades have seen the emergence of reinforcement learning, both in single and multi-agent settings, as a strong, robust and adaptive learning paradigm. Progress has been substantial, and a wide range of algorithms are now available. An important challenge in the domain of multi-agent learning is to gain qualitative insights into the resulting system dynamics. In the past decade, tools and methods from evolutionary game theory have been successfully employed to study multi-agent learning dynamics formally in strategic interactions. This article surveys the dynamical models that have been derived for various multi-agent reinforcement learning algorithms, making it possible to study and compare them qualitatively. Furthermore, new learning algorithms that have been introduced using these evolutionary game theoretic tools are reviewed. The evolutionary models can be used to study complex strategic interactions. Examples of such analysis are given for the domains of automated trading in stock markets and collision avoidance in multi-robot systems. The paper provides a roadmap on the progress that has been achieved in analysing the evolutionary dynamics of multi-agent learning by highlighting the main results and accomplishments.
Johan Kwisthout (2015)
"Tree-Width and the Computational Complexity of MAP Approximations in Bayesian Networks",
Volume 53, pages 699-720
For quick access go to
Abstract:
The problem of finding the most probable explanation to a designated set of variables given partial evidence (the MAP problem) is a notoriously intractable problem in Bayesian networks, both to compute exactly and to approximate. It is known, both from theoretical considerations and from practical experience, that low tree-width is typically an essential prerequisite to efficient exact computations in Bayesian networks. In this paper we investigate whether the same holds for approximating MAP. We define four notions of approximating MAP (by value, structure, rank, and expectation) and argue that all of them are intractable in general. We prove that efficient value-approximations, structure-approximations, and rank-approximations of MAP instances with high tree-width will violate the Exponential Time Hypothesis. In contrast, we show that MAP can sometimes be efficiently expectation-approximated, even in instances with high tree-width, if the most probable explanation has a high probability. We introduce the complexity class FERT, analogous to the class FPT, to capture this notion of fixed-parameter expectation-approximability. We suggest a road-map for future research toward fixed-parameter tractable results for expectation-approximate MAP, even in graphs with high tree-width.
Piotr Krysta, Orestis Telelis and Carmine Ventre (2015)
"Mechanisms for Multi-unit Combinatorial Auctions with a Few Distinct Goods",
Volume 53, pages 721-744
For quick access go to
Abstract:
We design and analyze deterministic truthful approximation mechanisms for multi-unit Combinatorial Auctions involving only a constant number of distinct goods, each in arbitrary limited supply. Prospective buyers (bidders) have preferences over multisets of items, i.e., for more than one unit per distinct good. Our objective is to determine allocations of multisets that maximize the Social Welfare. Our main results are for multi-minded and submodular bidders. In the first setting each bidder has a positive value for being allocated one multiset from a prespecified demand set of alternatives. In the second setting each bidder is associated with a submodular valuation function that defines his value for the multiset he is allocated. For multi-minded bidders, we design a truthful FPTAS that fully optimizes the Social Welfare, while violating the supply constraints on goods within factor (1+ε), for any fixed ε>0 (i.e., the approximation applies to the constraints and not to the Social Welfare). This result is best possible, in that full optimization is impossible without violating the supply constraints. For submodular bidders, we obtain a PTAS that approximates the optimum Social Welfare within factor (1+ε), for any fixed ε>0, without violating the supply constraints. This result is best possible as well. Our allocation algorithms are Maximal-in-Range and yield truthful mechanisms, when paired with Vickrey-Clarke-Groves payments.
Marius Lindauer, Holger H. Hoos, Frank Hutter and Torsten Schaub (2015)
"AutoFolio: An Automatically Configured Algorithm Selector",
Volume 53, pages 745-778
For quick access go to
Abstract:
Algorithm selection (AS) techniques -- which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently -- have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. This holds specifically for the machine learning techniques that form the core of current AS procedures, and for their hyperparameters. Therefore, to successfully apply AS to new problems, algorithms and benchmark sets, two questions need to be answered: (i) how to select an AS approach and (ii) how to set its parameters effectively. We address both of these problems simultaneously by using automated algorithm configuration. Specifically, we demonstrate that we can automatically configure claspfolio 2, which implements a large variety of different AS approaches and their respective parameters in a single, highly parameterized algorithm framework. Our approach, dubbed AutoFolio, allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods. We demonstrate that AutoFolio can significantly improve the performance of claspfolio 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios. Compared to the best single algorithm for each AS scenario, AutoFolio achieves average speedup factors between 1.3 and 15.4.
Aaron Hunter and James Delgrande (2015)
"Belief Change with Uncertain Action Histories",
Volume 53, pages 779-824
For quick access go to
Abstract:
We consider the iterated belief change that occurs following an alternating sequence of actions and observations. At each instant, an agent has beliefs about the actions that have occurred as well as beliefs about the resulting state of the world. We represent such problems by a sequence of ranking functions, so an agent assigns a quantitative plausibility value to every action and every state at each point in time. The resulting formalism is able to represent fallible belief, erroneous perception, exogenous actions, and failed actions. We illustrate that our framework is a generalization of several existing approaches to belief change, and it appropriately captures the non-elementary interaction between belief update and belief revision.
----------------------------------------------------------------
From jair-ed at isi.edu Sun Oct 4 05:04:18 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Sun, 04 Oct 2015 07:04:18 -0500
Subject: [Jairsubscribers] 5 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
Roy Bar-Haim, Ido Dagan and Jonathan Berant (2015)
"Knowledge-Based Textual Inference via Parse-Tree Transformations",
Volume 54, pages 1-57
For quick access go to
Abstract:
Textual inference is an important component in many applications for understanding natural language. Classical approaches to textual inference rely on logical representations for meaning, which may be regarded as "external" to the natural language itself. However, practical applications usually adopt shallower lexical or lexical-syntactic representations, which correspond closely to language structure. In many cases, such approaches lack a principled meaning representation and inference framework. We describe an inference formalism that operates directly on language-based structures, particularly syntactic parse trees. New trees are generated by applying inference rules, which provide a unified representation for varying types of inferences. We use manual and automatic methods to generate these rules, which cover generic linguistic structures as well as specific lexical-based inferences. We also present a novel packed data-structure and a corresponding inference algorithm that allows efficient implementation of this formalism. We proved the correctness of the new algorithm and established its efficiency analytically and empirically. The utility of our approach was illustrated on two tasks: unsupervised relation extraction from a large corpus, and the Recognizing Textual Entailment (RTE) benchmarks.
Sigve Hortemo Sæther, Jan Arne Telle and Martin Vatshelle (2015)
"Solving #SAT and MAXSAT by Dynamic Programming ",
Volume 54, pages 59-82
For quick access go to
Abstract:
We look at dynamic programming algorithms for propositional model counting, also called #SAT, and MaxSAT. Tools from graph structure theory, in particular treewidth, have been used to successfully identify tractable cases in many subfields of AI, including SAT, Constraint Satisfaction Problems (CSP), Bayesian reasoning, and planning. In this paper we attack #SAT and MaxSAT using similar, but more modern, graph structure tools. The tractable cases include formulas whose classes of incidence graphs have not only unbounded treewidth but also unbounded clique-width. We show that our algorithms extend all previous results for MaxSAT and #SAT achieved by dynamic programming along structural decompositions of the incidence graph of the input formula. We present some limited experimental results, comparing implementations of our algorithms to state-of-the-art #SAT and MaxSAT solvers, as a proof of concept that warrants further research.
Ruben Izquierdo, Armando Suarez and German Rigau (2015)
"Word vs. Class-Based Word Sense Disambiguation",
Volume 54, pages 83-122
For quick access go to
Abstract:
As empirically demonstrated by the Word Sense Disambiguation (WSD) tasks of the last SensEval/SemEval exercises, assigning the appropriate meaning to words in context has resisted all attempts to be successfully addressed. Many authors argue that one possible reason could be the use of inappropriate sets of word meanings. In particular, WordNet has been used as a de-facto standard repository of word meanings in most of these tasks. Thus, instead of using the word senses defined in WordNet, some approaches have derived semantic classes representing groups of word senses. However, the meanings represented by WordNet have only been used for WSD at a very fine-grained sense level or at a very coarse-grained semantic class level (also called SuperSenses). We suspect that an appropriate level of abstraction may lie between these two levels. The contributions of this paper are manifold. First, we propose a simple method to automatically derive semantic classes at intermediate levels of abstraction covering all nominal and verbal WordNet meanings. Second, we empirically demonstrate that our automatically derived semantic classes outperform classical approaches based on word senses and more coarse-grained sense groupings. Third, we also demonstrate that our supervised WSD system benefits from using these new semantic classes as additional semantic features while reducing the amount of training examples. Finally, we also demonstrate the robustness of our supervised semantic class-based WSD system when tested on an out-of-domain corpus.
Scott Kiesel, Ethan Burns and Wheeler Ruml (2015)
"Achieving Goals Quickly Using Real-time Search: Experimental Results in Video Games",
Volume 54, pages 123-158
For quick access go to
Abstract:
In real-time domains such as video games, planning happens concurrently with execution and the planning algorithm has a strictly bounded amount of time before it must return the next action for the agent to execute. We explore the use of real-time heuristic search in two benchmark domains inspired by video games. Unlike classic benchmarks such as grid pathfinding and the sliding tile puzzle, these new domains feature exogenous change and directed state space graphs. We consider the setting in which planning and acting are concurrent and we use the natural objective of minimizing goal achievement time. Using both the classic benchmarks and the new domains, we investigate several enhancements to a leading real-time search algorithm, LSS-LRTA*. We show experimentally that 1) it is better to plan after each action or to use a dynamically sized lookahead, 2) A*-based lookahead can cause undesirable actions to be selected, and 3) on-line de-biasing of the heuristic can lead to improved performance. We hope this work encourages future research on applying real-time search in dynamic domains.
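The real-time search setting can be illustrated with a toy LRTA*-style loop on a one-dimensional corridor: at each step the agent looks one move ahead, raises its local heuristic estimate, and commits to an action. This is a deliberate simplification; LSS-LRTA* itself uses a bounded A* lookahead and the enhancements studied in the paper, which this sketch omits:

```python
def lrta_star(start, goal, n_states, max_steps=100):
    """Minimal LRTA*-style agent on states 0..n_states-1 with unit moves."""
    h = {s: abs(goal - s) for s in range(n_states)}  # admissible heuristic
    s, path = start, [start]
    for _ in range(max_steps):
        if s == goal:
            return path
        neighbors = [n for n in (s - 1, s + 1) if 0 <= n < n_states]
        best = min(neighbors, key=lambda n: 1 + h[n])
        h[s] = max(h[s], 1 + h[best])  # learning step: raise local estimate
        s = best
        path.append(s)
    return path

print(lrta_star(0, 4, 5))  # -> [0, 1, 2, 3, 4]
```

The learning step is what prevents the agent from cycling forever in local minima of the heuristic, and it is the component that the paper's on-line de-biasing enhancement refines.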
Lluís Formiga, Alberto Barrón-Cedeño, Lluís Marquez, Carlos A. Henríquez and José B. Mariño (2015)
"Leveraging Online User Feedback to Improve Statistical Machine Translation",
Volume 54, pages 159-192
For quick access go to
Abstract:
In this article we present a three-step methodology for dynamically improving a statistical machine translation (SMT) system by incorporating human feedback in the form of free edits on the system translations. We target feedback provided by casual users, which is typically error-prone. Thus, we first propose a filtering step to automatically identify the better user-edited translations and discard the useless ones. A second step produces a pivot-based alignment between source and user-edited sentences, focusing on the errors made by the system. Finally, a third step produces a new translation model and combines it linearly with the one from the original system. We perform a thorough evaluation on a real-world dataset collected from the Reverso.net translation service and show that every step in our methodology contributes significantly to improving a general-purpose SMT system. Interestingly, the quality improvement is due not only to the increase in lexical coverage, but also to better lexical selection, reordering, and morphology. Finally, we show the robustness of the methodology by applying it to a different scenario, in which the new examples come from an automatically Web-crawled parallel corpus. Using exactly the same architecture and models again provides a significant improvement in the translation quality of a general-purpose baseline SMT system.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Mon Nov 2 10:27:11 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Mon, 02 Nov 2015 12:27:11 -0600
Subject: [Jairsubscribers] 3 new articles published by JAIR
Message-ID:
----------------------------------------------------------------
I. New JAIR Articles
Hannes Strass (2015)
"Expressiveness of Two-Valued Semantics for Abstract Dialectical Frameworks",
Volume 54, pages 193-231
For quick access go to
Abstract:
We analyse the expressiveness of Brewka and Woltran's abstract dialectical frameworks (ADFs) for two-valued semantics. By expressiveness we mean the ability to encode a desired set of two-valued interpretations over a given propositional vocabulary A using only atoms from A. We also compare ADFs' expressiveness with that of (the two-valued semantics of) abstract argumentation frameworks, normal logic programs and propositional logic. While the computational complexity of the two-valued model existence problem for all these languages is (almost) the same, we show that the languages form a neat hierarchy with respect to their expressiveness. We then demonstrate that this hierarchy collapses once we allow the introduction of a linear number of new vocabulary elements. We finally also analyse and compare the representational succinctness of ADFs (for two-valued model semantics), that is, their capability to represent two-valued interpretation sets in a space-efficient manner.
Meir Kalech and Shulamit Reches (2015)
"Decision Making with Dynamic Uncertain Events",
Volume 54, pages 233-275
For quick access go to
Abstract:
When to make a decision is a key question in decision-making problems characterized by uncertainty. In this paper we deal with decision making in environments where information arrives dynamically, and we address the tradeoff between waiting and stopping strategies. On the one hand, waiting to obtain more information reduces uncertainty, but it comes at a cost. On the other hand, stopping and making a decision based on expected utility avoids the cost of waiting, but the decision then rests on uncertain information. We propose an optimal algorithm and two approximation algorithms. We prove that one approximation is optimistic (it waits at least as long as the optimal algorithm), while the other is pessimistic (it stops no later than the optimal algorithm). We evaluate our algorithms theoretically and empirically and show that both approximations produce near-optimal decisions much faster than the optimal algorithm. The experiments also indicate that the cost function is a key factor in choosing the most effective algorithm.
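The waiting-vs-stopping tradeoff in this abstract can be illustrated with a myopic stopping rule. This is a deliberately simple sketch (the paper's optimal and approximation algorithms reason further ahead, and all names here are assumptions): stop at the first step where the estimated value of one more observation no longer exceeds the cost of waiting for it.

```python
def decide_when(voi_estimates, wait_cost):
    """Myopic sketch of the waiting-vs-stopping tradeoff.
    voi_estimates[t] is the agent's estimate of the value of the
    information that would arrive by waiting one more step at time t;
    wait_cost is the (constant) cost of waiting one step."""
    for t, voi in enumerate(voi_estimates):
        if voi <= wait_cost:
            return t              # deciding now beats waiting
    return len(voi_estimates)     # no events left: forced to decide
```

The optimistic and pessimistic approximations described in the abstract can be thought of as bracketing the optimal stopping time that such a rule only approximates.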
Till Mossakowski and Reinhard Moratz (2015)
"Relations Between Spatial Calculi About Directions and Orientations",
Volume 54, pages 277-308
For quick access go to
Abstract:
Qualitative spatial descriptions characterize essential properties of spatial objects or configurations by relying on relative comparisons rather than measuring. Typically, in qualitative approaches only relatively coarse distinctions between configurations are made. Qualitative spatial knowledge can be used to represent incomplete and underdetermined knowledge in a systematic way. This is especially useful if the task is to describe features of classes of configurations rather than individual configurations.
Relative directions are important because they play a key role in human spatial descriptions, and there are several approaches to representing them with qualitative methods, although reasoning with them is generally NP-hard. In these approaches, directions between spatial locations can be expressed as constraints over infinite domains, e.g. the Euclidean plane. The theory of relation algebras has been successfully applied to this field. In this work, viewing relation algebras as universal algebras and adapting standard tools from universal algebra, we (re)define the notions of a qualitative constraint calculus, of homomorphisms between calculi, and of quotients of calculi. Based on this method, we derive important properties of spatial calculi from corresponding properties of related calculi. From a conceptual point of view, these formal mappings between calculi are a means to translate between different granularities.
----------------------------------------------------------------
From jair-ed at isi.edu Thu Dec 10 07:16:36 2015
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Thu, 10 Dec 2015 09:16:36 -0600
Subject: [Jairsubscribers] 7 new articles published by JAIR
Message-ID:
----------------------------------------------------------------
I. New JAIR Articles
Hannes Strass (2015)
"Expressiveness of Two-Valued Semantics for Abstract Dialectical Frameworks",
Volume 54, pages 193-231
For quick access go to
Abstract:
We analyse the expressiveness of Brewka and Woltran's abstract dialectical frameworks (ADFs) for two-valued semantics. By expressiveness we mean the ability to encode a desired set of two-valued interpretations over a given propositional vocabulary A using only atoms from A. We also compare ADFs' expressiveness with that of (the two-valued semantics of) abstract argumentation frameworks, normal logic programs and propositional logic. While the computational complexity of the two-valued model existence problem for all these languages is (almost) the same, we show that the languages form a neat hierarchy with respect to their expressiveness. We then demonstrate that this hierarchy collapses once we allow the introduction of a linear number of new vocabulary elements. We finally also analyse and compare the representational succinctness of ADFs (for two-valued model semantics), that is, their capability to represent two-valued interpretation sets in a space-efficient manner.
Meir Kalech and Shulamit Reches (2015)
"Decision Making with Dynamic Uncertain Events",
Volume 54, pages 233-275
For quick access go to
Abstract:
When to make a decision is a key question in decision-making problems characterized by uncertainty. In this paper we deal with decision making in environments where information arrives dynamically, and we address the tradeoff between waiting and stopping strategies. On the one hand, waiting to obtain more information reduces uncertainty, but it comes at a cost. On the other hand, stopping and making a decision based on expected utility avoids the cost of waiting, but the decision then rests on uncertain information. We propose an optimal algorithm and two approximation algorithms. We prove that one approximation is optimistic (it waits at least as long as the optimal algorithm), while the other is pessimistic (it stops no later than the optimal algorithm). We evaluate our algorithms theoretically and empirically and show that both approximations produce near-optimal decisions much faster than the optimal algorithm. The experiments also indicate that the cost function is a key factor in choosing the most effective algorithm.
Till Mossakowski and Reinhard Moratz (2015)
"Relations Between Spatial Calculi About Directions and Orientations",
Volume 54, pages 277-308
For quick access go to
Abstract:
Qualitative spatial descriptions characterize essential properties of spatial objects or configurations by relying on relative comparisons rather than measuring. Typically, in qualitative approaches only relatively coarse distinctions between configurations are made. Qualitative spatial knowledge can be used to represent incomplete and underdetermined knowledge in a systematic way. This is especially useful if the task is to describe features of classes of configurations rather than individual configurations.
Relative directions are important because they play a key role in human spatial descriptions, and there are several approaches to representing them with qualitative methods, although reasoning with them is generally NP-hard. In these approaches, directions between spatial locations can be expressed as constraints over infinite domains, e.g. the Euclidean plane. The theory of relation algebras has been successfully applied to this field. In this work, viewing relation algebras as universal algebras and adapting standard tools from universal algebra, we (re)define the notions of a qualitative constraint calculus, of homomorphisms between calculi, and of quotients of calculi. Based on this method, we derive important properties of spatial calculi from corresponding properties of related calculi. From a conceptual point of view, these formal mappings between calculi are a means to translate between different granularities.
Yujiao Zhou, Bernardo Cuenca Grau, Yavor Nenov, Mark Kaminski and Ian Horrocks (2015)
"PAGOdA: Pay-As-You-Go Ontology Query Answering Using a Datalog Reasoner",
Volume 54, pages 309-367
For quick access go to
Abstract:
Answering conjunctive queries over ontology-enriched datasets is a core reasoning task for many applications. Query answering is, however, computationally very expensive, which has led to the development of query answering procedures that sacrifice either expressive power of the ontology language, or the completeness of query answers in order to improve scalability. In this paper, we describe a hybrid approach to query answering over OWL 2 ontologies that combines a datalog reasoner with a fully-fledged OWL 2 reasoner in order to provide scalable `pay-as-you-go' performance. The key feature of our approach is that it delegates the bulk of the computation to the datalog reasoner and resorts to expensive OWL 2 reasoning only as necessary to fully answer the query. Furthermore, although our main goal is to efficiently answer queries over OWL 2 ontologies and data, our technical results are very general and our approach is applicable to first-order knowledge representation languages that can be captured by rules allowing for existential quantification and disjunction in the head; our only assumption is the availability of a datalog reasoner and a fully-fledged reasoner for the language of interest, both of which are used as `black boxes'. We have implemented our techniques in the PAGOdA system, which combines the datalog reasoner RDFox and the OWL 2 reasoner HermiT. Our extensive evaluation shows that PAGOdA succeeds in providing scalable pay-as-you-go query answering for a wide range of OWL 2 ontologies, datasets and queries.
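The pay-as-you-go strategy described in this abstract can be sketched abstractly. This is an illustrative outline of the hybrid idea, not PAGOdA's API (all function names are assumptions): cheap datalog reasoning yields a sound lower-bound answer set and a complete upper-bound answer set, and only the gap between them is checked with the expensive fully-fledged reasoner.

```python
def hybrid_answer(query, lower_bound, upper_bound, full_reasoner):
    """Sketch of hybrid 'pay-as-you-go' query answering.
    lower_bound(query):   sound but possibly incomplete answers (cheap)
    upper_bound(query):   complete but possibly unsound answers (cheap)
    full_reasoner(query, a): expensive check for a candidate answer."""
    lo = lower_bound(query)          # certainly correct answers
    hi = upper_bound(query)          # every candidate answer
    gap = hi - lo                    # candidates needing full reasoning
    verified = {a for a in gap if full_reasoner(query, a)}
    return lo | verified
```

When the bounds coincide, the expensive reasoner is never invoked, which is what makes the approach scale on queries the datalog engine can already settle.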
Fazlul Hasan Siddiqui and Patrik Haslum (2015)
"Continuing Plan Quality Optimisation",
Volume 54, pages 369-435
For quick access go to
Abstract:
Finding high quality plans for large planning problems is hard. Although some current anytime planners can improve plans quickly, they tend to reach a limit at which the plans produced are still very far from the best possible, yet they fail to find any further improvement even when given several hours of runtime.
We present an approach to continuing plan quality optimisation at larger time scales, and its implementation in a system called BDPO2. Key to this approach is a decomposition into subproblems of improving parts of the current best plan. The decomposition is based on block deordering, a form of plan deordering which identifies hierarchical plan structure. BDPO2 can be seen as an application of the large neighbourhood search (LNS) local search strategy to planning, where the neighbourhood of a plan is defined by replacing one or more subplans with improved subplans. On-line learning is also used to adapt the strategy for selecting subplans and subplanners over the course of plan optimisation.
Even starting from the best plans found by other means, BDPO2 is able to continue improving plan quality, often producing better plans than other anytime planners when all are given enough runtime. The best results, however, are achieved by a combination of different techniques working together.
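The large neighbourhood search view of plan optimisation described above can be sketched as a loop over subplan windows. This is a minimal illustration of the LNS strategy only (BDPO2 selects windows via block deordering and adapts its choices with on-line learning; the names below are assumptions): repeatedly replace a window of the current best plan with a subplanner's improvement and keep it when the overall cost drops.

```python
def lns_optimise(plan, cost, subplan_windows, improve, rounds=10):
    """Large-neighbourhood-search sketch over subplans.
    plan:            list of actions
    cost:            total plan cost function
    subplan_windows: yields (i, j) index windows of candidate subplans
    improve:         subplanner returning a replacement subplan"""
    best = plan
    for _ in range(rounds):
        improved = False
        for i, j in subplan_windows(best):
            # Splice an improved subplan into the current best plan.
            candidate = best[:i] + improve(best[i:j]) + best[j:]
            if cost(candidate) < cost(best):
                best, improved = candidate, True
        if not improved:
            break                # local optimum for this neighbourhood
    return best
```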
Igor Rochlin and David Sarne (2015)
"Constraining Information Sharing to Improve Cooperative Information Gathering",
Volume 54, pages 437-469
For quick access go to
Abstract:
This paper considers the problem of cooperation between self-interested agents in acquiring better information regarding the nature of the different options and opportunities available to them. By sharing individual findings with others, the agents can potentially achieve a substantial improvement in overall and individual expected benefits. Unfortunately, it is well known that with self-interested agents, equilibrium considerations often dictate solutions that are far from the fully cooperative ones, so the agents do not manage to fully exploit the potential benefits encapsulated in such cooperation. In this paper we introduce, analyze and demonstrate the benefit of five methods aimed at improving cooperative information gathering. Common to all five is that they constrain and limit the information sharing process. Nevertheless, the decrease in benefit due to the limited sharing is outweighed by the resulting substantial improvement in the equilibrium individual information-gathering strategies. The equilibrium analysis given in the paper, which is in itself an important contribution to the study of cooperation between self-interested agents, demonstrates that for a wide range of settings, each of the five methods yields an improved individual expected benefit for all agents.
Joseph Y. Halpern (2015)
"Weighted Regret-Based Likelihood: A New Approach to Describing Uncertainty",
Volume 54, pages 471-492
For quick access go to
Abstract:
Recently, Halpern and Leung suggested representing uncertainty by a set of weighted probability measures, and suggested a way of making decisions based on this representation of uncertainty: maximizing weighted regret. Their paper leaves open an apparently simpler question: what it means, according to this representation of uncertainty, for an event E to be more likely than an event E'. This paper defines a notion of comparative likelihood for uncertainty represented by a set of weighted probability measures. It generalizes the ordering defined by probability (and by lower probability) in a natural way; a generalization of upper probability can also be defined. A complete axiomatic characterization of this notion of regret-based likelihood is given.
----------------------------------------------------------------