From jair-ed at isi.edu Wed Jan 1 22:59:48 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Thu, 02 Jan 2014 00:59:48 -0600
Subject: [Jairsubscribers] 3 new articles published by JAIR
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
E. Franconi, V. Kerhet and N. Ngo (2013)
"Exact Query Reformulation over Databases with First-order and Description Logics Ontologies",
Volume 48, pages 885-922
Abstract:
We study a general framework for query rewriting in the presence of an arbitrary first-order logic ontology over a database signature. The framework supports deciding the existence of a safe-range first-order equivalent reformulation of a query in terms of the database signature and, if one exists, provides an effective approach to construct the reformulation based on interpolation using standard theorem-proving techniques (e.g., tableau). Since the reformulation is a safe-range formula, it is effectively executable as an SQL query. Finally, we present a non-trivial application of the framework with ontologies in the very expressive ALCHOIQ description logic, by providing effective means to compute safe-range first-order exact reformulations of queries.
E. Mossel, A. D. Procaccia and M. Z. Racz (2013)
"A Smooth Transition from Powerlessness to Absolute Power",
Volume 48, pages 923-951
Abstract:
We study the phase transition of the coalitional manipulation problem for generalized scoring rules. Previously it has been shown that, under some conditions on the distribution of votes, if the number of manipulators is o(sqrt{n}), where n is the number of voters, then the probability that a random profile is manipulable by the coalition goes to zero as the number of voters goes to infinity, whereas if the number of manipulators is omega(sqrt{n}), then the probability that a random profile is manipulable goes to one. Here we consider the critical window, where a coalition has size c*sqrt{n}, and we show that as c goes from zero to infinity, the limiting probability that a random profile is manipulable goes from zero to one in a smooth fashion, i.e., there is a smooth phase transition between the two regimes. This result analytically validates recent empirical results, and suggests that deciding the coalitional manipulation problem may be of limited computational hardness in practice.
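The critical-window behaviour described above can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's generalized-scoring-rule setting: it uses plurality over three candidates under an impartial culture, and calls a profile manipulable when a coalition casting k = c*sqrt(n) extra votes could make some currently losing candidate win.

```python
import math
import random

def manipulable(tallies, k):
    # True if k extra coalition votes could make some currently
    # losing candidate a strict plurality winner.
    winner = max(tallies)
    return any(t + k > winner for t in tallies if t < winner)

def estimate(n, c, trials=500, candidates=3):
    # Fraction of random impartial-culture profiles of n voters
    # that a coalition of size c*sqrt(n) can manipulate.
    k = int(c * math.sqrt(n))
    hits = 0
    for _ in range(trials):
        tallies = [0] * candidates
        for _ in range(n):
            tallies[random.randrange(candidates)] += 1
        hits += manipulable(tallies, k)
    return hits / trials

random.seed(0)
for c in (0.1, 1.0, 5.0):
    print(c, estimate(400, c))  # probability rises with c
```

With n fixed, sweeping c traces out the smooth zero-to-one transition the paper analyzes.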
F. Campeotto, A. Dal Palù, A. Dovier, F. Fioretto and E. Pontelli (2013)
"A Constraint Solver for Flexible Protein Models",
Volume 48, pages 953-1000
Abstract:
This paper proposes the formalization and implementation of a novel class of constraints aimed at modeling problems related to placement of multi-body systems in the 3-dimensional space. Each multi-body is a system composed of body elements, connected by joint relationships and constrained by geometric properties. The emphasis of this investigation is the use of multi-body systems to model native conformations of protein structures---where each body represents an entity of the protein (e.g., an amino acid, a small peptide) and the geometric constraints are related to the spatial properties of the composing atoms. The paper explores the use of the proposed class of constraints to support a variety of structural analyses of proteins, such as loop modeling and structure prediction.
The declarative nature of a constraint-based encoding provides elaboration tolerance and the ability to make use of any additional knowledge in the analysis studies. The filtering capabilities of the proposed constraints also make it possible to control the number of representative solutions drawn from the conformational space of the protein, by means of criteria driven by uniform distribution sampling principles. In this scenario, it is possible to select the desired degree of precision and/or number of solutions. The filtering component automatically excludes configurations that violate the spatial and geometric properties of the composing multi-body system. The paper illustrates the implementation of a constraint solver based on the multi-body perspective and its empirical evaluation on protein structure analysis problems.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Sat Feb 1 23:45:01 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Sun, 02 Feb 2014 01:45:01 -0600
Subject: [Jairsubscribers] 3 new articles published by JAIR
----------------------------------------------------------------
I. New JAIR Articles
E. Bruni, N. K. Tran and M. Baroni (2014)
"Multimodal Distributional Semantics",
Volume 49, pages 1-47
Abstract:
Distributional semantic models derive computational representations of word meaning from the patterns of co-occurrence of words in text. Such models have been a success story of computational linguistics, being able to provide reliable estimates of semantic relatedness for the many semantic tasks requiring them. However, distributional models extract meaning information exclusively from text, which is an extremely impoverished basis compared to the rich perceptual sources that ground human semantic knowledge. We address the lack of perceptual grounding of distributional models by exploiting computer vision techniques that automatically identify discrete "visual words" in images, so that the distributional representation of a word can be extended to also encompass its co-occurrence with the visual words of images it is associated with. We propose a flexible architecture to integrate text- and image-based distributional information, and we show in a set of empirical tests that our integrated model is superior to the purely text-based approach, and it provides somewhat complementary semantic information with respect to the latter.
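As a toy illustration of the fusion idea (not the paper's exact architecture), one can L2-normalize a word's text-based and visual-word co-occurrence vectors, scale them by a mixing weight, and concatenate. All counts and the `multimodal` helper below are hypothetical.

```python
from math import sqrt

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def normalize(v):
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def multimodal(text_vec, visual_vec, alpha=0.5):
    # Weighted concatenation of L2-normalized text and visual
    # co-occurrence vectors -- one simple fusion scheme.
    t = [alpha * x for x in normalize(text_vec)]
    v = [(1 - alpha) * x for x in normalize(visual_vec)]
    return t + v

# Invented co-occurrence counts: text dimensions, then visual-word dimensions.
moon = multimodal([10, 2, 0], [5, 1])
sun  = multimodal([8, 3, 1], [4, 2])
print(cosine(moon, sun))
```

The mixing weight alpha plays the role of the text/image balance the paper tunes empirically.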
L. Climent, R. J. Wallace, M. A. Salido and F. Barber (2014)
"Robustness and Stability in Constraint Programming under Dynamism and Uncertainty",
Volume 49, pages 49-78
Abstract:
Many real-life problems that can be solved by constraint programming come from uncertain and dynamic environments. Because of the dynamism, the original problem may change over time, and thus the solution found for the original problem may become invalid. For this reason, dealing with such problems has become an important issue in the field of constraint programming. In some cases, extant knowledge exists about the uncertain and dynamic environment. In other cases, this information is fragmentary or unknown. In this paper, we extend the concepts of robustness and stability for Constraint Satisfaction Problems (CSPs) with ordered domains, where only limited assumptions need to be made as to possible changes. We present a search algorithm that searches for both robust and stable solutions for CSPs of this nature. It is well known that meeting both criteria simultaneously is a desirable objective for constraint solving in uncertain and dynamic environments. We also present compelling evidence that our search algorithm outperforms other general-purpose algorithms for dynamic CSPs, using random instances and benchmarks derived from real-life problems.
P. M. Dung and P. M. Thang (2014)
"Closure and Consistency In Logic-Associated Argumentation",
Volume 49, pages 79-109
Abstract:
Properties like logical closure and consistency are important properties in any logical reasoning system. Caminada and Amgoud showed that not every logic-based argument system satisfies these relevant properties. But under conditions like closure under contraposition or transposition of the monotonic part of the underlying logic, ASPIC-like systems satisfy these properties. In contrast, the logical closure and consistency properties are not well-understood for other well-known and widely applied systems like logic programming or assumption based argumentation. Though conditions like closure under contraposition or transposition seem intuitive in ASPIC-like systems, they rule out many sensible ASPIC-like systems that satisfy both properties of closure and consistency.
We present a new condition referred to as the self-contradiction axiom that guarantees the consistency property in both ASPIC-like and assumption-based systems and is implied by both properties of closure under contraposition or transposition. We develop a logic-associated abstract argumentation framework, by associating abstract argumentation with abstract logics to represent the conclusions of arguments. We show that logic-associated abstract argumentation frameworks capture ASPIC-like systems (without preferences) and assumption-based argumentation. We present two simple and natural properties of compactness and cohesion in logic-associated abstract argumentation frameworks and show that they capture the logical closure and consistency properties. We demonstrate that in both assumption-based argumentation and ASPIC-like systems, cohesion follows naturally from the self-contradiction axiom. We further give a translation from ASPIC-like systems (without preferences) into equivalent assumption-based systems that keeps the self-contradiction axiom invariant.
----------------------------------------------------------------
From jair-ed at isi.edu Tue Apr 1 02:03:22 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Tue, 01 Apr 2014 04:03:22 -0500
Subject: [Jairsubscribers] 12 new articles published by JAIR
----------------------------------------------------------------
I. New JAIR Articles
J. W. Crandall (2014)
"Towards Minimizing Disappointment in Repeated Games",
Volume 49, pages 111-142
Abstract:
We consider the problem of learning in repeated games against arbitrary associates. Specifically, we study the ability of expert algorithms to quickly learn effective strategies in repeated games, towards the ultimate goal of learning near-optimal behavior against any arbitrary associate within only a handful of interactions. Our contribution is three-fold. First, we advocate a new metric, called disappointment, for evaluating expert algorithms in repeated games. Unlike minimizing traditional notions of regret, minimizing disappointment in repeated games is equivalent to maximizing payoffs. Unfortunately, eliminating disappointment is impossible to guarantee in general. However, it is possible for an expert algorithm to quickly achieve low disappointment against many known classes of algorithms in many games. Second, we show that popular existing expert algorithms often fail to achieve low disappointment against a variety of associates, particularly in early rounds of the game. Finally, we describe a new meta-algorithm that can be applied to existing expert algorithms to substantially reduce disappointment in many two-player repeated games when associates follow various static, reinforcement learning, and expert algorithms.
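A rough feel for the metric: disappointment measures how far realized payoffs fall short of the best attainable ones. The sketch below computes only a simplified hindsight proxy against the associate's observed actions; the paper's notion is more refined (it accounts for how one's own play influences a learning associate), and the payoff matrix here is hypothetical.

```python
# Payoffs for the row player in a hypothetical 2x2 game:
# rows are our actions, columns are the associate's actions.
payoff = [[3.0, 0.0],
          [5.0, 1.0]]

def disappointment(history):
    # Average shortfall of realized payoff from the best payoff we
    # could have earned against the associate's observed actions.
    total = 0.0
    for ours, theirs in history:
        best = max(row[theirs] for row in payoff)
        total += best - payoff[ours][theirs]
    return total / len(history)

# Always playing action 0 while the associate plays action 1 earns 0
# per round, whereas the best reply (action 1) would have earned 1.
print(disappointment([(0, 1)] * 10))
```

Zero disappointment under this proxy means our payoffs matched the best replies to what the associate actually did.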
J. Y. Halpern and Y. Moses (2014)
"A Procedural Characterization of Solution Concepts in Games",
Volume 49, pages 143-170
Abstract:
We show how game-theoretic solution concepts such as Nash equilibrium, correlated equilibrium, rationalizability, and sequential equilibrium can be given a uniform definition in terms of a knowledge-based program with counterfactual semantics. In a precise sense, this program can be viewed as providing a procedural characterization of rationality.
S. Schiffel and M. Thielscher (2014)
"Representing and Reasoning About the Rules of General Games With Imperfect Information",
Volume 49, pages 171-206
Abstract:
A general game player is a system that can play previously unknown games just by being given their rules. For this purpose, the Game Description Language (GDL) has been developed as a high-level knowledge representation formalism to communicate game rules to players. In this paper, we address a fundamental limitation of state-of-the-art methods and systems for General Game Playing, namely, their being confined to deterministic games with complete information about the game state. We develop a simple yet expressive extension of standard GDL that allows for formalising the rules of arbitrary finite, n-player games with randomness and incomplete state knowledge. In the second part of the paper, we address the intricate reasoning challenge for general game-playing systems that comes with the new description language. We develop a full embedding of extended GDL into the Situation Calculus augmented by Scherl and Levesque's knowledge fluent. We formally prove that this provides a sound and complete reasoning method for players' knowledge about game states as well as about the knowledge of the other players.
K. R. Apt and G. Schaefer (2014)
"Selfishness Level of Strategic Games",
Volume 49, pages 207-240
Abstract:
We introduce a new measure, which we call the selfishness level, of the discrepancy in strategic games between the social welfare in a Nash equilibrium and in a social optimum. It is the smallest fraction of the social welfare that needs to be offered to each player so that a social optimum is realized in a pure Nash equilibrium. The selfishness level is unrelated to the price of stability and the price of anarchy and is invariant under positive linear transformations of the payoff functions. It also naturally applies to other solution concepts and other forms of games.
We study the selfishness level of several well-known strategic games. This allows us to quantify the implicit tension within a game between players' individual interests and the impact of their decisions on the society as a whole. Our analyses reveal that the selfishness level often provides a deeper understanding of the characteristics of the underlying game that influence the players' willingness to cooperate.
In particular, the selfishness level of finite ordinal potential games is finite, while that of weakly acyclic games can be infinite. We derive explicit bounds on the selfishness level of fair cost sharing games and linear congestion games, which depend on specific parameters of the underlying game but are independent of the number of players. Further, we show that the selfishness level of the n-player Prisoner's Dilemma is c/(b(n-1)-c), where b and c are the benefit and cost for cooperation, respectively, that of the n-player public goods game is (1-c/n)/(c-1), where c is the public good multiplier, and that of the Traveler's Dilemma game is (b-1)/2, where b is the bonus. Finally, the selfishness level of Cournot competition (an example of an infinite ordinal potential game), Tragedy of the Commons, and Bertrand competition is infinite.
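The closed-form selfishness levels quoted above are easy to evaluate. The helpers below simply transcribe the abstract's formulas; the parameter values are illustrative.

```python
def sl_prisoners_dilemma(n, b, c):
    # n-player Prisoner's Dilemma with benefit b and cost c
    # for cooperation: selfishness level c/(b(n-1)-c).
    return c / (b * (n - 1) - c)

def sl_public_goods(n, c):
    # n-player public goods game with multiplier c: (1-c/n)/(c-1).
    return (1 - c / n) / (c - 1)

def sl_travelers_dilemma(b):
    # Traveler's Dilemma with bonus b: (b-1)/2.
    return (b - 1) / 2

print(sl_prisoners_dilemma(2, 3, 1))  # 1/(3*1-1) = 0.5
print(sl_public_goods(4, 2.0))        # (1-0.5)/1 = 0.5
print(sl_travelers_dilemma(2))        # 0.5
```

Note how the Prisoner's Dilemma value shrinks as n grows: with more players, a smaller welfare share suffices to make cooperation an equilibrium.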
D. Berrar (2014)
"An Empirical Evaluation of Ranking Measures With Respect to Robustness to Noise",
Volume 49, pages 241-267
Abstract:
Ranking measures play an important role in model evaluation and selection. Using both synthetic and real-world data sets, we investigate how different types and levels of noise affect the area under the ROC curve (AUC), the area under the ROC convex hull, the scored AUC, the Kolmogorov-Smirnov statistic, and the H-measure. In our experiments, the AUC was, overall, the most robust among these measures, thereby reinvigorating it as a reliable metric despite its well-known deficiencies. This paper also introduces a novel ranking measure, which is remarkably robust to noise yet conceptually simple.
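This kind of experiment is easy to reproduce in miniature. The sketch below implements the AUC via the rank-sum (Mann-Whitney) statistic and measures it on synthetic Gaussian scores as class labels are flipped with increasing probability; the distributions and noise levels are illustrative, not the paper's setup.

```python
import random

def auc(scores, labels):
    # AUC as the probability that a random positive outranks a
    # random negative, counting ties as 1/2.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def flip_labels(labels, p, rng):
    # Class-label noise: flip each label independently with prob. p.
    return [1 - y if rng.random() < p else y for y in labels]

rng = random.Random(0)
labels = [1] * 200 + [0] * 200
# Positives score higher than negatives on average.
scores = [rng.gauss(1.0, 1.0) if y else rng.gauss(0.0, 1.0) for y in labels]
for p in (0.0, 0.1, 0.3):
    noisy = flip_labels(labels, p, rng)
    print(p, round(auc(scores, noisy), 3))
```

As p grows, the measured AUC drifts toward 0.5, which is the kind of degradation curve the paper compares across ranking measures.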
T. Eiter, M. Fink, T. Krennwallner, C. Redl and P. Schüller (2014)
"Efficient HEX-Program Evaluation Based on Unfounded Sets",
Volume 49, pages 269-321
Abstract:
HEX-programs extend logic programs under the answer set semantics with external computations through external atoms. As reasoning from ground Horn programs with nonmonotonic external atoms of polynomial complexity is already on the second level of the polynomial hierarchy, minimality checking of answer set candidates needs special attention. To this end, we present an approach based on unfounded sets as a generalization of related techniques for ASP programs. The unfounded set detection is expressed as a propositional SAT problem, for which we provide two different encodings and optimizations to them. We then integrate our approach into a previously developed evaluation framework for HEX-programs, which is enriched by additional learning techniques that aim at avoiding the reconstruction of the same or related unfounded sets. Furthermore, we provide a syntactic criterion that allows one to skip the minimality check in many cases. An experimental evaluation shows that the new approach significantly decreases runtime.
L. Cigler and B. Faltings (2014)
"Symmetric Subgame-Perfect Equilibria in Resource Allocation",
Volume 49, pages 323-361
Abstract:
We analyze symmetric protocols to rationally coordinate on an asymmetric, efficient allocation in infinitely repeated N-agent, C-resource allocation problems, where the resources are all homogeneous. Bhaskar proposed one way to achieve this in 2-agent, 1-resource games: Agents start by symmetrically randomizing their actions, and as soon as they each choose different actions, they start to follow a potentially asymmetric "convention" that prescribes their actions from then on. We extend the concept of convention to the general case of infinitely repeated resource allocation games with N agents and C resources. We show that for any convention, there exists a symmetric subgame-perfect equilibrium which implements it. We present two conventions: bourgeois, where agents stick to the first allocation; and market, where agents pay for the use of resources, and observe a global coordination signal which allows them to alternate between different allocations. We define price of anonymity of a convention as a ratio between the maximum social payoff of any (asymmetric) strategy profile and the expected social payoff of the subgame-perfect equilibrium which implements the convention. We show that while the price of anonymity of the bourgeois convention is infinite, the market convention decreases this price by reducing the conflict between the agents.
V. Belle and G. Lakemeyer (2014)
"Multiagent Only Knowing in Dynamic Systems",
Volume 49, pages 363-402
Abstract:
The idea of "only knowing" a collection of sentences, as proposed by Levesque, has been previously shown to be very useful in characterizing knowledge-based agents: in terms of a specification, a precise and perspicuous account of the beliefs and non-beliefs is obtained in a monotonic setting. Levesque's logic is based on a first-order modal language with quantifying-in, thus allowing for de re versus de dicto distinctions, among other things. However, the logic and its recent dynamic extension only deal with the case of a single agent. In this work, we propose a first-order multiagent framework with knowledge, actions, sensing and only knowing, that is shown to inherit all the features of the single agent version. Most significantly, we prove reduction theorems by means of which reasoning about knowledge and actions in the framework simplifies to non-epistemic, non-dynamic reasoning about the initial situation.
G. Greco and F. Scarcello (2014)
"Mechanisms for Fair Allocation Problems: No-Punishment Payment Rules in Verifiable Settings",
Volume 49, pages 403-449
Abstract:
Mechanism design is considered in the context of fair allocations of indivisible goods with monetary compensation, by focusing on problems where agents' declarations on allocated goods can be verified before payments are performed. A setting is considered where verification might be subject to errors, so that payments have to be awarded under the presumption of innocence, as incorrect declared values do not necessarily mean manipulation attempts by the agents. Within this setting, a mechanism is designed that is shown to be truthful, efficient, and budget-balanced. Moreover, agents' utilities are fairly determined by the Shapley value of suitable coalitional games, and enjoy highly desirable properties such as equal treatment of equals, envy-freeness, and a stronger one called individual-optimality. In particular, the latter property guarantees that, for every agent, her/his utility is the maximum possible one over any alternative optimal allocation.
The computational complexity of the proposed mechanism is also studied. It turns out that it is #P-complete so that, to deal with applications with many agents involved, two polynomial-time randomized variants are also proposed: one that is still truthful and efficient, and which is approximately budget-balanced with high probability, and another one that is truthful in expectation, while still budget-balanced and efficient.
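For intuition about Shapley-value-based payments, here is a brute-force Shapley value computation on a tiny coalitional game; the "glove game" example and player names are ours, not the paper's.

```python
from itertools import permutations

def shapley(players, v):
    # Exact Shapley value by averaging each player's marginal
    # contribution over all arrival orders (fine for small games;
    # the paper's setting is #P-complete in general).
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: phi[p] / len(orders) for p in phi}

def glove(coalition):
    # Hypothetical 3-player glove game: worth 1 iff the coalition
    # holds the left glove 'l' and at least one right glove.
    return 1.0 if "l" in coalition and ({"r1", "r2"} & coalition) else 0.0

vals = shapley(["l", "r1", "r2"], glove)
print(vals)  # 'l' gets 2/3; 'r1' and 'r2' get 1/6 each
```

The values sum to the grand coalition's worth (efficiency) and equal players get equal shares, the kind of fairness properties the mechanism above relies on.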
B. Han, P. Cook and T. Baldwin (2014)
"Text-Based Twitter User Geolocation Prediction",
Volume 49, pages 451-500
Abstract:
Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author's location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain "location indicative words". We then evaluate the impact of non-geotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability.
We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.
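One common feature-selection criterion for finding location-indicative words, information gain with the location label, can be sketched on toy data; the tweets and city labels below are invented.

```python
import math
from collections import Counter

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def information_gain(docs):
    # docs: list of (city, set_of_words) pairs, toy stand-ins for
    # geotagged tweets. Returns each word's information gain with
    # respect to the city label.
    base = entropy(list(Counter(c for c, _ in docs).values()))
    vocab = set().union(*(words for _, words in docs))
    gains = {}
    for word in vocab:
        with_w = [c for c, words in docs if word in words]
        without = [c for c, words in docs if word not in words]
        cond = (len(with_w) * entropy(list(Counter(with_w).values())) +
                len(without) * entropy(list(Counter(without).values())))
        gains[word] = base - cond / len(docs)
    return gains

docs = [("melbourne", {"tram", "coffee"}),
        ("melbourne", {"tram", "footy"}),
        ("boston",    {"subway", "coffee"}),
        ("boston",    {"subway", "sox"})]
g = information_gain(docs)
print(sorted(g, key=g.get, reverse=True)[:2])  # 'tram' and 'subway' rank highest
```

City-specific words like "tram" score high, while words used everywhere ("coffee") score near zero, which is exactly the separation such feature selection exploits.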
P. Yang and W. Gao (2014)
"Information-Theoretic Multi-view Domain Adaptation: A Theoretical and Empirical Study",
Volume 49, pages 501-525
Abstract:
Multi-view learning aims to improve classification performance by leveraging the consistency among different views of data. The incorporation of multiple views has received little attention in studies of domain adaptation, where the view consistency based on source data is largely violated in the target domain due to the distribution gap between different domain data. In this paper, we leverage multiple views for cross-domain document classification. The central idea is to strengthen the views' consistency on target data by identifying the associations of domain-specific features from different domains. We present an Information-theoretic Multi-view Adaptation Model (IMAM) using a multi-way clustering scheme, where word and link clusters can draw together seemingly unrelated features across domains, which boosts the consistency between document clusterings that are based on the respective word and link views. Moreover, we demonstrate that IMAM can always find the document clustering with the minimal disagreement rate to the overlap of view-based clusterings. We provide both theoretical and empirical justifications of the proposed method. Our experiments show that IMAM significantly outperforms the traditional multi-view co-training algorithm, the co-training-based adaptation algorithm CODA, the single-view transfer model CoCC and the large-margin-based multi-view transfer model MVTL-LM.
K. Hoki and T. Kaneko (2014)
"Large-Scale Optimization for Evaluation Functions with Minimax Search",
Volume 49, pages 527-568
Abstract:
This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyperparameters. This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess. Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program's first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.
----------------------------------------------------------------
From jair-ed at isi.edu Thu May 1 21:55:38 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Thu, 01 May 2014 23:55:38 -0500
Subject: [Jairsubscribers] 6 new articles published by JAIR
----------------------------------------------------------------
I. New JAIR Articles
Y. Wu, P. Austrin, T. Pitassi and D. Liu (2014)
"Inapproximability of Treewidth and Related Problems",
Volume 49, pages 569-600
Abstract:
Graphical models such as Bayesian networks and Markov networks play an important role in artificial intelligence and machine learning. Inference is a central problem to be solved on these networks. This and other problems on these graphical models are often known to be hard to solve in general, but tractable on graphs with bounded treewidth. Therefore, finding or approximating the treewidth of a graph is a fundamental problem related to inference in graphical models. In this paper, we study the approximability of a number of graph problems: Treewidth and Pathwidth of graphs, Minimum Fill-In, One-Shot Black (and Black-White) pebbling costs of directed acyclic graphs, and a variety of graph layout problems such as Minimum Cut Linear Arrangement and Interval Graph Completion. We show that, assuming the recently introduced Small Set Expansion Conjecture, all of these problems are NP-hard to approximate to within any constant factor in polynomial time.
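For context: exact treewidth is the hard quantity studied here, but cheap heuristics give upper bounds. The sketch below uses the classic min-degree elimination ordering, an illustrative choice of ours rather than anything from the paper.

```python
def min_degree_width(adj):
    # Upper-bound the treewidth by repeatedly eliminating a vertex of
    # minimum degree; the largest neighborhood eliminated bounds the
    # width of the resulting tree decomposition.
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        neigh = adj.pop(v)
        width = max(width, len(neigh))
        # Eliminating v turns its neighborhood into a clique.
        for u in neigh:
            adj[u].discard(v)
            adj[u] |= neigh - {u}
    return width

path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}               # a tree: treewidth 1
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}  # clique: treewidth 3
print(min_degree_width(path), min_degree_width(k4))  # prints 1 3
```

The inapproximability result above says that no polynomial-time method, heuristic or otherwise, can guarantee any constant-factor gap to the true treewidth (under the stated conjecture).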
S. J. Chen, A. Choi and A. Darwiche (2014)
"Algorithms and Applications for the Same-Decision Probability",
Volume 49, pages 601-633
Abstract:
When making decisions under uncertainty, the optimal choices are often difficult to discern, especially if not enough information has been gathered. Two key questions in this regard relate to whether one should stop the information gathering process and commit to a decision (stopping criterion), and if not, what information to gather next (selection criterion). In this paper, we show that the recently introduced notion, Same-Decision Probability (SDP), can be useful as both a stopping and a selection criterion, as it can provide additional insight and allow for robust decision making in a variety of scenarios. This query has been shown to be highly intractable, being PP^PP-complete, and is exemplary of a class of queries which correspond to the computation of certain expectations. We propose the first exact algorithm for computing the SDP, and demonstrate its effectiveness on several real and synthetic networks. Finally, we present new complexity results, such as the complexity of computing the SDP on models with a Naive Bayes structure. Additionally, we prove that computing the non-myopic value of information is complete for the same complexity class as computing the SDP.
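On a model small enough to enumerate, the SDP can be computed by brute force. The sketch below does so for a hypothetical two-feature Naive Bayes classifier with a 0.5 decision threshold; all probabilities are invented, and the paper's algorithm targets models where this enumeration is infeasible.

```python
from itertools import product

# Hypothetical Naive Bayes: binary class H with features E1, E2.
p_h = {1: 0.5, 0: 0.5}
p_e_given_h = {"E1": {1: 0.8, 0: 0.3},   # P(Ei = 1 | H)
               "E2": {1: 0.9, 0: 0.4}}

def joint(h, e):
    # P(H = h, evidence e), with e a dict feature -> 0/1.
    pr = p_h[h]
    for f, val in e.items():
        p1 = p_e_given_h[f][h]
        pr *= p1 if val == 1 else 1 - p1
    return pr

def decision(e, threshold=0.5):
    num = joint(1, e)
    return num / (num + joint(0, e)) >= threshold

def sdp(observed, pending, threshold=0.5):
    # Probability that observing the pending features leaves the
    # current threshold decision unchanged.
    current = decision(observed, threshold)
    total = 0.0
    for vals in product([0, 1], repeat=len(pending)):
        e = dict(observed, **dict(zip(pending, vals)))
        # Weight by P(pending values | observed evidence).
        w = (joint(1, e) + joint(0, e)) / (joint(1, observed) + joint(0, observed))
        if decision(e, threshold) == current:
            total += w
    return total

print(round(sdp({"E1": 1}, ["E2"]), 3))
```

A high SDP suggests further observations are unlikely to change the decision, which is the stopping-criterion reading described in the abstract.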
S. Nofal, K. Atkinson and P. E. Dunne (2014)
"Algorithms for Argumentation Semantics: Labeling Attacks as a Generalization of Labeling Arguments",
Volume 49, pages 635-668
Abstract:
A Dung argumentation framework (AF) is a pair (A,R), where A is a set of abstract arguments and R ⊆ A×A is a binary relation, the so-called attack relation, capturing the conflicts between arguments. Labeling-based algorithms for enumerating extensions (i.e., sets of acceptable arguments) have been set out such that arguments (i.e., elements of A) are the only subject for labeling. In this paper we present implemented algorithms for listing extensions by labeling attacks (i.e., elements of R) along with arguments. Specifically, these algorithms are concerned with enumerating all extensions of an AF under a number of argumentation semantics: preferred, stable, complete, semi-stable, stage, ideal and grounded. Our algorithms have impact, in particular, on enumerating extensions of AF-extended models that allow attacks on attacks. To demonstrate this impact, we instantiate our algorithms for an example of such models, namely argumentation frameworks with recursive attacks (AFRA), and thereby obtain unified algorithms that enumerate the extensions of any AF/AFRA.
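For the simplest of the listed semantics, grounded, the classic argument-labeling procedure is short. The sketch below labels arguments only; the paper's contribution is labeling attacks as well, which this toy does not attempt.

```python
def grounded_extension(args, attacks):
    # Grounded labeling of a Dung AF (A, R): iteratively label IN any
    # argument all of whose attackers are OUT, and OUT any argument
    # with an IN attacker; everything else stays UNDEC.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    label = {a: "UNDEC" for a in args}
    changed = True
    while changed:
        changed = False
        for a in args:
            if label[a] != "UNDEC":
                continue
            if all(label[b] == "OUT" for b in attackers[a]):
                label[a] = "IN"
                changed = True
            elif any(label[b] == "IN" for b in attackers[a]):
                label[a] = "OUT"
                changed = True
    return {a for a in args if label[a] == "IN"}

# a attacks b, b attacks c: the grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Unattacked arguments seed the process, and acceptance propagates through defended arguments, which is the labeling idea the paper generalizes from arguments to attacks.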
M. L. Bonet, S. Buss and J. Johannsen (2014)
"Improved Separations of Regular Resolution from Clause Learning Proof Systems",
Volume 49, pages 669-703
Abstract:
This paper studies the relationship between resolution and conflict driven clause learning (CDCL) without restarts, and refutes some conjectured possible separations. We prove that the guarded, xor-ified pebbling tautology clauses, which Urquhart proved are hard for regular resolution, as well as the guarded graph tautology clauses of Alekhnovich, Johannsen, Pitassi, and Urquhart have polynomial size pool resolution refutations that use only input lemmas as learned clauses. For the latter set of clauses, we extend this to prove that a CDCL search without restarts can refute these clauses in polynomial time, provided it makes the right choices for decision literals and clause learning. This holds even if the CDCL search is required to greedily process conflicts arising from unit propagation. This refutes the conjecture that the guarded graph tautology clauses or the guarded xor-ified pebbling tautology clauses can be used to separate CDCL without restarts from general resolution. Together with subsequent results by Buss and Kolodziejczyk, this means we lack any good conjectures about how to establish the exact logical strength of conflict-driven clause learning without restarts.
S. W. Carden (2014)
"Convergence of a Q-learning Variant for Continuous States and Actions",
Volume 49, pages 705-731
For quick access go to
Abstract:
This paper presents a reinforcement learning algorithm for solving infinite horizon Markov Decision Processes under the expected total discounted reward criterion when both the state and action spaces are continuous. This algorithm is based on Watkins' Q-learning, but uses Nadaraya-Watson kernel smoothing to generalize knowledge to unvisited states. As expected, continuity conditions must be imposed on the mean rewards and transition probabilities. Using results from kernel regression theory, this algorithm is proven capable of producing a Q-value function estimate that is uniformly within an arbitrary tolerance of the true Q-value function with probability one. The algorithm is then applied to an example problem to empirically show convergence as well.
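The Nadaraya-Watson smoothing step the abstract refers to can be sketched in isolation. The Gaussian kernel, bandwidth, and sample data below are hypothetical, and this is only the value-estimation step, not the full learning algorithm:

```python
import math

def nw_q(s, a, samples, bandwidth=0.5):
    """Nadaraya-Watson estimate of Q(s, a) from (state, action, q) samples:
    a kernel-weighted average of stored q-values, generalising experience
    to unvisited continuous states and actions."""
    def k(u):  # Gaussian kernel (choice is illustrative)
        return math.exp(-0.5 * u * u)
    w = [k((s - si) / bandwidth) * k((a - ai) / bandwidth)
         for si, ai, _ in samples]
    total = sum(w)
    if total == 0.0:
        return 0.0  # no nearby experience: fall back to a default value
    return sum(wi * qi for wi, (_, _, qi) in zip(w, samples)) / total

# hypothetical experience in which q grows with the state
data = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (2.0, 0.0, 3.0)]
print(nw_q(1.0, 0.0, data))  # symmetric neighbours average out to ≈ 2.0
```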
N. Fernandez Garcia, J. Arias Fisteus and L. Sanchez Fernandez (2014)
"Comparative Evaluation of Link-Based Approaches for Candidate Ranking in Link-to-Wikipedia Systems",
Volume 49, pages 733-773
For quick access go to
Abstract:
In recent years, the task of automatically linking pieces of text (anchors) mentioned in a document to Wikipedia articles that represent the meaning of these anchors has received extensive research attention. Typically, link-to-Wikipedia systems try to find a set of Wikipedia articles that are candidates to represent the meaning of the anchor and, later, rank these candidates to select the most appropriate one. In this ranking process the systems rely on context information obtained from the document where the anchor is mentioned and/or from Wikipedia. In this paper we center our attention on the use of Wikipedia links as context information. In particular, we review several state-of-the-art candidate ranking approaches that rely on Wikipedia link information. In addition, we provide a comparative empirical evaluation of the different approaches on five different corpora: the TAC 2010 corpus and four corpora built from actual Wikipedia articles and news items.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Sun Jun 1 09:12:58 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Sun, 01 Jun 2014 11:12:58 -0500
Subject: [Jairsubscribers] 6 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
----------------------------------------------------------------
I. New JAIR Articles
M. Zhang, X. Xiao, D. Xiong and Q. Liu (2014)
"Topic-Based Dissimilarity and Sensitivity Models for Translation Rule Selection",
Volume 50, pages 1-30
For quick access go to
Abstract:
Translation rule selection is the task of selecting appropriate translation rules for an ambiguous source-language segment. As translation ambiguities are pervasive in statistical machine translation, we introduce two topic-based models for translation rule selection which incorporate global topic information into translation disambiguation. We associate each synchronous translation rule with source- and target-side topic distributions. With these topic distributions, we propose a topic dissimilarity model to select desirable (less dissimilar) rules by imposing penalties on rules whose topic distributions are highly dissimilar to those of the given documents. In order to encourage the use of non-topic-specific translation rules, we also present a topic sensitivity model to balance translation rule selection between generic rules and topic-specific rules. Furthermore, we project target-side topic distributions onto the source-side topic model space so that we can benefit from topic information of both the source and target language. We integrate the proposed topic dissimilarity and sensitivity models into hierarchical phrase-based machine translation for synchronous translation rule selection. Experiments show that our topic-based translation rule selection model can substantially improve translation quality.
Y. Wang, Y. Zhang, Y. Zhou and M. Zhang (2014)
"Knowledge Forgetting in Answer Set Programming",
Volume 50, pages 31-70
For quick access go to
Abstract:
The ability to discard or hide irrelevant information has been recognized as an important feature of knowledge-based systems, including answer set programming. The notion of strong equivalence in answer set programming plays an important role for different problems, as it gives rise to a substitution principle and amounts to knowledge equivalence of logic programs. In this paper, we uniformly propose a semantic knowledge forgetting, called HT- and FLP-forgetting, for logic programs under stable-model and FLP-stable-model semantics, respectively. Our proposed knowledge forgetting discards exactly the knowledge of a logic program that is relevant to the forgotten variables. It thus preserves strong equivalence, in the sense that strongly equivalent logic programs remain strongly equivalent after forgetting the same variables. We show that this semantic forgetting result is always expressible, and we prove a representation theorem stating that HT- and FLP-forgetting can be precisely characterized by Zhang-Zhou's four forgetting postulates under the HT- and FLP-model semantics, respectively. We also reveal underlying connections between the proposed forgetting and forgetting in propositional logic, and provide complexity results for the related decision problems. An application of the proposed forgetting to a conflict-solving scenario is also considered.
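For the connection to forgetting in propositional logic mentioned above: classical forgetting of a variable p from a formula phi is the disjunction phi[p/true] or phi[p/false], which eliminates p while preserving all consequences not involving p. A minimal truth-table sketch (formulas represented as Python predicates over assignment dicts; all names are hypothetical):

```python
from itertools import product

def forget(phi, var):
    """Classical propositional forgetting:
    forget(phi, p) = phi[p/True] OR phi[p/False]."""
    def psi(asg):
        return phi({**asg, var: True}) or phi({**asg, var: False})
    return psi

def models(phi, variables):
    """All satisfying assignments of phi over an ordered variable list."""
    return {asg for asg in product([False, True], repeat=len(variables))
            if phi(dict(zip(variables, asg)))}

# Forgetting p from (p AND q) leaves just q: the result no longer
# constrains p, but keeps everything phi said about q.
phi = lambda a: a['p'] and a['q']
psi = forget(phi, 'p')
print(sorted(models(psi, ['p', 'q'])))
```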
A. Fern, S. Natarajan, K. Judah and P. Tadepalli (2014)
"A Decision-Theoretic Model of Assistance",
Volume 50, pages 71-104
For quick access go to
Abstract:
There is a growing interest in intelligent assistants for a variety of applications, from sorting email to helping people with disabilities with their daily chores. In this paper, we formulate the problem of intelligent assistance in a decision-theoretic framework, and present both theoretical and empirical results. We first introduce a class of POMDPs called hidden-goal MDPs (HGMDPs), which formalizes the problem of interactively assisting an agent whose goal is hidden and whose actions are observable. In spite of its restricted nature, we show that optimal action selection for HGMDPs is PSPACE-complete even for deterministic dynamics. We then introduce a more restricted model called helper action MDPs (HAMDPs), which are sufficient for modeling many real-world problems. We show classes of HAMDPs for which efficient algorithms are possible. More interestingly, for general HAMDPs we show that a simple myopic policy achieves near-optimal regret compared to an oracle assistant that knows the agent's goal. We then introduce more sophisticated versions of this policy for the general case of HGMDPs, which we combine with a novel approach for quickly learning about the agent being assisted. We evaluate our approach in two game-like computer environments where human subjects perform tasks, and in a real-world domain of providing assistance during folder navigation in a computer desktop environment. The results show that in all three domains the framework yields an assistant that substantially reduces user effort with only modest computation.
B. de Keijzer, T. B. Klos and Y. Zhang (2014)
"Finding Optimal Solutions for Voting Game Design Problems",
Volume 50, pages 105-140
For quick access go to
Abstract:
In many circumstances where multiple agents need to make a joint decision, voting is used to aggregate the agents' preferences. Each agent's vote carries a weight, and if the sum of the weights of the agents in favor of some outcome is larger than or equal to a given quota, then this outcome is decided upon. The distribution of weights leads to a certain distribution of power, and several `power indices' have been proposed to measure such power. In the so-called inverse problem, we are given a target distribution of power and are asked to come up with a game, in the form of a quota plus an assignment of weights to the players, whose power distribution is as close as possible to the target distribution (according to some specified distance measure).
Here we study solution approaches for the larger class of voting game design (VGD) problems, one of which is the inverse problem. In the general VGD problem, the goal is to find a voting game (with a given number of players) that optimizes some function over these games. In the inverse problem, for example, we look for a weighted voting game that minimizes the distance between the distribution of power among the players and a given target distribution of power (according to a given distance measure). Our goal is to find algorithms that solve voting game design problems exactly, and we approach this goal by enumerating all games in the class of games of interest.
We first present a doubly exponential algorithm for enumerating the set of simple games. We then improve on this algorithm for the class of weighted voting games and obtain a quadratic exponential (i.e., 2^O(n^2)) algorithm for enumerating them. We show that this improved algorithm runs in output-polynomial time, making it the fastest possible enumeration algorithm up to a polynomial factor. Finally, we propose an exact anytime algorithm, running in exponential time, for the power-index weighted voting game design problem (the `inverse problem'). We implement this algorithm to find a weighted voting game with a normalized Banzhaf power distribution closest to a target power index, and perform experiments to obtain some insights about the set of weighted voting games. We remark that our algorithm is applicable to optimizing any exponential-time computable function; the distance of the normalized Banzhaf index to a target power index is merely taken as an example.
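For concreteness, the normalized Banzhaf index targeted by the inverse problem can be computed by brute force on small games: a player's raw count is the number of winning coalitions she is critical for. A sketch (exponential in the number of players, illustration only, not the paper's algorithm):

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index of a weighted voting game, by brute-force
    enumeration of all coalitions."""
    n = len(weights)
    raw = [0] * n
    for r in range(n + 1):
        for coal in combinations(range(n), r):
            w = sum(weights[i] for i in coal)
            if w >= quota:
                # count players whose removal turns the coalition losing
                for i in coal:
                    if w - weights[i] < quota:
                        raw[i] += 1
    total = sum(raw)
    return [c / total for c in raw] if total else raw

# classic example [quota 51; weights 49, 49, 2]: despite very unequal
# weights, all three players are equally powerful (index 1/3 each)
print(banzhaf([49, 49, 2], 51))
```

The gap between weight distribution and power distribution in this example is exactly why the inverse problem is non-trivial.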
M. Goldenberg, A. Felner, R. Stern, G. Sharon, N. Sturtevant, R. C. Holte and J. Schaeffer (2014)
"Enhanced Partial Expansion A*",
Volume 50, pages 141-187
For quick access go to
Abstract:
When solving instances of problem domains that feature a large branching factor, A* may generate a large number of nodes whose cost is greater than the cost of the optimal solution. We designate such nodes as surplus. Generating surplus nodes and adding them to the OPEN list may dominate both the time and memory of the search. A recently introduced variant of A* called Partial Expansion A* (PEA*) deals with the memory aspect of this problem. When expanding a node n, PEA* generates all of its children and puts into OPEN only the children with f = f(n). The node n is then re-inserted into the OPEN list with the f-cost of the best discarded child. This guarantees that surplus nodes are not inserted into OPEN.
In this paper, we present a novel variant of A* called Enhanced Partial Expansion A* (EPEA*) that advances the idea of PEA* to address the time aspect. Given a priori domain- and heuristic-specific knowledge, EPEA* generates only the nodes with f = f(n). Although EPEA* is not always applicable or practical, we study several variants of EPEA*, which make it applicable to a large number of domains and heuristics. In particular, the ideas of EPEA* are applicable to IDA* and to the domains where pattern databases are traditionally used. Experimental studies show significant improvements in run-time and memory performance for several standard benchmark applications. We provide several theoretical studies to facilitate an understanding of the new algorithm.
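The PEA* expansion step described in the abstract can be sketched as follows; the `successors` interface and the (f, node) heap representation are hypothetical, and this shows only the single step, not a full search loop:

```python
import heapq

def pea_star_expand(node, f_node, successors, open_heap):
    """One PEA*-style partial expansion step (sketch). Children with
    f == f(node) enter OPEN immediately; node is re-inserted with the
    smallest discarded child f-cost, so surplus nodes never enter OPEN.
    `successors(node)` yields (child, f_child) pairs (assumed interface)."""
    discarded = []
    for child, f_child in successors(node):
        if f_child == f_node:
            heapq.heappush(open_heap, (f_child, child))
        elif f_child > f_node:
            discarded.append(f_child)
    if discarded:  # re-insert node at the best discarded cost
        heapq.heappush(open_heap, (min(discarded), node))

# hypothetical node 'n' with f=5 and children at f = 5, 7, 6:
open_heap = []
pea_star_expand('n', 5, lambda n: [('a', 5), ('b', 7), ('c', 6)], open_heap)
print(sorted(open_heap))  # child 'a' enters OPEN; 'n' re-enters at f=6
```

EPEA*'s refinement is to avoid even generating the discarded children, using domain knowledge to predict which operators yield f = f(n).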
J. De Bock and G. de Cooman (2014)
"An Efficient Algorithm for Estimating State Sequences in Imprecise Hidden Markov Models",
Volume 50, pages 189-233
For quick access go to
Abstract:
We present an efficient exact algorithm for estimating state sequences from outputs or observations in imprecise hidden Markov models (iHMMs). The uncertainty linking one state to the next, and that linking a state to its output, is represented by a set of probability mass functions instead of a single such mass function. We consider as best estimates for state sequences the maximal sequences for the posterior joint state model conditioned on the observed output sequence, associated with a gain function that is the indicator of the state sequence. This corresponds to and generalises finding the state sequence with the highest posterior probability in (precise-probabilistic) HMMs, thereby making our algorithm a generalisation of the one by Viterbi. We argue that the computational complexity of our algorithm is at worst quadratic in the length of the iHMM, cubic in the number of states, and essentially linear in the number of maximal state sequences. An important feature of our imprecise approach is that there may be more than one maximal sequence, typically in those instances where its precise-probabilistic counterpart is sensitive to the choice of prior. For binary iHMMs, we investigate experimentally how the number of maximal state sequences depends on the model parameters. We also present an application in optical character recognition, demonstrating that our algorithm can be usefully applied to robustify the inferences made by its precise-probabilistic counterpart.
----------------------------------------------------------------
From jair-ed at isi.edu Wed Jul 2 05:10:52 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Wed, 02 Jul 2014 07:10:52 -0500
Subject: [Jairsubscribers] 7 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
----------------------------------------------------------------
I. New JAIR Articles
N. Rivera, L. Illanes, J. A. Baier and C. Hernandez (2014)
"Reconnection with the Ideal Tree: A New Approach to Real-Time Search",
Volume 50, pages 235-264
For quick access go to
Abstract:
Many applications, ranging from video games to dynamic robotics, require solving single-agent, deterministic search problems in partially known environments under very tight time constraints. Real-Time Heuristic Search (RTHS) algorithms are specifically designed for those applications. As a subroutine, most of them invoke a standard, but bounded, search algorithm that searches for the goal. In this paper we present FRIT, a simple approach for single-agent deterministic search problems under tight constraints and partially known environments that unlike traditional RTHS does not search for the goal but rather searches for a path that connects the current state with a so-called ideal tree T . When the agent observes that an arc in the tree cannot be traversed in the actual environment, it removes such an arc from T and then carries out a reconnection search whose objective is to find a path between the current state and any node in T . The reconnection search is done using an algorithm that is passed as a parameter to FRIT. If such a parameter is an RTHS algorithm, then the resulting algorithm can be an RTHS algorithm. We show, in addition, that FRIT may be fed with a (bounded) complete blind-search algorithm. We evaluate our approach over grid pathfinding benchmarks including game maps and mazes. Our results show that FRIT, used with RTAA*, a standard RTHS algorithm, outperforms RTAA* significantly; by one order of magnitude under tight time constraints. In addition, FRIT(daRTAA*) substantially outperforms daRTAA*, a state-of-the-art RTHS algorithm, usually obtaining solutions 50% cheaper on average when performing the same search effort. Finally, FRIT(BFS), i.e., FRIT using breadth-first-search, obtains best-quality solutions when time is limited compared to Adaptive A* and Repeated A*. 
We also show that Bug2, a pathfinding-specific navigation algorithm, outperforms FRIT(BFS) when planning time is extremely limited; when given more time, the situation reverses.
M. Suda (2014)
"Property Directed Reachability for Automated Planning",
Volume 50, pages 265-319
For quick access go to
Abstract:
Property Directed Reachability (PDR) is a very promising recent method for deciding reachability in symbolically represented transition systems. While originally conceived as a model checking algorithm for hardware circuits, it has already been successfully applied in several other areas. This paper is the first investigation of PDR from the perspective of automated planning.
Similarly to the planning-as-satisfiability paradigm, PDR draws its strength from internally employing an efficient SAT solver. We show that most standard encoding schemes of planning into SAT can be directly used to turn PDR into a planning algorithm. As a non-obvious alternative, we propose to replace the SAT solver inside PDR with a planning-specific procedure implementing the same interface. This SAT-solver-free variant is not only more efficient, but offers additional insights and opportunities for further improvements. An experimental comparison to state-of-the-art planners finds it highly competitive, solving most problems in several domains.
F.M. Delle Fave, A.X. Jiang, Z. Yin, C. Zhang, M. Tambe, S. Kraus and J. P. Sullivan (2014)
"Game-Theoretic Patrolling with Dynamic Execution Uncertainty and a Case Study on a Real Transit System",
Volume 50, pages 321-367
For quick access go to
Abstract:
Attacker-Defender Stackelberg security games (SSGs) have emerged as an important research area in multi-agent systems. However, existing SSG models yield fixed, static schedules, which fail in dynamic domains where defenders face execution uncertainty, i.e., domains where defenders may face unanticipated disruptions of their schedules. A concrete example is an application involving checking fares on trains, where a defender's schedule is frequently interrupted by fare evaders, making static schedules useless.
To address this shortcoming, this paper provides four main contributions. First, we present a novel general Bayesian Stackelberg game model for security resource allocation in dynamic uncertain domains. In this new model, execution uncertainty is handled by using a Markov decision process (MDP) to generate defender policies. Second, we study the problem of computing a Stackelberg equilibrium for this game and exploit problem structure to reduce it to a polynomial-sized optimization problem. Shifting to evaluation, our third contribution shows, in simulation, that our MDP-based policies overcome the failures of previous SSG algorithms. We can thus build a complete system that handles schedule interruptions and, consequently, conduct some of the first controlled experiments on SSGs in the field. Hence, as our final contribution, we present results from a real-world experiment on Metro trains in Los Angeles validating our MDP-based model and, most importantly, concretely measuring the benefits of SSGs for security resource allocation.
J.R. Doppa, A. Fern and P. Tadepalli (2014)
"HC-Search: A Learning Framework for Search-based Structured Prediction",
Volume 50, pages 369-407
For quick access go to
Abstract:
Structured prediction is the problem of learning a function that maps structured inputs to structured outputs. Prototypical examples of structured prediction include part-of-speech tagging and semantic segmentation of images. Inspired by the recent successes of search-based structured prediction, we introduce a new framework for structured prediction called HC-Search. Given a structured input, the framework uses a search procedure guided by a learned heuristic H to uncover high quality candidate outputs and then employs a separate learned cost function C to select a final prediction among those outputs. The overall loss of this prediction architecture decomposes into the loss due to H not leading to high quality outputs, and the loss due to C not selecting the best among the generated outputs. Guided by this decomposition, we minimize the overall loss in a greedy stage-wise manner by first training H to quickly uncover high quality outputs via imitation learning, and then training C to correctly rank the outputs generated via H according to their true losses. Importantly, this training procedure is sensitive to the particular loss function of interest and the time-bound allowed for predictions. Experiments on several benchmark domains show that our approach significantly outperforms several state-of-the-art methods.
R. Bredereck, J. Chen, S. Hartung, S. Kratsch, R. Niedermeier, O. Suchy and G. J. Woeginger (2014)
"A Multivariate Complexity Analysis of Lobbying in Multiple Referenda",
Volume 50, pages 409-446
For quick access go to
Abstract:
Assume that each of n voters may or may not approve each of m issues. If an agent (the lobby) may influence up to k voters, then the central question of the NP-hard Lobbying problem is whether the lobby can choose the voters to be influenced so that, as a result, each issue gets a majority of approvals. This problem can be modeled as a simple matrix modification problem: can one replace k rows of a binary n x m matrix by k all-1 rows such that each column in the resulting matrix has a majority of 1s? Significantly extending previous work that showed parameterized intractability (W[2]-completeness) with respect to the number k of modified rows, we study how natural parameters such as n, m, k, or the "maximum number of 1s missing for any column to have a majority of 1s" (referred to as the "gap value g") govern the computational complexity of Lobbying. Among other results, we prove that Lobbying is fixed-parameter tractable for the parameter m, and provide a greedy logarithmic-factor approximation algorithm which even solves Lobbying optimally if m < 5. We also show empirically that this greedy algorithm performs well on general instances. As a further key result, we prove that Lobbying is LOGSNP-complete for constant values g > 0, thus providing a first natural complete problem from voting for this complexity class of limited nondeterminism.
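A natural greedy strategy for the matrix formulation above is to repeatedly influence the voter (row) whose switch to all-1s helps the most columns that are still short of a majority. The sketch below illustrates this idea on the matrix encoding; the tie-breaking and exact scoring are my assumptions, not necessarily the paper's algorithm:

```python
def greedy_lobby(matrix):
    """Greedy heuristic for Lobbying (sketch): repeatedly turn into an
    all-1 row the voter whose change covers the most columns still
    lacking a majority of 1s. Returns the indices of influenced rows."""
    n, m = len(matrix), len(matrix[0])
    need = n // 2 + 1                       # strict majority of 1s per column
    rows = [row[:] for row in matrix]
    chosen = []

    def gaps():
        return [max(0, need - sum(r[j] for r in rows)) for j in range(m)]

    while sum(gaps()) > 0:
        g = gaps()
        # gain of row i: columns still short where row i currently has a 0
        best = max((i for i in range(n) if i not in chosen),
                   key=lambda i: sum(1 for j in range(m)
                                     if g[j] > 0 and rows[i][j] == 0))
        rows[best] = [1] * m
        chosen.append(best)
    return chosen

# 3 voters, 2 issues; influencing voter 0 alone gives both issues a majority
print(greedy_lobby([[0, 0], [0, 1], [1, 0]]))  # → [0]
```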
M. Cooper, F. Maris and P. Régnier (2014)
"Monotone Temporal Planning: Tractability, Extensions and Applications",
Volume 50, pages 447-485
For quick access go to
Abstract:
This paper describes a polynomially-solvable class of temporal planning problems. Polynomiality follows from two assumptions. Firstly, by supposing that each sub-goal fluent can be established by at most one action, we can quickly determine which actions are necessary in any plan. Secondly, the monotonicity of sub-goal fluents allows us to express planning as an instance of STP≠ (Simple Temporal Problem with difference constraints). This class includes temporally-expressive problems requiring the concurrent execution of actions, with potential applications in the chemical, pharmaceutical and construction industries.
We also show that any (temporal) planning problem has a monotone relaxation which can lead to the polynomial-time detection of its unsolvability in certain cases. Indeed we show that our relaxation is orthogonal to relaxations based on the ignore-deletes approach used in classical planning since it preserves deletes and can also exploit temporal information.
E. Keyder, J. Hoffmann and P. Haslum (2014)
"Improving Delete Relaxation Heuristics Through Explicitly Represented Conjunctions",
Volume 50, pages 487-533
For quick access go to
Abstract:
Heuristic functions based on the delete relaxation compute upper and lower bounds on the optimal delete-relaxation heuristic h+, and are of paramount importance in both optimal and satisficing planning. Here we introduce a principled and flexible technique for improving h+, by augmenting delete-relaxed planning tasks with a limited amount of delete information. This is done by introducing special fluents that explicitly represent conjunctions of fluents in the original planning task, rendering h+ the perfect heuristic h* in the limit. Previous work has introduced a method in which the growth of the task is potentially exponential in the number of conjunctions introduced. We formulate an alternative technique relying on conditional effects, limiting the growth of the task to be linear in this number. We show that this method still renders h+ the perfect heuristic h* in the limit. We propose techniques to find an informative set of conjunctions to be introduced in different settings, and analyze and extend existing methods for lower-bounding and upper-bounding h+ in the presence of conditional effects. We evaluate the resulting heuristic functions empirically on a set of IPC benchmarks, and show that they are sometimes much more informative than standard delete-relaxation heuristics.
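As background for the delete relaxation underlying these heuristics: ignoring delete lists makes fluent reachability computable by a simple fixpoint, and unreachability of the goal in the relaxation proves unsolvability of the real task. A minimal sketch over a hypothetical STRIPS-like encoding (actions as precondition/add-effect set pairs):

```python
def relaxed_reachable(init, goal, actions):
    """Delete-relaxation sketch: ignore delete lists and compute the set
    of reachable fluents by fixpoint iteration. `actions` is a list of
    (preconditions, add_effects) frozen/plain-set pairs."""
    fluents = set(init)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            if pre <= fluents and not add <= fluents:
                fluents |= add   # applying an action can only add fluents
                changed = True
    return goal <= fluents

# hypothetical chain: a enables b, b enables c
acts = [({'a'}, {'b'}), ({'b'}, {'c'})]
print(relaxed_reachable({'a'}, {'c'}, acts))  # → True
```

The h+ heuristic discussed in the abstract is the cost of an optimal plan in this relaxed space; the paper's explicit conjunction fluents tighten the relaxation toward the true h*.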
----------------------------------------------------------------
From jair-ed at isi.edu Fri Aug 1 22:28:00 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Sat, 02 Aug 2014 00:28:00 -0500
Subject: [Jairsubscribers] 5 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
D. Terekhov, T. T. Tran, D. G. Down and J.C. Beck (2014)
"Integrating Queueing Theory and Scheduling for Dynamic Scheduling Problems",
Volume 50, pages 535-572
For quick access go to
Abstract:
Dynamic scheduling problems consist of both challenging combinatorics, as found in classical scheduling problems, and stochastics due to uncertainty about the arrival times, resource requirements, and processing times of jobs. To address these two challenges, we investigate the integration of queueing theory and scheduling. The former reasons about long-run stochastic system characteristics, whereas the latter typically deals with short-term combinatorics. We investigate two simple problems to isolate the core differences and potential synergies between the two approaches: a two-machine dynamic flowshop and a flexible queueing network. We show for the first time that stability, a fundamental characteristic in queueing theory, can be applied to approaches that periodically solve combinatorial scheduling problems. We empirically demonstrate that for a dynamic flowshop, the use of combinatorial reasoning has little impact on schedule quality beyond queueing approaches. In contrast, for the more complicated flexible queueing network, a novel algorithm that combines long-term guidance from queueing theory with short-term combinatorial decision making outperforms all other tested approaches. To our knowledge, this is the first time that such a hybrid of queueing theory and scheduling techniques has been proposed and evaluated.
A. Rey and J. Rothe (2014)
"False-Name Manipulation in Weighted Voting Games is Hard for Probabilistic Polynomial Time",
Volume 50, pages 573-601
For quick access go to
Abstract:
False-name manipulation refers to the question of whether a player in a weighted voting game can increase her power by splitting into several players and distributing her weight among these false identities. Relatedly, the beneficial merging problem asks whether a coalition of players can increase their power in a weighted voting game by merging their weights. For the problems of whether merging or splitting players in weighted voting games is beneficial in terms of the Shapley--Shubik and the normalized Banzhaf index, only NP-hardness lower bounds are known, leaving the question of their exact complexity open. For the Shapley--Shubik and the probabilistic Banzhaf index, we raise these lower bounds to hardness for PP, "probabilistic polynomial time," a class believed to be far larger than NP. For both power indices, we provide matching upper bounds for beneficial merging and, whenever the new players' weights are given, also for beneficial splitting, thus resolving previous conjectures in the affirmative. Relatedly, we consider the beneficial annexation problem, asking whether a single player can increase her power by taking over other players' weights. It is known that annexation is never disadvantageous for the Shapley--Shubik index, and that beneficial annexation is NP-hard for the normalized Banzhaf index. We show that annexation is never disadvantageous for the probabilistic Banzhaf index either, and for both the Shapley--Shubik index and the probabilistic Banzhaf index we show that it is NP-complete to decide whether annexing another player is advantageous. Moreover, we propose a general framework for merging and splitting that can be applied to different classes and representations of games.
D. D. Maua, C. P. de Campos, A. Benavoli and A. Antonucci (2014)
"Probabilistic Inference in Credal Networks: New Complexity Results",
Volume 50, pages 603-637
For quick access go to
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove a similar result for inference in naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and naive Bayes structures are used in real applications of imprecise probability.
A. Gerevini, A. Saetti and M. Vallati (2014)
"Planning through Automatic Portfolio Configuration: The PbP Approach",
Volume 50, pages 639-696
For quick access go to
Abstract:
In the field of domain-independent planning, several powerful planners implementing different techniques have been developed. However, none of these systems outperforms all others in every known benchmark domain. In this work, we propose a multi-planner approach that automatically configures a portfolio of planning techniques for each given domain. The configuration process for a given domain uses a set of training instances to: (i) compute and analyze some alternative sets of macro-actions for each planner in the portfolio, identifying a (possibly empty) useful set; (ii) select a cluster of planners, each with the identified useful set of macro-actions, that is expected to perform best; and (iii) derive some additional information for configuring the execution scheduling of the selected planners at planning time. The resulting planning system, called PbP (Portfolio-based Planner), has two variants focusing on speed and plan quality. Different versions of PbP entered and won the learning track of the sixth and seventh International Planning Competitions. In this paper, we experimentally analyze PbP in depth, considering both planning speed and plan quality. We provide a collection of results that help to understand PbP's behavior, and demonstrate the effectiveness of our approach to configuring a portfolio of planners with macro-actions.
D. Bergman, A. A. Cire and W. van Hoeve (2014)
"MDD Propagation for Sequence Constraints",
Volume 50, pages 697-722
For quick access go to
Abstract:
We study propagation for the Sequence constraint in the context of constraint programming based on limited-width MDDs. Our first contribution is proving that establishing MDD-consistency for Sequence is NP-hard. Yet, we also show that this task is fixed parameter tractable with respect to the length of the sub-sequences. In addition, we propose a partial filtering algorithm that relies on a specific decomposition of the constraint and a novel extension of MDD filtering to node domains. We experimentally evaluate the performance of our proposed filtering algorithm, and demonstrate that the strength of the MDD propagation increases as the maximum width is increased. In particular, MDD propagation can outperform conventional domain propagation for Sequence by reducing the search tree size and solving time by several orders of magnitude. Similar improvements are observed with respect to the current best MDD approach that applies the decomposition of Sequence into Among constraints.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Tue Sep 2 20:30:12 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Tue, 02 Sep 2014 22:30:12 -0500
Subject: [Jairsubscribers] 6 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
----------------------------------------------------------------
I. New JAIR Articles
S. Kiritchenko, X. Zhu and S. M. Mohammad (2014)
"Sentiment Analysis of Short Informal Texts",
Volume 50, pages 723-762
For quick access go to
Abstract:
We describe a state-of-the-art sentiment analysis system that detects (a) the sentiment of short informal textual messages such as tweets and SMS (message-level task) and (b) the sentiment of a word or a phrase within a message (term-level task). The system is based on a supervised statistical text classification approach leveraging a variety of surface-form, semantic, and sentiment features. The sentiment features are primarily derived from novel high-coverage tweet-specific sentiment lexicons. These lexicons are automatically generated from tweets with sentiment-word hashtags and from tweets with emoticons. To adequately capture the sentiment of words in negated contexts, a separate sentiment lexicon is generated for negated words.
The system ranked first in the SemEval-2013 shared task `Sentiment Analysis in Twitter' (Task 2), obtaining an F-score of 69.02 in the message-level task and 88.93 in the term-level task. Post-competition improvements boost the performance to an F-score of 70.45 (message-level task) and 89.50 (term-level task). The system also obtains state-of-the-art performance on two additional datasets: the SemEval-2013 SMS test set and a corpus of movie review excerpts. The ablation experiments demonstrate that the use of the automatically generated lexicons results in performance gains of up to 6.5 absolute percentage points.
A. M. S. Barreto, J. Pineau and D. Precup (2014)
"Policy Iteration Based on Stochastic Factorization",
Volume 50, pages 763-803
For quick access go to
Abstract:
When a transition probability matrix is represented as the product of two stochastic matrices, one can swap the factors of the multiplication to obtain another transition matrix that retains some fundamental characteristics of the original. Since the derived matrix can be much smaller than its precursor, this property can be exploited to create a compact version of a Markov decision process (MDP), and hence to reduce the computational cost of dynamic programming. Building on this idea, this paper presents an approximate policy iteration algorithm called policy iteration based on stochastic factorization, or PISF for short. In terms of computational complexity, PISF replaces standard policy iteration's cubic dependence on the size of the MDP with a function that grows only linearly with the number of states in the model. The proposed algorithm also enjoys nice theoretical properties: it always terminates after a finite number of iterations and returns a decision policy whose performance only depends on the quality of the stochastic factorization. In particular, if the approximation error in the factorization is sufficiently small, PISF computes the optimal value function of the MDP. The paper also discusses practical ways of factoring an MDP and illustrates the usefulness of the proposed algorithm with an application involving a large-scale decision problem of real economic interest.
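The factor-swapping idea at the heart of this abstract can be sketched in a few lines. This is a minimal illustration with hypothetical matrices, not the paper's actual factorization method: if an n x n transition matrix P factors as P = DK, with D (n x m) and K (m x n) both row-stochastic, then KD is an m x m transition matrix over a reduced state space.

```python
import numpy as np

def swap_factors(D, K):
    """Given stochastic factors D (n x m) and K (m x n), return K @ D (m x m)."""
    return K @ D

# Hypothetical example: n = 3 states, m = 2 aggregate states.
D = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])       # rows sum to 1
K = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])  # rows sum to 1

P = D @ K                      # original 3 x 3 transition matrix
P_small = swap_factors(D, K)   # reduced 2 x 2 transition matrix

# Both products are row-stochastic, so both are valid transition matrices.
assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(P_small.sum(axis=1), 1.0)
```

The swapped product is stochastic for a simple reason: each row of K sums to 1 and each row of D sums to 1, so each row of KD sums to 1 as well, while the reduced matrix has only m states instead of n.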
U. Thayasivam and P. Doshi (2014)
"Speeding Up Iterative Ontology Alignment using Block-Coordinate Descent",
Volume 50, pages 805-845
For quick access go to
Abstract:
In domains such as biomedicine, ontologies are prominently utilized for annotating data. Consequently, aligning ontologies facilitates integrating data. Several algorithms exist for automatically aligning ontologies with diverse levels of performance. As alignment applications evolve and exhibit online run time constraints, performing the alignment in a reasonable amount of time without compromising the quality of the alignment is a crucial challenge. A large class of alignment algorithms is iterative and often consumes more time than others in delivering solutions of high quality. We present a novel and general approach for speeding up the multivariable optimization process utilized by these algorithms. Specifically, we use the technique of block-coordinate descent (BCD), which exploits the subdimensions of the alignment problem identified using a partitioning scheme. We integrate this approach into multiple well-known alignment algorithms and show that the enhanced algorithms generate similar or improved alignments in significantly less time on a comprehensive testbed of ontology pairs. Because BCD does not overly constrain how we partition or order the parts, we vary the partitioning and ordering schemes in order to empirically determine the best schemes for each of the selected algorithms. As biomedicine represents a key application domain for ontologies, we introduce a comprehensive biomedical ontology testbed for the community in order to evaluate alignment algorithms. Because biomedical ontologies tend to be large, default iterative techniques find it difficult to produce a good quality alignment within a reasonable amount of time. We align a significant number of ontology pairs from this testbed using BCD-enhanced algorithms. Our contributions represent an important step toward making a significant class of alignment techniques computationally feasible.
Y. Zick, E. Markakis and E. Elkind (2014)
"Arbitration and Stability in Cooperative Games with Overlapping Coalitions",
Volume 50, pages 847-884
For quick access go to
Abstract:
Overlapping Coalition Formation (OCF) games, introduced by Chalkiadakis, Elkind, Markakis, Polukarov and Jennings in 2010, are cooperative games where players can simultaneously participate in several coalitions. Capturing the notion of stability in OCF games is a difficult task: deviating players may continue to contribute resources to joint projects with non-deviators, and the crucial question is what payoffs the deviators expect to receive from such projects. Chalkiadakis et al. introduce three stability concepts for OCF games---the conservative core, the refined core, and the optimistic core---that are based on different answers to this question. In this paper, we propose a unified framework for the study of stability in the OCF setting, which encompasses the stability concepts considered by Chalkiadakis et al. as well as a wide variety of alternative stability concepts. Our approach is based on the notion of arbitration functions, which determine the payoff obtained by the deviators, given their deviation and the current allocation of resources. We provide a characterization of stable outcomes under arbitration. We then conduct an in-depth study of four types of arbitration functions, which correspond to four notions of the core; these include the three notions of the core considered by Chalkiadakis et al. Our results complement those of Chalkiadakis et al. and answer questions left open by their work. In particular, we show that OCF games with the conservative arbitration function are essentially equivalent to non-OCF games, by relating the conservative core of an OCF game to the core of a non-overlapping cooperative game, and use this result to obtain a strictly weaker sufficient condition for conservative core non-emptiness than the one given by Chalkiadakis et al.
A. Veit, Y. Xu, R. Zheng, N. Chakraborty and K. Sycara (2014)
"Demand Side Energy Management via Multiagent Coordination in Consumer Cooperatives",
Volume 50, pages 885-922
For quick access go to
Abstract:
A key challenge in creating a sustainable and energy-efficient society is to make consumer demand adaptive to the supply of energy, especially to the renewable supply. In this article, we propose a partially-centralized organization of consumers (or agents), namely, a consumer cooperative that purchases electricity from the market. In the cooperative, a central coordinator buys the electricity for the whole group. The technical challenge is that consumers make their own demand decisions, based on their private demand constraints and preferences, which they do not share with the coordinator or other agents. We propose a novel multiagent coordination algorithm to shape the energy demand of the cooperative. To coordinate individual consumers under incomplete information, the coordinator determines virtual price signals that it sends to the consumers to induce them to shift their demands when required. We prove that this algorithm converges to the central optimal solution and minimizes the electric energy cost of the cooperative. Additionally, we present results on the time complexity of the iterative algorithm and its implications for agents' incentive compatibility. Furthermore, we perform simulations based on real world consumption data to (a) characterize the convergence properties of our algorithm and (b) understand the effect of differing demand characteristics of participants as well as of different price functions on the cost reduction. The results show that the convergence time scales linearly with the agent population size and length of the optimization horizon. Finally, we observe that as participants' flexibility of shifting their demands increases, cost reduction increases and that the cost reduction is not sensitive to variation in consumption patterns of the consumers.
B. Bonet and H. Geffner (2014)
"Belief Tracking for Planning with Sensing: Width, Complexity and Approximations",
Volume 50, pages 923-970
For quick access go to
Abstract:
We consider the problem of belief tracking in a planning setting where states are valuations over a set of variables that are partially observable, and beliefs stand for the sets of states that are possible. While the problem is intractable in the worst case, it has been recently shown that in deterministic conformant and contingent problems, belief tracking is exponential in a width parameter that is often bounded and small. In this work, we extend these results in two ways. First, we introduce a width notion that applies to non-deterministic problems as well, develop a factored belief tracking algorithm that is exponential in the problem width, and show how it applies to existing benchmarks. Second, we introduce a meaningful, powerful, and sound approximation scheme, beam tracking, that is exponential in a smaller parameter, the problem causal width, and has much broader applicability. We illustrate the value of this algorithm over large instances of problems such as Battleship, Minesweeper, and Wumpus, where it yields state-of-the-art performance in real-time.
----------------------------------------------------------------
From jair-ed at isi.edu Wed Oct 1 21:33:01 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Wed, 01 Oct 2014 23:33:01 -0500
Subject: [Jairsubscribers] 7 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
----------------------------------------------------------------
I. New JAIR Articles
K. Woodsend and M. Lapata (2014)
"Text Rewriting Improves Semantic Role Labeling",
Volume 51, pages 133-164
For quick access go to
Abstract:
Large-scale annotated corpora are a prerequisite to developing high-performance NLP systems. Such corpora are expensive to produce, limited in size, and often demand linguistic expertise. In this paper we use text rewriting as a means of increasing the amount of labeled data available for model training. Our method uses rewrite rules automatically extracted from comparable corpora and bitexts to generate multiple versions of sentences annotated with gold standard labels. We apply this idea to semantic role labeling and show that a model trained on rewritten data outperforms the state of the art on the CoNLL-2009 benchmark dataset.
R. Micalizio and P. Torasso (2014)
"Cooperative Monitoring to Diagnose Multiagent Plans",
Volume 51, pages 1-70
For quick access go to
Abstract:
Diagnosing the execution of a Multiagent Plan (MAP) means identifying and explaining action failures (i.e., actions that did not reach their expected effects). Current approaches to MAP diagnosis are substantially centralized, and assume that action failures are independent of each other.
In this paper, the diagnosis of MAPs, executed in a dynamic and partially observable environment, is addressed in a fully distributed and asynchronous way; in addition, action failures are no longer assumed to be independent of each other.
The paper presents a novel methodology, named Cooperative Weak-Committed Monitoring (CWCM), enabling agents to cooperate while monitoring their own actions. Cooperation helps the agents to cope with very scarcely observable environments: what an agent cannot observe directly can be acquired from other agents. CWCM exploits nondeterministic action models to carry out two main tasks: detecting action failures and building trajectory-sets (i.e., structures representing the knowledge an agent has about the environment in the recent past). Relying on trajectory-sets, each agent is able to explain its own action failures in terms of exogenous events that have occurred during the execution of the actions themselves. To cope with dependent failures, CWCM is coupled with a diagnostic engine that distinguishes between primary and secondary action failures.
An experimental analysis demonstrates that the CWCM methodology, together with the proposed diagnostic inferences, are effective in identifying and explaining action failures even in scenarios where the system observability is significantly reduced.
M. Winikoff and S. Cranefield (2014)
"On the Testability of BDI Agent Systems",
Volume 51, pages 71-131
For quick access go to
Abstract:
Before deploying a software system we need to assure ourselves (and stakeholders) that the system will behave correctly. This assurance is usually done by testing the system. However, it is intuitively obvious that adaptive systems, including agent-based systems, can exhibit complex behaviour, and are thus harder to test. In this paper we examine this "obvious intuition" in the case of Belief-Desire-Intention (BDI) agents. We analyse the size of the behaviour space of BDI agents and show that although the intuition is correct, the factors that influence the size are not what we expected them to be. Specifically, we found that the introduction of failure handling had a much larger effect on the size of the behaviour space than we expected. We also discuss the implications of these findings on the testability of BDI agents.
Z. Feldman and C. Domshlak (2014)
"Simple Regret Optimization in Online Planning for Markov Decision Processes",
Volume 51, pages 165-205
For quick access go to
Abstract:
We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses on its current state only, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose what action to perform next. Formally, the performance of algorithms for online planning is assessed in terms of simple regret, the agent's expected performance loss when the chosen action, rather than an optimal one, is followed.
To date, state-of-the-art algorithms for online planning in general MDPs are either best effort, or guarantee only polynomial-rate reduction of simple regret over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees exponential-rate and smooth reduction of simple regret. At a high level, BRUE is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. We further extend BRUE with a variant of "learning by forgetting." The resulting parametrized algorithm, BRUE(alpha), exhibits even more attractive formal guarantees than BRUE. Our empirical evaluation shows that both BRUE and its generalization, BRUE(alpha), are also very effective in practice and compare favorably to the state-of-the-art.
A. Adiga, C. J. Kuhlman, H. S. Mortveit and A. K. S. Vullikanti (2014)
"Sensitivity of Diffusion Dynamics to Network Uncertainty",
Volume 51, pages 207-226
For quick access go to
Abstract:
Simple diffusion processes on networks have been used to model, analyze and predict diverse phenomena such as spread of diseases, information and memes. More often than not, the underlying network data is noisy and sampled. This prompts the following natural question: how sensitive are the diffusion dynamics and subsequent conclusions to uncertainty in the network structure?
In this paper, we consider two popular diffusion models: the Independent cascade (IC) model and the Linear threshold (LT) model. We study how the expected number of vertices that are influenced/infected, for particular initial conditions, is affected by network perturbations. Through rigorous analysis under the assumption of a reasonable perturbation model, we establish the following main results. (1) For the IC model, we characterize the sensitivity to network perturbation in terms of the critical probability for phase transition of the network. We find that the expected number of infections is quite stable, unless the transmission probability is close to the critical probability. (2) We show that the standard LT model with uniform edge weights is relatively stable under network perturbations. (3) We study these sensitivity questions using extensive simulations on diverse real world networks and find that our theoretical predictions for both models match the observations quite closely. (4) Experimentally, the transient behavior, i.e., the time series of the number of infections, in both models appears to be more sensitive to network perturbations.
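As a toy illustration of the kind of sensitivity question studied here, one can run the independent cascade model on a network and on a perturbed copy, and compare the estimated expected spread. The graph, seed set, and probabilities below are hypothetical; this is a sketch of the IC process itself, not the authors' experimental setup.

```python
import random

def independent_cascade(adj, seeds, p, rng):
    """One IC run: each newly infected vertex infects each uninfected
    neighbour independently with probability p. Returns the final count."""
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(infected)

def mean_spread(adj, seeds, p, runs=2000, seed=0):
    """Monte-Carlo estimate of the expected number of infected vertices."""
    rng = random.Random(seed)
    return sum(independent_cascade(adj, seeds, p, rng) for _ in range(runs)) / runs

# Hypothetical path graph 0-1-2-3-4 and a perturbed copy with edge 3-4 dropped.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
adj_perturbed = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: []}

base = mean_spread(adj, [0], p=0.5)
pert = mean_spread(adj_perturbed, [0], p=0.5)
# Away from the critical probability, the two estimates should stay close.
```

On this path graph the exact expectations are 1.9375 and 1.875 infections respectively, so with a transmission probability well away from the critical value the perturbation barely moves the estimate, in the spirit of result (1) above.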
Z. Zhuang and M. Pagnucco (2014)
"Entrenchment-Based Horn Contraction",
Volume 51, pages 227-254
For quick access go to
Abstract:
The AGM framework is the benchmark approach in belief change. Since the framework assumes an underlying logic containing classical Propositional Logic, it cannot be applied to systems with a logic weaker than Propositional Logic. To remedy this limitation, several researchers have studied AGM-style contraction and revision under the Horn fragment of Propositional Logic (i.e., Horn logic). In this paper, we contribute to this line of research by investigating the Horn version of AGM entrenchment-based contraction. The study is challenging as the construction of entrenchment-based contraction refers to arbitrary disjunctions, which are not expressible under Horn logic. In order to adapt the construction to Horn logic, we make use of a Horn approximation technique called Horn strengthening. We provide a representation theorem for the newly constructed contraction, which we refer to as entrenchment-based Horn contraction. Ideally, contractions defined under Horn logic (i.e., Horn contractions) should be as rational as AGM contraction. We propose the notion of Horn equivalence which intuitively captures the equivalence between Horn contraction and AGM contraction. We show that, under this notion, entrenchment-based Horn contraction is equivalent to a restricted form of entrenchment-based contraction.
C. Bäckström, A. Jonsson and P. Jonsson (2014)
"Automaton Plans",
Volume 51, pages 255-291
For quick access go to
Abstract:
Macros have long been used in planning to represent subsequences of operators. Macros can be used in place of individual operators during search, sometimes reducing the effort required to find a plan to the goal. Another use of macros is to compactly represent long plans. In this paper we introduce a novel solution concept called automaton plans in which plans are represented using hierarchies of automata. Automaton plans can be viewed as an extension of macros that enables parameterization and branching. We provide several examples that illustrate how automaton plans can be useful, both as a compact representation of exponentially long plans and as an alternative to sequential solutions in benchmark domains such as Logistics and Grid. We also compare automaton plans to other compact plan representations from the literature, and find that automaton plans are strictly more expressive than macros, but strictly less expressive than HTNs and certain representations allowing efficient sequential access to the operators of the plan.
----------------------------------------------------------------
From cebly at cs.toronto.edu Tue Nov 11 14:53:09 2014
From: cebly at cs.toronto.edu (Craig Boutilier)
Date: Tue, 11 Nov 2014 17:53:09 -0500 (EST)
Subject: [Jairsubscribers] Follow JAIR on Twitter
Message-ID:
The Journal of Artificial Intelligence Research (JAIR) is now on
Twitter. Please see:
https://twitter.com/JAIR_Editor
and follow @JAIR_Editor to keep up-to-date with JAIR news,
initiatives, and new articles.
Craig Boutilier
Editor-in-Chief
Journal of Artificial Intelligence Research
From cebly at cs.toronto.edu Tue Nov 11 14:54:16 2014
From: cebly at cs.toronto.edu (Craig Boutilier)
Date: Tue, 11 Nov 2014 17:54:16 -0500 (EST)
Subject: [Jairsubscribers] new JAIR Track on Human Computation and AI
Message-ID:
The Journal of Artificial Intelligence Research (JAIR) is pleased to announce
the launch of the JAIR Special Track on Human Computation and Artificial
Intelligence. This special track has been established as a natural home for
the publication of leading research at the intersection of human computation
and artificial intelligence. For more information, please see:
http://www.jair.org/specialtrack-hcomp.html
Craig Boutilier
Editor-in-Chief
Journal of Artificial Intelligence Research
From jair-ed at isi.edu Tue Dec 2 06:31:23 2014
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Tue, 02 Dec 2014 08:31:23 -0600
Subject: [Jairsubscribers] 10 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
----------------------------------------------------------------
I. New JAIR Articles
R. Nissim and R. Brafman (2014)
"Distributed Heuristic Forward Search for Multi-agent Planning",
Volume 51, pages 293-332
For quick access go to
Abstract:
This paper deals with the problem of classical planning for multiple cooperative agents who have private information about their local state and capabilities they do not want to reveal. Two main approaches have recently been proposed to solve this type of problem -- one is based on reduction to distributed constraint satisfaction, and the other on partial-order planning techniques. In classical single-agent planning, constraint-based and partial-order planning techniques are currently dominated by heuristic forward search. The question arises whether it is possible to formulate a distributed heuristic forward search algorithm for privacy-preserving classical multi-agent planning. Our work provides a positive answer to this question in the form of a general approach to distributed state-space search in which each agent performs only the part of the state expansion relevant to it. The resulting algorithms are simple and efficient -- outperforming previous algorithms by orders of magnitude -- while offering similar flexibility to that of forward-search algorithms for single-agent planning. Furthermore, one particular variant of our general approach yields a distributed version of the A* algorithm that is the first cost-optimal distributed algorithm for privacy-preserving planning.
F. Belardinelli, A. Lomuscio and F. Patrizi (2014)
"Verification of Agent-Based Artifact Systems",
Volume 51, pages 333-376
For quick access go to
Abstract:
Artifact systems are a novel paradigm for specifying and implementing business processes described in terms of interacting modules called artifacts. Artifacts consist of data and lifecycles, accounting respectively for the relational structure of the artifacts' states and their possible evolutions over time. In this paper we put forward artifact-centric multi-agent systems, a novel formalisation of artifact systems in the context of multi-agent systems operating on them. Differently from the usual process-based models of services, we give a semantics that explicitly accounts for the data structures on which artifact systems are defined.
We study the model checking problem for artifact-centric multi-agent systems against specifications expressed in a quantified version of temporal-epistemic logic expressing the knowledge of the agents in the exchange. We begin by noting that the problem is undecidable in general. We identify a noteworthy class of systems that admit bisimilar, finite abstractions. It follows that we can verify these systems by investigating their finite abstractions; we also show that the corresponding model checking problem is EXPSPACE-complete. We then introduce artifact-centric programs, compact and declarative representations of the programs governing both the artifact system and the agents. We show that, while these in principle generate infinite-state systems, under natural conditions their verification problem can be solved on finite abstractions that can be effectively computed from the programs. We illustrate the theoretical results through a mainstream procurement scenario from the artifact systems literature.
A. Metodi, R. Stern, M. Kalech and M. Codish (2014)
"A Novel SAT-Based Approach to Model Based Diagnosis",
Volume 51, pages 377-411
For quick access go to
Abstract:
This paper introduces a novel encoding of Model Based Diagnosis (MBD) to Boolean Satisfiability (SAT) focusing on minimal cardinality diagnosis. The encoding is based on a combination of sophisticated MBD preprocessing algorithms and the application of a SAT compiler which optimizes the encoding to provide more succinct CNF representations than obtained with previous works. Experimental evidence indicates that our approach is superior to all published algorithms for minimal cardinality MBD. In particular, we can determine, for the first time, minimal cardinality diagnoses for the entire standard ISCAS-85 and 74XXX benchmarks. Our results open the way to improve the state-of-the-art on a range of similar MBD problems.
S. Cai, C. Luo and K. Su (2014)
"Scoring Functions Based on Second Level Score for k-SAT with Long Clauses",
Volume 51, pages 413-441
For quick access go to
Abstract:
It is widely acknowledged that stochastic local search (SLS) algorithms can efficiently find models for satisfiable instances of the satisfiability (SAT) problem, especially for random k-SAT instances. However, compared to random 3-SAT instances where SLS algorithms have shown great success, random k-SAT instances with long clauses remain very difficult. Recently, the notion of second level score, denoted as "score_2", was proposed for improving SLS algorithms on long-clause SAT instances, and was first used in the powerful CCASat solver as a tie breaker.
In this paper, we propose three new scoring functions based on score_2. Despite their simplicity, these functions are very effective for solving random k-SAT with long clauses. The first function combines score and score_2, and the second one additionally integrates the diversification property "age". These two functions are used in developing a new SLS algorithm called CScoreSAT. Experimental results on large random 5-SAT and 7-SAT instances near phase transition show that CScoreSAT significantly outperforms previous SLS solvers. However, CScoreSAT cannot rival its competitors on random k-SAT instances at phase transition. We improve CScoreSAT for such instances by another scoring function which combines score_2 with age. The resulting algorithm HScoreSAT exhibits state-of-the-art performance on random k-SAT (k>3) instances at phase transition. We also study the computation of score_2, including its implementation and computational complexity.
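The quantities these scoring functions combine can be computed by brute force in a few lines. This sketch uses one common formulation, in which the level-k score of flipping x is the decrease in the number of clauses with fewer than k satisfied literals (score_1 is the classical score); the instance below is hypothetical and this is not the CScoreSAT implementation.

```python
def n_weak_clauses(clauses, assignment, k):
    """Number of clauses with fewer than k satisfied literals.
    Clauses are lists of DIMACS-style integer literals (negative = negated)."""
    return sum(
        1 for clause in clauses
        if sum(assignment[abs(l)] == (l > 0) for l in clause) < k
    )

def score_k(clauses, assignment, var, k):
    """Level-k score of flipping `var`: decrease in clauses with < k true literals."""
    before = n_weak_clauses(clauses, assignment, k)
    flipped = dict(assignment)
    flipped[var] = not flipped[var]
    return before - n_weak_clauses(clauses, flipped, k)

# Hypothetical 3-clause instance over variables x1, x2, x3.
clauses = [[1, 2, 3], [-1, -2, 3], [-3, 1, 2]]
assignment = {1: False, 2: False, 3: False}

s1 = score_k(clauses, assignment, 1, 1)  # classical score of flipping x1 -> 1
s2 = score_k(clauses, assignment, 1, 2)  # second level score of flipping x1 -> 0
```

Here flipping x1 satisfies the only falsified clause (score 1), while it leaves the count of "almost falsified" clauses, those with exactly one true literal, unchanged (second level score 0); functions like those in the paper would combine such quantities, possibly together with age, into a single selection criterion.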
B. de Wilde, A. W. ter Mors and C. Witteveen (2014)
"Push and Rotate: a Complete Multi-agent Pathfinding Algorithm",
Volume 51, pages 443-492
For quick access go to
Abstract:
Multi-agent Pathfinding is a relevant problem in a wide range of domains, for example in robotics and video games research. Formally, the problem considers a graph consisting of vertices and edges, and a set of agents occupying vertices. An agent can only move to an unoccupied, neighbouring vertex, and finding the minimal sequence of moves that transfers each agent from its start location to its destination is NP-hard.
We present Push and Rotate, a new algorithm that is complete for Multi-agent Pathfinding problems in which there are at least two empty vertices. Push and Rotate first divides the graph into subgraphs within which it is possible for agents to reach any position of the subgraph, and then uses the simple push, swap, and rotate operations to find a solution; a post-processing algorithm is also presented that eliminates redundant moves. Push and Rotate can be seen as extending Luna and Bekris's Push and Swap algorithm, which we showed to be incomplete in a previous publication.
In our experiments we compare our approach with the Push and Swap, MAPP, and Bibox algorithms. The latter algorithm is restricted to a smaller class of instances as it requires biconnected graphs, but can nevertheless be considered state of the art due to its strong performance. Our experiments show that Push and Swap suffers from incompleteness, MAPP is generally not competitive with Push and Rotate, and Bibox is better than Push and Rotate on randomly generated biconnected instances, while Push and Rotate performs better on grids.
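As a toy illustration of the movement model and the simplest of the three operations, the sketch below moves an agent one step along a BFS shortest path when the next vertex is free. This is deliberately minimal: the actual push operation also clears blocking agents out of the way, and completeness requires the swap and rotate operations and the subgraph decomposition described in the abstract. The graph representation (adjacency dict, `positions` mapping) is an assumption for the example.

```python
from collections import deque

def bfs_next_step(adj, start, goal):
    """First vertex after `start` on a BFS shortest path to `goal`,
    or None if goal is unreachable or equals start."""
    if start == goal:
        return None
    parent = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            while parent[v] != start:  # walk back to start's successor
                v = parent[v]
            return v
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None

def push(adj, positions, agent, goal):
    """Advance `agent` one step toward `goal` if the next vertex on a
    shortest path is unoccupied. Returns True if a move was made."""
    nxt = bfs_next_step(adj, positions[agent], goal)
    if nxt is not None and nxt not in positions.values():
        positions[agent] = nxt
        return True
    return False  # blocked or already at goal
```

In the full algorithm, a failed push like the `False` case here is exactly where swap (exchanging two adjacent agents) and rotate (cycling agents around a loop) come into play.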
A. G. Cohn, S. Li, W. Liu and J. Renz (2014)
"Reasoning about Topological and Cardinal Direction Relations Between 2-Dimensional Spatial Objects",
Volume 51, pages 493-532
For quick access go to
Abstract:
Increasing the expressiveness of qualitative spatial calculi is an essential step towards meeting the requirements of applications. This can be achieved by combining existing calculi so that spatial information can be expressed using relations from multiple calculi. The great challenge is to develop reasoning algorithms that are correct and complete when reasoning over the combined information. Previous work has mainly studied cases where the interaction between the combined calculi was small, or where one of the two calculi was very simple. In this paper we tackle the important combination of topological and directional information for extended spatial objects. We combine some of the best known calculi in qualitative spatial reasoning: the RCC8 algebra for representing topological information, and the Rectangle Algebra (RA) and the Cardinal Direction Calculus (CDC) for directional information. We consider two different interpretations of the RCC8 algebra: one uses a weak connectedness relation, the other a strong connectedness relation. In both interpretations, we show that reasoning with topological and directional information is decidable and remains in NP. Our computational complexity results unveil the significant differences between RA and CDC, and between the weak and strong RCC8 models. Take the combination of basic RCC8 and basic CDC constraints as an example: we show that the consistency problem is in P only when we use the strong RCC8 algebra and explicitly know the corresponding basic RA constraints.
A. Lopez-Ortiz, S. Angelopoulos and A. M. Hamel (2014)
"Optimal Scheduling of Contract Algorithms for Anytime Problem-Solving",
Volume 51, pages 533-554
For quick access go to
Abstract:
A contract algorithm is an algorithm which is given, as part of the input, a specified amount of allowable computation time. The algorithm must then complete its execution within the allotted time. An interruptible algorithm, in contrast, can be interrupted at an arbitrary point in time, at which point it must report its currently best solution. It is known that contract algorithms can simulate interruptible algorithms using iterative deepening techniques. This simulation is done at a penalty in the performance of the solution, as measured by the so-called acceleration ratio.
In this paper we give matching (i.e., optimal) upper and lower bounds for the acceleration ratio under such a simulation. We assume the most general setting, in which n problem instances must be solved by scheduling executions of contract algorithms on m identical parallel processors. This resolves an open conjecture of Bernstein, Finkelstein, and Zilberstein, who gave an optimal schedule under the restricted setting of round-robin and length-increasing schedules, but whose optimality in the general, unrestricted case remained open.
Lastly, we show how to evaluate the average acceleration ratio of the class of exponential strategies in the setting of n problem instances and m parallel processors. This is a broad class of schedules that tend to be either optimal or near-optimal, for several variants of the basic problem.
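For intuition about the acceleration ratio, consider the classic special case of one problem instance on one processor with an exponential (doubling-style) schedule of contract lengths b^0, b^1, b^2, ... . If an interruption arrives just before contract i finishes, the best available answer comes from the longest completed contract, of length b^(i-1), while elapsed time is roughly b^0 + ... + b^i; the worst-case ratio of the two tends to b^2/(b-1), which equals 4 at b = 2. The sketch below, an assumption-laden illustration rather than the paper's general n-instance, m-processor analysis, estimates that ratio numerically.

```python
def exponential_acceleration_ratio(b, runs=30):
    """Worst-case (elapsed time) / (longest completed contract length)
    for the single-instance, single-processor schedule b^0, b^1, ...,
    with adversarial interruptions just before each contract completes.
    Converges to b^2 / (b - 1) as `runs` grows (4 for b = 2)."""
    total = 0.0   # time elapsed so far
    best = 0.0    # length of the longest completed contract
    worst = 0.0   # worst ratio seen over all interruption points
    for i in range(runs):
        length = b ** i
        if best > 0:
            # interrupt just before contract i completes
            worst = max(worst, (total + length) / best)
        total += length
        best = length
    return worst
```

The quantity computed approaches the known limit from below, since each successive interruption point is slightly less damaging relative to the geometric growth.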
D. Cohen, J. Crampton, A. Gagarin, G. Gutin and M. Jones (2014)
"Iterative Plan Construction for the Workflow Satisfiability Problem",
Volume 51, pages 555-577
For quick access go to
Abstract:
The Workflow Satisfiability Problem (WSP) is a problem of practical interest that arises whenever tasks need to be performed by authorized users, subject to constraints defined by business rules. We are required to decide whether there exists a plan - an assignment of tasks to authorized users - such that all constraints are satisfied. It is natural to see the WSP as a subclass of the Constraint Satisfaction Problem (CSP) in which the variables are tasks and the domain is the set of users. What makes the WSP distinctive is that the number of tasks is usually very small compared to the number of users, so it is appropriate to ask for which constraint languages the WSP is fixed-parameter tractable (FPT), parameterized by the number of tasks.
This novel approach to the WSP, using techniques from CSP, has enabled us to design a generic algorithm which is FPT for several families of workflow constraints considered in the literature. Furthermore, we prove that the union of FPT languages remains FPT if they satisfy a simple compatibility condition. Lastly, we identify a new FPT constraint language, user-independent constraints, that includes many of the constraints of interest in business processing systems. We demonstrate that our generic algorithm has provably optimal running time O*(2^(k log k)) for this language, where k is the number of tasks.
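The problem statement can be made concrete with a brute-force baseline: try every assignment of tasks to users and accept the first one that respects both the authorization lists and the constraints. This exhaustive search runs in roughly n^k time for n users and k tasks; it only illustrates what a plan is, not the paper's FPT algorithm, and the constraint representation (predicates over the plan) is an assumption for the example.

```python
from itertools import product

def find_plan(tasks, users, authorized, constraints):
    """Exhaustive WSP search.

    tasks       - list of task names
    users       - list of user names
    authorized  - dict: task -> set of users allowed to perform it
    constraints - list of predicates taking a plan dict and returning bool
    Returns a plan (dict task -> user) satisfying everything, or None."""
    for assignment in product(users, repeat=len(tasks)):
        plan = dict(zip(tasks, assignment))
        if (all(plan[t] in authorized[t] for t in tasks)
                and all(c(plan) for c in constraints)):
            return plan
    return None
```

Note that a separation-of-duty rule such as "t1 and t2 must be performed by different users" depends only on whether the assigned users are equal, not on who they are; constraints with this property are exactly the user-independent constraints the abstract refers to.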
I. Kash, A. D. Procaccia and N. Shah (2014)
"No Agent Left Behind: Dynamic Fair Division of Multiple Resources",
Volume 51, pages 579-603
For quick access go to
Abstract:
Recently, fair division theory has emerged as a promising approach to the allocation of multiple computational resources among agents. While in reality agents are not all present in the system simultaneously, previous work has studied static settings where all relevant information is known upfront. Our goal is to better understand the dynamic setting. On the conceptual level, we develop a dynamic model of fair division, and propose desirable axiomatic properties for dynamic resource allocation mechanisms. On the technical level, we construct two novel mechanisms that provably satisfy some of these properties, and analyze their performance using real data. We believe that our work informs the design of superior multiagent systems, and at the same time expands the scope of fair division theory by initiating the study of dynamic and fair resource allocation mechanisms.
P. Nguyen, M. Hilario and A. Kalousis (2014)
"Using Meta-mining to Support Data Mining Workflow Planning and Optimization",
Volume 51, pages 605-644
For quick access go to
Abstract:
Knowledge Discovery in Databases is a complex process that involves many different data processing and learning operators. Today's Knowledge Discovery Support Systems can contain several hundred operators. A major challenge is to assist the user in designing workflows which are not only valid but also -- ideally -- optimize some performance measure associated with the user goal. In this paper we present such a system. The system relies on a meta-mining module which analyses past data mining experiments and extracts meta-mining models which associate dataset characteristics with workflow descriptors in view of workflow performance optimization. The meta-mining model is used within a data mining workflow planner, to guide the planner during the workflow planning. We learn the meta-mining models using a similarity learning approach, and extract the workflow descriptors by mining the workflows for generalized relational patterns accounting also for domain knowledge provided by a data mining ontology. We evaluate the quality of the data mining workflows that the system produces on a collection of real world datasets coming from biology and show that it produces workflows that are significantly better than alternative methods that can only do workflow selection and not planning.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From cebly at cs.toronto.edu Sun Dec 7 16:30:25 2014
From: cebly at cs.toronto.edu (Craig Boutilier)
Date: Sun, 7 Dec 2014 19:30:25 -0500 (EST)
Subject: [Jairsubscribers] new JAIR Special Track on Cross-language
Algorithms and Applications
Message-ID:
The Journal of Artificial Intelligence Research (JAIR) is pleased to
announce the launch of the JAIR Special Track on Cross-language Algorithms
and Applications. The core AI technologies of speech and natural language
processing must address the challenge of processing multiple languages,
including bridging the nomenclature gap for the same concepts, and
developing algorithms and applications that not only scale to multiple
languages but also leverage cross-lingual similarities. The track is
intended to publish leading research on cross-language algorithms and
applications, focusing on unified themes that can help with the
development of the science of multi- and cross-lingualism.
For more information on scope and submission deadlines, please see:
http://www.jair.org/specialtrack-claa.html
Craig Boutilier
Editor-in-Chief
Journal of Artificial Intelligence Research
From cebly at cs.toronto.edu Wed Dec 24 06:02:28 2014
From: cebly at cs.toronto.edu (Craig Boutilier)
Date: Wed, 24 Dec 2014 09:02:28 -0500 (EST)
Subject: [Jairsubscribers] Support JAIR for the Holidays/2015
Message-ID:
As most of you know, JAIR is a community-run, grass roots journal,
operated and published by the very people who make up its intended
audience---AI researchers. While almost all the work involved in
publishing the journal is contributed by volunteers, we do have some
expenses (e.g., small amounts of administrative support, software and
hardware maintenance). We work to keep our expenses as low as possible,
to avoid publication or access fees. Nevertheless we rely on the financial
support of the community to help us with expenses which we cannot avoid.
Please consider making a financial contribution to JAIR. Your support
will help us with our mission to freely disseminate scientific results
worldwide!
Our publisher, AI Access Foundation, is a 501(c)(3) nonprofit public
benefit corporation (TIN 77-0345849) in the United States. Donations are
tax deductible. AI Access Foundation welcomes all donations, including
individual and corporate gifts of money, equipment, and personnel.
Please see
http://www.jair.org/support_JAIR.html
for further details.
Thanks!
JAIR Editorial Staff and the AI Access Foundation