From jair-ed at isi.edu Thu Apr 7 00:32:12 2011
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Wed, 6 Apr 2011 23:32:12 -0800
Subject: [Jairsubscribers] My Photoset Nr.1329
Message-ID: <8794278427.3ZV427PI060807@tcevcxqmqdrd.ncphb.va>
Hi There. I am a very fun gal who loves to take pictures as well as recieve them... You can find me in the middle of this page:
I am giving out my personal info here:
http://flirtsexgirls.ru
Name: Rita R., Chicago
I am a: Woman
Age: 29 y/o
Height: 5-6
Weight: 112 Lbs
Hair: Brown currently
www.flirtsexgirls.ru
From jair-ed at isi.edu Tue Apr 12 09:16:01 2011
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Tue, 12 Apr 2011 13:16:01 -0300
Subject: [Jairsubscribers] Newsletter Tue, 12 Apr 2011 13:16:01 -0300
Message-ID: <8404512236.1LROIR5Q897764@gannwnpna.cipattmal.va>
How are you!
Do you want a prosperous future, double in money earning power, and the praise of all?
Special offer:
We can assist with Diplomas from prestigious universities based on your present knowledge and work experience.
Get a Degree in 4 weeks with our program!
~Our program will let ANYONE with professional experience
gain a 100% verified Degree:
~Doctorate
~Bachelors
~Masters
- Just think about it...
- Follow YOUR Dreams!
- Live a wonderful life by earning or upgrading your degree.
This is a best way to make a right move and receive your due
benefits... if you are qualified but are lacking that piece of paper. Get one from us in a short time.
If you want to get better - you must Call Today to start improving your life!
~CONTACT US~
1-310-205-2502
You should leave us a message with your phone number with country code if outside USA and name and we will get back to you as soon as possible.
It is your decision...
Make the right decision.
Best wishes.
Do Not Reply to this Email.
We do not reply to text inquiries, and our server will reject all response traffic.
We apologize for any inconvenience this may have caused you.
From jair-ed at isi.edu Thu Jun 2 12:12:59 2011
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Thu, 02 Jun 2011 14:12:59 -0500
Subject: [Jairsubscribers] 9 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
A. Cimatti, A. Griggio and R. Sebastiani (2011)
"Computing Small Unsatisfiable Cores in Satisfiability Modulo Theories",
Volume 40, pages 701-728
For quick access go to
Abstract:
The problem of finding small unsatisfiable cores for SAT formulas has recently received a lot of interest, mostly for its applications in formal verification. However, propositional logic is often not expressive enough for representing many interesting verification problems, which can be more naturally addressed in the framework of Satisfiability Modulo Theories, SMT. Surprisingly, the problem of finding unsatisfiable cores in SMT has received very little attention in the literature.
In this paper we present a novel approach to this problem, called the Lemma-Lifting approach. The main idea is to combine an SMT solver with an external propositional core extractor. The SMT solver produces the theory lemmas found during the search, dynamically lifting the suitable amount of theory information to the Boolean level. The core extractor is then called on the Boolean abstraction of the original SMT problem and of the theory lemmas. This results in an unsatisfiable core for the original SMT problem, once the remaining theory lemmas are removed.
The approach is conceptually interesting and has several advantages in practice. In fact, it is extremely simple to implement and to update, and it can be interfaced with any propositional core extractor in a plug-and-play manner, so as to benefit for free from all unsat-core reduction techniques that have been or will be made available.
We have evaluated our algorithm with a very extensive empirical test on SMT-LIB benchmarks, which confirms the validity and potential of this approach.
W. Li, P. Poupart and P. van Beek (2011)
"Exploiting Structure in Weighted Model Counting Approaches to Probabilistic Inference",
Volume 40, pages 729-765
For quick access go to
Abstract:
Previous studies have demonstrated that encoding a Bayesian network into a SAT formula and then performing weighted model counting using a backtracking search algorithm can be an effective method for exact inference. In this paper, we present techniques for improving this approach for Bayesian networks with noisy-OR and noisy-MAX relations---two relations that are widely used in practice as they can dramatically reduce the number of probabilities one needs to specify. In particular, we present two SAT encodings for noisy-OR and two encodings for noisy-MAX that exploit the structure or semantics of the relations to improve both time and space efficiency, and we prove the correctness of the encodings. We experimentally evaluated our techniques on large-scale real and randomly generated Bayesian networks. On these benchmarks, our techniques gave speedups of up to two orders of magnitude over the best previous approaches for networks with noisy-OR/MAX relations and scaled up to larger networks. As well, our techniques extend the weighted model counting approach for exact inference to networks that were previously intractable for the approach.
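To make the noisy-OR relation mentioned above concrete, the sketch below (an illustration of the relation itself, not of the paper's SAT encodings) shows why it needs only one parameter per cause rather than the 2^n entries of a full conditional probability table; the `leak` term is a common optional extension covering causes outside the model.

```python
def noisy_or(cause_probs, active, leak=0.0):
    """P(effect = 1 | causes): each active cause i independently triggers
    the effect with probability cause_probs[i]; `leak` accounts for causes
    outside the model.  Only n parameters instead of a 2^n-entry CPT."""
    p_fail = 1.0 - leak
    for p, on in zip(cause_probs, active):
        if on:
            p_fail *= 1.0 - p
    return 1.0 - p_fail

# Two active causes with strengths 0.8 and 0.5:
# P(effect) = 1 - (1 - 0.8) * (1 - 0.5) = 0.9
prob = noisy_or([0.8, 0.5, 0.3], [True, True, False])
```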
T. De la Rosa, S. Jimenez, R. Fuentetaja and D. Borrajo (2011)
"Scaling up Heuristic Planning with Relational Decision Trees",
Volume 40, pages 767-813
For quick access go to
Abstract:
Current evaluation functions for heuristic planning are expensive to compute. In numerous planning problems these functions provide good guidance to the solution, so they are worth the expense. However, when evaluation functions are misguiding or when planning problems are large enough, lots of node evaluations must be computed, which severely limits the scalability of heuristic planners. In this paper, we present a novel solution for reducing node evaluations in heuristic planning based on machine learning. Particularly, we define the task of learning search control for heuristic planning as a relational classification task, and we use an off-the-shelf relational classification tool to address this learning task. Our relational classification task captures the preferred action to select in the different planning contexts of a specific planning domain. These planning contexts are defined by the set of helpful actions of the current state, the goals remaining to be achieved, and the static predicates of the planning task. This paper shows two methods for guiding the search of a heuristic planner with the learned classifiers. The first one consists of using the resulting classifier as an action policy. The second one consists of applying the classifier to generate lookahead states within a Best First Search algorithm. Experiments over a variety of domains reveal that our heuristic planner using the learned classifiers solves larger problems than state-of-the-art planners.
H. Papadopoulos, V. Vovk and A. Gammerman (2011)
"Regression Conformal Prediction with Nearest Neighbours",
Volume 40, pages 815-840
For quick access go to
Abstract:
In this paper we apply Conformal Prediction (CP) to the k-Nearest Neighbours Regression (k-NNR) algorithm and propose ways of extending the typical nonconformity measure used for regression so far. Unlike traditional regression methods, which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. The regions produced by any Conformal Predictor are automatically valid; however, their tightness and therefore usefulness depend on the nonconformity measure used by each CP. In effect, a nonconformity measure evaluates how strange a given example is compared to a set of other examples, based on some traditional machine learning algorithm. We define six novel nonconformity measures based on the k-Nearest Neighbours Regression algorithm and develop the corresponding CPs following both the original (transductive) and the inductive CP approaches. A comparison of the predictive regions produced by our measures with those of the typical regression measure suggests that a major improvement in terms of predictive region tightness is achieved by the new measures.
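As a concrete illustration of the setup (a minimal inductive CP with the standard absolute-error nonconformity measure and a plain k-NN point predictor, not one of the six new measures the paper proposes):

```python
import numpy as np

def knn_predict(X_tr, y_tr, x, k=3):
    """Plain k-NN regression: mean label of the k nearest training points."""
    d = np.linalg.norm(X_tr - x, axis=1)
    return y_tr[np.argsort(d)[:k]].mean()

def icp_interval(X_tr, y_tr, X_cal, y_cal, x, eps=0.1, k=3):
    """Inductive CP with the standard |y - yhat| nonconformity measure:
    the (1 - eps) quantile of calibration scores widens the point
    prediction into a predictive interval."""
    scores = np.sort([abs(y - knn_predict(X_tr, y_tr, xc, k))
                      for xc, y in zip(X_cal, y_cal)])
    rank = int(np.ceil((1 - eps) * (len(scores) + 1))) - 1
    alpha = scores[min(rank, len(scores) - 1)]
    yhat = knn_predict(X_tr, y_tr, x, k)
    return yhat - alpha, yhat + alpha
```

The validity guarantee means roughly (1 - eps) of test labels fall inside the returned intervals; the paper's contribution is making those intervals tighter.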
B. Cseke and T. Heskes (2011)
"Properties of Bethe Free Energies and Message Passing in Gaussian Models",
Volume 41, pages 1-24
For quick access go to
Abstract:
We address the problem of computing approximate marginals in Gaussian probabilistic models by using mean field and fractional Bethe approximations. We define the Gaussian fractional Bethe free energy in terms of the moment parameters of the approximate marginals, derive a lower and an upper bound on the fractional Bethe free energy and establish a necessary condition for the lower bound to be bounded from below. It turns out that the condition is identical to the pairwise normalizability condition, which is known to be a sufficient condition for the convergence of the message passing algorithm. We show that stable fixed points of the Gaussian message passing algorithm are local minima of the Gaussian Bethe free energy. By a counterexample, we disprove the conjecture stating that the unboundedness of the free energy implies the divergence of the message passing algorithm.
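A minimal sketch of the Gaussian message passing algorithm analysed above, for a model p(x) proportional to exp(-x'Ax/2 + b'x). The update equations follow the standard Gaussian BP form (an illustration, not the paper's fractional Bethe variant); on a tree-structured, pairwise normalizable model the returned means are exact.

```python
import numpy as np

def gabp_means(A, b, iters=50):
    """Gaussian belief propagation: each message i -> j carries a
    precision P[i, j] and a mean mu[i, j].  On a tree-structured A the
    returned posterior means equal the exact A^{-1} b."""
    n = len(b)
    nbrs = [[j for j in range(n) if j != i and A[i, j] != 0.0]
            for i in range(n)]
    P = np.zeros((n, n))
    mu = np.zeros((n, n))
    for _ in range(iters):
        newP, newmu = np.zeros((n, n)), np.zeros((n, n))
        for i in range(n):
            for j in nbrs[i]:
                # Cavity precision/mean of node i, excluding j's message.
                p = A[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                m = (b[i] + sum(P[k, i] * mu[k, i]
                                for k in nbrs[i] if k != j)) / p
                newP[i, j] = -A[i, j] ** 2 / p
                newmu[i, j] = p * m / A[i, j]
        P, mu = newP, newmu
    return np.array([
        (b[i] + sum(P[k, i] * mu[k, i] for k in nbrs[i]))
        / (A[i, i] + sum(P[k, i] for k in nbrs[i]))
        for i in range(n)])
```

The cavity precision p must stay positive for the messages to be well defined, which is where conditions such as pairwise normalizability enter the convergence analysis.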
L. Xia and V. Conitzer (2011)
"Determining Possible and Necessary Winners Given Partial Orders",
Volume 41, pages 25-67
For quick access go to
Abstract:
Usually a voting rule requires agents to give their preferences as linear orders. However, in some cases it is impractical for an agent to give a linear order over all the alternatives. It has been suggested to let agents submit partial orders instead. Then, given a voting rule, a profile of partial orders, and an alternative (candidate) c, two important questions arise: first, is it still possible for c to win, and second, is c guaranteed to win? These are the possible winner and necessary winner problems, respectively. Each of these two problems is further divided into two sub-problems: determining whether c is a unique winner (that is, c is the only winner), or determining whether c is a co-winner (that is, c is in the set of winners).
We consider the setting where the number of alternatives is unbounded and the votes are unweighted. We completely characterize the complexity of possible/necessary winner problems for the following common voting rules: a class of positional scoring rules (including Borda), Copeland, maximin, Bucklin, ranked pairs, voting trees, and plurality with runoff.
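The two decision problems can be stated operationally with a brute-force sketch: enumerate every completion of each partial order and check whether the candidate wins some profile (possible winner) or all profiles (necessary winner). This is exponential and purely illustrative, in line with the complexity results above; Borda stands in for the positional scoring rules, in the co-winner variant.

```python
from itertools import permutations, product

def extensions(cands, partial):
    """Linear orders over cands consistent with a partial order given
    as a set of (a, b) pairs meaning 'a is preferred to b'."""
    for perm in permutations(cands):
        pos = {c: i for i, c in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in partial):
            yield perm

def borda_cowinners(profile):
    """Set of candidates with maximal Borda score."""
    m = len(profile[0])
    score = {}
    for order in profile:
        for i, c in enumerate(order):
            score[c] = score.get(c, 0) + (m - 1 - i)
    top = max(score.values())
    return {c for c, s in score.items() if s == top}

def possible_necessary(cands, partials, c):
    """c is a possible (co-)winner if some completion makes it win,
    and a necessary (co-)winner if every completion does."""
    possible, necessary = False, True
    for profile in product(*(list(extensions(cands, p)) for p in partials)):
        if c in borda_cowinners(profile):
            possible = True
        else:
            necessary = False
    return possible, necessary
```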
M. Bilgic and L. Getoor (2011)
"Value of Information Lattice: Exploiting Probabilistic Independence for Effective Feature Subset Acquisition",
Volume 41, pages 69-95
For quick access go to
Abstract:
We address the cost-sensitive feature acquisition problem, where misclassifying an instance is costly but the expected misclassification cost can be reduced by acquiring the values of the missing features. Because acquiring the features is costly as well, the objective is to acquire the right set of features so that the sum of the feature acquisition cost and misclassification cost is minimized. We describe the Value of Information Lattice (VOILA), an optimal and efficient feature subset acquisition framework. Unlike the common practice, which is to acquire features greedily, VOILA can reason with subsets of features. VOILA efficiently searches the space of possible feature subsets by discovering and exploiting conditional independence properties between the features and it reuses probabilistic inference computations to further speed up the process. Through empirical evaluation on five medical datasets, we show that the greedy strategy is often reluctant to acquire features, as it cannot forecast the benefit of acquiring multiple features in combination.
E. Hebrard, D. Marx, B. O'Sullivan and I. Razgon (2011)
"Soft Constraints of Difference and Equality",
Volume 41, pages 97-130
For quick access go to
Abstract:
In many combinatorial problems one may need to model the diversity or similarity of assignments in a solution. For example, one may wish to maximise or minimise the number of distinct values in a solution. To formulate problems of this type, we can use soft variants of the well-known AllDifferent and AllEqual constraints. We present a taxonomy of six soft global constraints, generated by combining the two latter ones with the two standard cost functions, which are either maximised or minimised. We characterise the complexity of achieving arc and bounds consistency on these constraints, resolving those cases for which NP-hardness was neither proven nor disproven. In particular, we explore in depth the constraint ensuring that at least k pairs of variables have a common value. We show that achieving arc consistency is NP-hard; however, achieving bounds consistency can be done in polynomial time through dynamic programming. Moreover, we show that the maximum number of pairs of equal variables can be approximated by a factor 1/2 with a linear time greedy algorithm. Finally, we provide a fixed parameter tractable algorithm with respect to the number of values appearing in more than two distinct domains. Interestingly, this taxonomy shows that enforcing equality is harder than enforcing difference.
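One natural greedy in the spirit of the algorithm described above — repeatedly commit the value that fits the most remaining domains — can be sketched as follows. This is an illustration of the idea only; the paper's linear-time algorithm and its 1/2-approximation analysis may differ in detail, and this sketch is not linear time as written.

```python
def greedy_equal_pairs(domains):
    """domains: list of sets of allowed values, one per variable.
    Repeatedly pick the value present in the most remaining domains and
    assign it to every variable that can take it.  Returns the assignment
    and the number of equal pairs it induces."""
    remaining = list(range(len(domains)))
    assignment = {}
    while remaining:
        counts = {}
        for i in remaining:
            for v in domains[i]:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)
        for i in remaining:
            if best in domains[i]:
                assignment[i] = best
        remaining = [i for i in remaining if best not in domains[i]]
    pairs = sum(1 for i in assignment for j in assignment
                if i < j and assignment[i] == assignment[j])
    return assignment, pairs
```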
S. P. Gujar and Y. Narahari (2011)
"Redistribution Mechanisms for Assignment of Heterogeneous Objects",
Volume 41, pages 131-154
For quick access go to
Abstract:
There are p heterogeneous objects to be assigned to n competing agents (n > p), each with unit demand. It is required to design a Groves mechanism for this assignment problem that satisfies weak budget balance and individual rationality while minimizing the budget imbalance. This calls for designing an appropriate rebate function. When the objects are identical, this problem has been solved; we refer to that solution as the WCO mechanism. We measure the performance of such mechanisms by the redistribution index. We first prove an impossibility theorem which rules out linear rebate functions with non-zero redistribution index in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with non-zero redistribution index are possible when the valuations for the objects have a certain type of relationship, and we design a mechanism with a linear rebate function that is worst-case optimal. In the second approach, we show that rebate functions with non-zero efficiency are possible if linearity is relaxed. We extend the rebate functions of the WCO mechanism to heterogeneous object assignment and conjecture them to be worst-case optimal.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------
From jair-ed at isi.edu Wed Nov 16 11:44:40 2011
From: jair-ed at isi.edu (jair-ed@isi.edu)
Date: Wed, 16 Nov 2011 13:44:40 -0600
Subject: [Jairsubscribers] 10 new articles published by JAIR
Message-ID:
Dear JAIR subscriber:
This message lists papers that have been recently published in JAIR and describes how to access them. (If you wish to remove yourself from this mailing list, see instructions at the end of this message.)
----------------------------------------------------------------
I. New JAIR Articles
T. Walsh (2011)
"Where Are the Hard Manipulation Problems?",
Volume 42, pages 1-29
For quick access go to
Abstract:
Voting is a simple mechanism to combine the preferences of multiple agents. Unfortunately, agents may try to manipulate the result by mis-reporting their preferences. One barrier that might exist to such manipulation is computational complexity. In particular, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We consider two settings which represent the two types of complexity results that have been identified in this area: manipulation with unweighted votes by a single agent, and manipulation with weighted votes by a coalition of agents. In the first case, we consider Single Transferable Voting (STV), and in the second case, we consider veto voting. STV is one of the few voting rules used in practice where it is NP-hard to compute how a single agent can manipulate the result when votes are unweighted. It also appears to be one of the harder voting rules to manipulate since it involves multiple rounds. On the other hand, veto voting is one of the simplest representatives of voting rules where it is NP-hard to compute how a coalition of weighted agents can manipulate the result. In our experiments, we sample a number of distributions of votes including uniform, correlated and real world elections. In many of the elections in our experiments, it was easy to compute how to manipulate the result or to prove that manipulation was impossible. Even when we were able to identify a situation in which manipulation was hard to compute (e.g. when votes are highly correlated and the election is "hung"), we found that the computational difficulty of computing manipulations was somewhat precarious (e.g. with such "hung" elections, even a single uncorrelated voter was enough to make manipulation easy to compute).
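For reference, the single-winner STV rule discussed above can be sketched in a few lines: repeatedly eliminate the candidate with the fewest first choices among the remaining candidates, transferring each ballot to its highest-ranked surviving candidate. The tie-breaking rule here is an arbitrary stand-in for whatever a real election specifies.

```python
def stv_winner(profile, tiebreak=min):
    """profile: list of ballots, each a tuple ranking all candidates.
    Eliminate the candidate with the fewest first choices among those
    still standing until one remains."""
    remaining = set(profile[0])
    while len(remaining) > 1:
        tally = {c: 0 for c in remaining}
        for order in profile:
            for c in order:           # highest-ranked surviving candidate
                if c in remaining:
                    tally[c] += 1
                    break
        fewest = min(tally.values())
        loser = tiebreak(c for c in remaining if tally[c] == fewest)
        remaining.discard(loser)
    return remaining.pop()

# Note the multi-round behaviour: 'a' leads on first choices (3 of 7
# ballots) yet 'c' wins once 'b' is eliminated and its votes transfer.
profile = ([('a', 'b', 'c')] * 3 + [('b', 'c', 'a')] * 2
           + [('c', 'b', 'a')] * 2)
```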
R. Booth, T. Meyer, I. Varzinczak and R. Wassermann (2011)
"On the Link between Partial Meet, Kernel, and Infra Contraction and its Application to Horn Logic",
Volume 42, pages 31-53
For quick access go to
Abstract:
Standard belief change assumes an underlying logic containing full classical propositional logic. However, there are good reasons for considering belief change in less expressive logics as well. In this paper we build on recent investigations by Delgrande on contraction for Horn logic. We show that the standard basic form of contraction, partial meet, is too strong in the Horn case. This result stands in contrast to Delgrande's conjecture that orderly maxichoice is the appropriate form of contraction for Horn logic. We then define a more appropriate notion of basic contraction for the Horn case, influenced by the convexity property holding for full propositional logic and which we refer to as infra contraction. The main contribution of this work is a result which shows that the construction method for Horn contraction for belief sets based on our infra remainder sets corresponds exactly to Hansson's classical kernel contraction for belief sets, when restricted to Horn logic. This result is obtained via a detour through contraction for belief bases. We prove that kernel contraction for belief bases produces precisely the same results as the belief base version of infra contraction. The use of belief bases to obtain this result provides evidence for the conjecture that Horn belief change is best viewed as a 'hybrid' version of belief set change and belief base change. One of the consequences of the link with base contraction is the provision of a representation result for Horn contraction for belief sets in which a version of the Core-retainment postulate features.
K. C. Wang and A. Botea (2011)
"MAPP: a Scalable Multi-Agent Path Planning Algorithm with Tractability and Completeness Guarantees",
Volume 42, pages 55-90
For quick access go to
Abstract:
Multi-agent path planning is a challenging problem with numerous real-life applications. Running a centralized search such as A* in the combined state space of all units is complete and cost-optimal, but scales poorly, as the state space size is exponential in the number of mobile units. Traditional decentralized approaches, such as FAR and WHCA*, are faster and more scalable, being based on problem decomposition. However, such methods are incomplete and provide no guarantees with respect to the running time or the solution quality. They are not necessarily able to tell in a reasonable time whether they would succeed in finding a solution to a given instance.
We introduce MAPP, a tractable algorithm for multi-agent path planning on undirected graphs. We present a basic version and several extensions.
They have low-polynomial worst-case upper bounds for the running time, the memory requirements, and the length of solutions. Even though all algorithmic versions are incomplete in the general case, each provides formal guarantees on problems it can solve. For each version, we discuss the algorithm's completeness with respect to clearly defined subclasses of instances.
Experiments were run on realistic game grid maps. MAPP solved 99.86% of all mobile units, which is 18--22% higher than the success rates of FAR and WHCA*. MAPP marked 98.82% of all units as provably solvable during the first stage of plan computation. Parts of MAPP's computation can be re-used across instances on the same map. Speed-wise, MAPP is competitive with or significantly faster than WHCA*, depending on whether MAPP performs all computations from scratch. When data that MAPP can re-use are preprocessed offline and readily available, MAPP is slower than the very fast FAR algorithm by a factor of 2.18 on average. MAPP's solutions are on average 20% longer than FAR's solutions and 7--31% longer than WHCA*'s solutions.
R. Hoshino and K. Kawarabayashi (2011)
"Scheduling Bipartite Tournaments to Minimize Total Travel Distance",
Volume 42, pages 91-124
For quick access go to
Abstract:
In many professional sports leagues, teams from opposing leagues/conferences compete against one another, playing inter-league games. This is an example of a bipartite tournament. In this paper, we consider the problem of reducing the total travel distance of bipartite tournaments, by analyzing inter-league scheduling from the perspective of discrete optimization. This research has natural applications to sports scheduling, especially for leagues such as the National Basketball Association (NBA) where teams must travel long distances across North America to play all their games, thus consuming much time, money, and greenhouse gas emissions.
We introduce the Bipartite Traveling Tournament Problem (BTTP), the inter-league variant of the well-studied Traveling Tournament Problem. We prove that the 2n-team BTTP is NP-complete, but for small values of n, a distance-optimal inter-league schedule can be generated from an algorithm based on minimum-weight 4-cycle-covers. We apply our theoretical results to the 12-team Nippon Professional Baseball (NPB) league in Japan, producing a provably-optimal schedule requiring 42950 kilometres of total team travel, a 16% reduction compared to the actual distance traveled by these teams during the 2010 NPB season. We also develop a nearly-optimal inter-league tournament for the 30-team NBA league, just 3.8% higher than the trivial theoretical lower bound.
J. Lee and Y. Meng (2011)
"First-Order Stable Model Semantics and First-Order Loop Formulas",
Volume 42, pages 125-180
For quick access go to
Abstract:
Lin and Zhao's theorem on loop formulas states that in the propositional case the stable model semantics of a logic program can be completely characterized by propositional loop formulas, but this result does not fully carry over to the first-order case. We investigate the precise relationship between the first-order stable model semantics and first-order loop formulas, and study conditions under which the former can be represented by the latter. In order to facilitate the comparison, we extend the definition of a first-order loop formula which was limited to a nondisjunctive program, to a disjunctive program and to an arbitrary first-order theory. Based on the studied relationship we extend the syntax of a logic program with explicit quantifiers, which allows us to do reasoning involving non-Herbrand stable models using first-order reasoners. Such programs can be viewed as a special class of first-order theories under the stable model semantics, which yields more succinct loop formulas than the general language due to their restricted syntax.
P. Dai, Mausam, D. S. Weld and J. Goldsmith (2011)
"Topological Value Iteration Algorithms",
Volume 42, pages 181-209
For quick access go to
Abstract:
Value iteration is a powerful yet inefficient algorithm for Markov decision processes (MDPs) because it puts the majority of its effort into backing up the entire state space, which turns out to be unnecessary in many cases. In order to overcome this problem, many approaches have been proposed. Among them, ILAO* and variants of RTDP are state-of-the-art ones. These methods use reachability analysis and heuristic search to avoid some unnecessary backups. However, none of these approaches build the graphical structure of the state transitions in a pre-processing step or use the structural information to systematically decompose a problem, thereby generating an intelligent backup sequence of the state space. In this paper, we present two optimal MDP algorithms. The first algorithm, topological value iteration (TVI), detects the structure of MDPs and backs up states based on topological sequences. It (1) divides an MDP into strongly-connected components (SCCs), and (2) solves these components sequentially. TVI vastly outperforms VI and other state-of-the-art algorithms when an MDP has multiple, close-to-equal-sized SCCs. The second algorithm, focused topological value iteration (FTVI), is an extension of TVI. FTVI restricts its attention to connected components that are relevant for solving the MDP. Specifically, it uses a small amount of heuristic search to eliminate provably sub-optimal actions; this pruning allows FTVI to find smaller connected components, thus running faster. We demonstrate that FTVI outperforms TVI by an order of magnitude, averaged across several domains. Surprisingly, FTVI also significantly outperforms popular 'heuristically-informed' MDP algorithms such as ILAO*, LRTDP, BRTDP and Bayesian-RTDP in many domains, sometimes by as much as two orders of magnitude. Finally, we characterize the type of domains where FTVI excels --- suggesting a way to an informed choice of solver.
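The SCC-based decomposition at the heart of TVI can be sketched as follows, assuming a toy MDP given as nested dicts (the paper's implementation is more sophisticated). Tarjan's algorithm emits components sinks-first, which is exactly the order in which values can be finalized: every successor of a component is already solved when that component is swept.

```python
import itertools

def sccs(graph):
    """Tarjan's algorithm; returns strongly connected components in
    reverse topological order (sink components first)."""
    index, low, on = {}, {}, set()
    stack, comps = [], []
    counter = itertools.count()

    def visit(v):
        index[v] = low[v] = next(counter)
        stack.append(v)
        on.add(v)
        for w in graph[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:        # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on.discard(w)
                comp.append(w)
                if w == v:
                    break
            comps.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return comps

def tvi(mdp, gamma=0.9, eps=1e-9):
    """mdp[s][a] = list of (prob, next_state, reward); terminal states
    map to an empty action dict.  Each SCC is swept to convergence
    before any upstream component is touched."""
    graph = {s: {s2 for acts in mdp[s].values() for _, s2, _ in acts}
             for s in mdp}
    V = {s: 0.0 for s in mdp}
    for comp in sccs(graph):
        while True:
            delta = 0.0
            for s in comp:
                if not mdp[s]:
                    continue          # terminal state, value stays 0
                best = max(sum(p * (r + gamma * V[s2])
                               for p, s2, r in acts)
                           for acts in mdp[s].values())
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                break
    return V
```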
G. R. Santhanam, S. Basu and V. Honavar (2011)
"Representing and Reasoning with Qualitative Preferences for Compositional Systems",
Volume 42, pages 211-274
For quick access go to
Abstract:
Many applications, e.g., Web service composition, complex system design, team formation, etc., rely on methods for identifying collections of objects or entities satisfying some functional requirement. Among the collections that satisfy the functional requirement, it is often necessary to identify one or more collections that are optimal with respect to user preferences over a set of attributes that describe the non-functional properties of the collection.
We develop a formalism that lets users express the relative importance among attributes and qualitative preferences over the valuations of each attribute. We define a dominance relation that allows us to compare collections of objects in terms of preferences over attributes of the objects that make up the collection. We establish some key properties of the dominance relation. In particular, we show that the dominance relation is a strict partial order when the intra-attribute preference relations are strict partial orders and the relative importance preference relation is an interval order.
We provide algorithms that use this dominance relation to identify the set of most preferred collections. We show that under certain conditions, the algorithms are guaranteed to return only (sound), all (complete), or at least one (weakly complete) of the most preferred collections. We present results of simulation experiments comparing the proposed algorithms with respect to (a) the quality of solutions (number of most preferred solutions) produced by the algorithms, and (b) their performance and efficiency. We also explore some interesting conjectures suggested by the results of our experiments that relate the properties of the user preferences, the dominance relation, and the algorithms.
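To make the flavor of such a dominance relation concrete, here is a toy Pareto-style dominance check over attribute valuations. This is a deliberate simplification for illustration: the paper's full dominance relation also weighs a relative-importance (interval) order over attributes, which is omitted here, and all names and sample data below are hypothetical.

```python
def strictly_preferred(pref, a, b):
    """pref encodes a strict partial order as a set of (better, worse) pairs."""
    return (a, b) in pref

def dominates(coll_a, coll_b, prefs):
    """coll_* map attribute -> valuation; prefs map attribute -> partial order.
    A dominates B if A is at least as good on every attribute and strictly
    better on at least one (unrelated valuations make the pair incomparable)."""
    strictly_better = False
    for attr, pref in prefs.items():
        va, vb = coll_a[attr], coll_b[attr]
        if va == vb:
            continue
        if strictly_preferred(pref, va, vb):
            strictly_better = True
        else:
            return False  # worse or incomparable on this attribute
    return strictly_better

# Made-up example: prefer low cost and TLS-secured composition.
prefs = {"cost": {("low", "high")}, "security": {("tls", "none")}}
a = {"cost": "low", "security": "tls"}
b = {"cost": "high", "security": "tls"}
print(dominates(a, b, prefs))  # → True
print(dominates(b, a, prefs))  # → False
```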
R. Ribeiro and D. Martins de Matos (2011)
"Centrality-as-Relevance: Support Sets and Similarity as Geometric Proximity",
Volume 42, pages 275-308
Abstract:
In automatic summarization, centrality-as-relevance means that the most important content of an information source, or a collection of information sources, corresponds to the most central passages, considering a representation where such notion makes sense (graph, spatial, etc.). We assess the main paradigms, and introduce a new centrality-based relevance model for automatic summarization that relies on the use of support sets to better estimate the relevant content. Geometric proximity is used to compute semantic relatedness. Centrality (relevance) is determined by considering the whole input source (and not only local information), and by taking into account the existence of minor topics or lateral subjects in the information sources to be summarized. The method consists in creating, for each passage of the input source, a support set consisting only of the most semantically related passages. Then, the determination of the most relevant content is achieved by selecting the passages that occur in the largest number of support sets. This model produces extractive summaries that are generic, and language- and domain-independent. Thorough automatic evaluation shows that the method achieves state-of-the-art performance, both in written text, and automatically transcribed speech summarization, including when compared to considerably more complex approaches.
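The support-set construction can be illustrated with a small sketch: build a support set of the k most similar passages for each passage, then rank passages by how many support sets they appear in. Geometric proximity is approximated here by cosine similarity over bag-of-words vectors; the actual model uses richer semantic-relatedness measures, so everything below (function names included) is illustrative only.

```python
import math
from collections import Counter

def embed(passage, vocab):
    """Toy bag-of-words vector; a real system would use a richer representation."""
    counts = Counter(passage.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def support_set_summary(passages, k=2, n_select=1):
    """For each passage, build a support set of its k most similar other
    passages; the summary keeps the passages that occur in the most sets."""
    vocab = sorted({w for p in passages for w in p.lower().split()})
    vecs = [embed(p, vocab) for p in passages]
    membership = Counter()
    for i in range(len(passages)):
        sims = [(cosine(vecs[i], vecs[j]), j)
                for j in range(len(passages)) if j != i]
        for _, j in sorted(sims, reverse=True)[:k]:
            membership[j] += 1  # passage j supports passage i
    ranked = sorted(range(len(passages)), key=lambda j: -membership[j])
    return [passages[j] for j in ranked[:n_select]]
```

Because centrality is counted over the whole input rather than pairwise, a passage on a lateral subject ends up in few support sets and is naturally excluded from the summary.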
C. Yuan, H. Lim and T. Lu (2011)
"Most Relevant Explanation in Bayesian Networks",
Volume 42, pages 309-352
Abstract:
A major inference task in Bayesian networks is explaining why some variables are observed in their particular states using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE) which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables in forming its explanation. In particular, conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure on the degree of relevance of the variables in the new explanation in explaining the evidence given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF is able to capture well the explaining-away phenomenon that is often represented in Bayesian networks. Moreover, we define two dominance relations between the candidate solutions and use the relations to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise.
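The quantity being maximized is the generalized Bayes factor GBF(x; e) = P(e | x) / P(e | not-x), where not-x covers all other joint states of the explanation variables. The brute-force sketch below works on a toy joint distribution (the numbers and variable names are made up purely for illustration; real MRE inference operates on a Bayesian network, not an explicit joint table).

```python
from itertools import chain, combinations, product

# Toy joint distribution P(A, B, E) over binary variables, with E observed.
P = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.02,
    (0, 1, 0): 0.10, (0, 1, 1): 0.08,
    (1, 0, 0): 0.10, (1, 0, 1): 0.10,
    (1, 1, 0): 0.05, (1, 1, 1): 0.25,
}
VARS = ("A", "B", "E")

def prob(assign):
    """Marginal probability of a partial assignment, e.g. {"A": 1, "E": 1}."""
    return sum(p for world, p in P.items()
               if all(world[VARS.index(v)] == val for v, val in assign.items()))

def gbf(x, e):
    """Generalized Bayes factor GBF(x; e) = P(e|x) / P(e|not-x)."""
    p_x, p_e = prob(x), prob(e)
    p_xe = prob({**x, **e})
    return (p_xe / p_x) / ((p_e - p_xe) / (1.0 - p_x))

def most_relevant_explanation(targets, e):
    """Brute-force MRE: maximize GBF over all non-empty partial instantiations."""
    best, best_score = None, float("-inf")
    subsets = chain.from_iterable(combinations(targets, r)
                                  for r in range(1, len(targets) + 1))
    for subset in subsets:
        for values in product([0, 1], repeat=len(subset)):
            x = dict(zip(subset, values))
            score = gbf(x, e)
            if score > best_score:
                best, best_score = x, score
    return best, best_score
```

With these (made-up) numbers, the single-variable explanation {A: 1} attains a higher GBF than any instantiation that also mentions B, illustrating how maximizing GBF prunes less relevant variables and keeps explanations concise.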
E. Talvitie and S. Singh (2011)
"Learning to Make Predictions In Partially Observable Environments Without a Generative Model",
Volume 42, pages 353-392
Abstract:
When faced with the problem of learning a model of a high-dimensional environment, a common approach is to limit the model to make only a restricted set of predictions, thereby simplifying the learning problem. These partial models may be directly useful for making decisions or may be combined together to form a more complete, structured model. However, in partially observable (non-Markov) environments, standard model-learning methods learn generative models, i.e. models that provide a probability distribution over all possible futures (such as POMDPs). It is not straightforward to restrict such models to make only certain predictions, and doing so does not always simplify the learning problem. In this paper we present prediction profile models: non-generative partial models for partially observable systems that make only a given set of predictions, and are therefore far simpler than generative models in some cases. We formalize the problem of learning a prediction profile model as a transformation of the original model-learning problem, and show empirically that one can learn prediction profile models that make a small set of important predictions even in systems that are too complex for standard generative models.
----------------------------------------------------------------
II. Unsubscribing from our Mailing List
To remove yourself from the JAIR subscribers mailing list, visit our
Web site (http://www.jair.org/), follow the link "notify me of new
articles", enter your email address in the form at the bottom of the
page, and follow the directions. In the event that you've already
deleted yourself from the list and we keep sending you messages like
this one, send mail to jair-ed at isi.edu.
----------------------------------------------------------------