DSpace Collection:
http://hdl.handle.net/2381/4072
2015-03-30T09:22:16Z
http://hdl.handle.net/2381/31932
Title: From Communicating Machines to Graphical Choreographies
Authors: Lange, Julien; Tuosto, Emilio; Yoshida, Nobuko
Editors: Rajamani, S. K.; Walker, D.
Abstract: Graphical choreographies, or global graphs, are general multiparty session specifications featuring expressive constructs such as forking, merging, and joining for representing application-level protocols. Global graphs can be directly translated into modelling notations such as BPMN and UML. This paper presents an algorithm whereby a global graph can be constructed from asynchronous interactions represented by communicating finite-state machines (CFSMs). Our results include: a sound and complete characterisation of a subset of safe CFSMs from which global graphs can be constructed; an algorithm to translate CFSMs to global graphs; a time complexity analysis; and an implementation of our theory, as well as an experimental evaluation.
http://hdl.handle.net/2381/31765
Title: Inferring Extended Finite State Machine Models from Software Executions
Authors: Walkinshaw, Neil; Taylor, R.; Derrick, J.
Editors: Robbes, R.; Oliveto, R.; Di Penta, M.
Abstract: The ability to reverse-engineer models of software behaviour is valuable for a wide range of software maintenance, validation and verification tasks. Current reverse-engineering techniques focus either on control-specific behaviour (e.g., in the form of Finite State Machines), or data-specific behaviour (e.g., as pre/post-conditions or invariants). However, typical software behaviour is usually a product of the two; models must combine both aspects to fully represent the software's operation. Extended Finite State Machines (EFSMs) provide such a model. Although attempts have been made to infer EFSMs, these have been problematic: the models inferred by these techniques can be non-deterministic, and the inference algorithms can be inflexible and applicable only to traces with specific characteristics. This paper presents a novel EFSM inference technique that addresses the problems of inflexibility and non-determinism. It also adapts an experimental technique from the field of Machine Learning to evaluate EFSM inference techniques, and applies it to three diverse software systems.
Description: INSPEC Accession Number: 13916984
http://hdl.handle.net/2381/31666
Title: Minimum Spanning Tree Verification under Uncertainty
Authors: Erlebach, Thomas R.; Hoffmann, Michael
Abstract: In the verification under uncertainty setting, an algorithm is given, for each input item, an uncertainty area that is guaranteed to contain the exact input value, as well as an assumed input value. An update of an input item reveals its exact value. If the exact value is equal to the assumed value, we say that the update verifies the assumed value. We consider verification under uncertainty for the minimum spanning tree (MST) problem for undirected weighted graphs, where each edge is associated with an uncertainty area and an assumed edge weight. The objective of an algorithm is to compute the smallest set of updates with the property that, if the updates of all edges in the set verify their assumed weights, the edge set of an MST can be computed. We give a polynomial-time optimal algorithm for the MST verification problem by relating the choices of updates to vertex covers in a bipartite auxiliary graph. Furthermore, we consider an alternative uncertainty setting where the vertices are embedded in the plane, the weight of an edge is the Euclidean distance between the endpoints of the edge, and the uncertainty is about the location of the vertices. An update of a vertex yields the exact location of that vertex. We prove that the MST verification problem in this vertex uncertainty setting is NP-hard. This shows a surprising difference in complexity between the edge and vertex uncertainty settings of the MST verification problem.
http://hdl.handle.net/2381/31661
Title: Computational complexity of traffic hijacking under BGP and S-BGP
Authors: Chiesa, M.; Di Battista, G.; Patrignani, M.; Erlebach, Thomas
Abstract: Harmful Internet hijacking incidents highlight the fragility of the Border Gateway Protocol (BGP), which is used to exchange routing information between Autonomous Systems (ASes). As proved by recent research contributions, even S-BGP, the secure variant of BGP that is being deployed, is not fully able to blunt traffic attraction attacks. Given a traffic flow between two ASes, we study how difficult it is for a malicious AS to devise a strategy for hijacking or intercepting that flow. We show that this problem marks a sharp difference between BGP and S-BGP. Namely, while it is solvable, under reasonable assumptions, in polynomial time for the type of attacks that are usually performed in BGP, it is NP-hard for S-BGP. Our study has several by-products. E.g., we solve a problem left open in the literature, stating when performing a hijacking in S-BGP is equivalent to performing an interception.
http://hdl.handle.net/2381/31648
Title: Approximation algorithms for disjoint st-paths with minimum activation cost
Authors: Alqahtani, Hasna Mohsen; Erlebach, Thomas
Abstract: In network activation problems we are given a directed or undirected graph G = (V,E) with a family {f_uv : (u,v) ∈ E} of monotone non-decreasing activation functions from D² to {0,1}, where D is a constant-size domain. The goal is to find activation values x_v for all v ∈ V of minimum total cost ∑_{v ∈ V} x_v such that the activated set of edges satisfies some connectivity requirements. Network activation problems generalize several problems studied in the network literature such as power optimization problems. We devise an approximation algorithm for the fundamental problem of finding the Minimum Activation Cost Pair of Node-Disjoint st-Paths (MA2NDP). The algorithm achieves approximation ratio 1.5 for both directed and undirected graphs. We show that a ρ-approximation algorithm for MA2NDP with fixed activation values for s and t yields a ρ-approximation algorithm for the Minimum Activation Cost Pair of Edge-Disjoint st-Paths (MA2EDP) problem. We also study the MA2NDP and MA2EDP problems for the special case |D| = 2.
http://hdl.handle.net/2381/31618
Title: Establishing the Source Code Disruption Caused by Automated Remodularisation Tools
Authors: Hall, M.; Khojaye, Muhammad; Walkinshaw, Neil; McMinn, P.
Editors: Poshyvanik, D.; Zaidman, A.
Abstract: Current software remodularisation tools only operate on abstractions of a software system. In this paper, we investigate the actual impact of automated remodularisation on source code, using a tool that automatically applies remodularisations as refactorings. This shows us that a typical remodularisation (as computed by the Bunch tool) will require changes to thousands of lines of code, spread throughout the system (typically no code files remain untouched). In a typical multi-developer project this presents a serious integration challenge, and could contribute to the low uptake of such tools in an industrial context. We relate these findings to our ongoing research into techniques that produce iterative, commit-friendly code changes to address this problem.
http://hdl.handle.net/2381/31524
Title: Minimum activation cost node-disjoint paths in graphs with bounded treewidth
Authors: Alqahtani, Hasna Mohsen; Erlebach, Thomas
Abstract: In activation network problems we are given a directed or undirected graph G = (V,E) with a family {f_uv : (u,v) ∈ E} of monotone non-decreasing activation functions from D² to {0,1}, where D is a constant-size subset of the non-negative real numbers, and the goal is to find activation values x_v for all v ∈ V of minimum total cost ∑_{v ∈ V} x_v such that the activated set of edges satisfies some connectivity requirements. We propose algorithms that optimally solve the minimum activation cost of k node-disjoint st-paths (st-MANDP) problem in O(tw((5 + tw)|D|)^(2tw + 2) |V|³) time and the minimum activation cost of node-disjoint paths (MANDP) problem for k disjoint terminal pairs (s₁,t₁),…,(s_k,t_k) in O(tw((4 + 3tw)|D|)^(2tw + 2) |V|) time for graphs with treewidth bounded by tw.
http://hdl.handle.net/2381/29328
Title: Lem: Reusable engineering of real-world semantics
Authors: Mulligan, Dominic P.; Gray, Kathryn E.; Sewell, Peter; Owens, Scott; Ridge, Tom
Abstract: Recent years have seen remarkable successes in rigorous engineering: using mathematically rigorous semantic models (not just idealised calculi) of real-world processors, programming languages, protocols, and security mechanisms, for testing, proof, analysis, and design. Building these models is challenging, requiring experimentation, dialogue with vendors or standards bodies, and validation; their scale adds engineering issues akin to those of programming to the task of writing clear and usable mathematics. But language and tool support for specification is lacking. Proof assistants can be used but bring their own difficulties, and a model produced in one, perhaps requiring many person-years of effort and maintained over an extended period, cannot be used by those familiar with another. We introduce Lem, a language for engineering reusable large-scale semantic models. The Lem design takes inspiration both from functional programming languages and from proof assistants, and Lem definitions are translatable into OCaml for testing, Coq, HOL4, and Isabelle/HOL for proof, and LaTeX and HTML for presentation. This requires a delicate balance of expressiveness, careful library design, and implementation of transformations - akin to compilation, but subject to the constraint of producing usable and human-readable code for each target. Lem's effectiveness is demonstrated by its use in practice. © 2014 ACM.
http://hdl.handle.net/2381/29327
Title: Simple, efficient, sound and complete combinator parsing for all context-free grammars, using an oracle
Authors: Ridge, Tom
Abstract: Parsers for context-free grammars can be implemented directly and naturally in a functional style known as “combinator parsing”, using recursion following the structure of the grammar rules. Traditionally parser combinators have struggled to handle all features of context-free grammars, such as left recursion.
Previous work introduced novel parser combinators that could be used to parse all context-free grammars. A parser generator built using these combinators was proved both sound and complete in the HOL4 theorem prover. Unfortunately the performance was not as good as other parsing methods such as Earley parsing.
In this paper, we build on this previous work, and combine it in novel ways with existing parsing techniques such as Earley parsing. The result is a sound-and-complete combinator parsing library that can handle all context-free grammars, and has good performance.
http://hdl.handle.net/2381/29206
Title: Succinct indices for range queries with applications to orthogonal range maxima
Authors: Farzan, Arash; Munro, J. Ian; Raman, Rajeev
Abstract: We consider the problem of preprocessing N points in 2D, each endowed with a priority, to answer the following queries: given an axis-parallel rectangle, determine the point with the largest priority in the rectangle. Using the ideas of the effective entropy of range maxima queries and succinct indices for range maxima queries, we obtain a structure that uses O(N) words and answers the above query in O(logN loglogN) time. This is a direct improvement of Chazelle's result from 1985 [10] for this problem - Chazelle required O(N/ε) words to answer queries in O((logN)[superscript 1+ε]) time for any constant ε > 0.
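As background for the range-maximum queries discussed above, the classical 1D sparse-table structure can be sketched as follows. This is only an illustrative Python sketch of the textbook baseline (function names are hypothetical), not the succinct O(N)-word structure of the paper:

```python
def build_sparse_table(priorities):
    """For every power-of-two window length, precompute the index of
    the largest priority.  Uses O(N log N) words, far from a succinct
    O(N)-word bound, but answers queries in O(1) time."""
    n = len(priorities)
    table = [list(range(n))]  # level 0: windows of length 1
    j = 1
    while (1 << j) <= n:
        prev = table[j - 1]
        half = 1 << (j - 1)
        row = []
        for i in range(n - (1 << j) + 1):
            a, b = prev[i], prev[i + half]
            row.append(a if priorities[a] >= priorities[b] else b)
        table.append(row)
        j += 1
    return table


def range_max_pos(priorities, table, i, j):
    """Index of the maximum priority in priorities[i..j] (inclusive):
    cover the range with two overlapping power-of-two windows."""
    k = (j - i + 1).bit_length() - 1
    a, b = table[k][i], table[k][j - (1 << k) + 1]
    return a if priorities[a] >= priorities[b] else b
```

The two-window trick works because taking a maximum is idempotent, so overlapping windows do not affect the answer.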
http://hdl.handle.net/2381/29205
Title: Succinct representations of ordinal trees
Authors: Raman, Rajeev; Rao, S. Srinivasa
Abstract: We survey succinct representations of ordinal, or rooted, ordered trees. Succinct representations use space that is close to the appropriate information-theoretic minimum, but support operations on the tree rapidly, usually in O(1) time.
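One representation covered by such surveys is the balanced-parenthesis encoding: a preorder walk writes '(' on entering a node and ')' on leaving it, and tree operations reduce to parenthesis matching. A minimal, non-succinct Python sketch (the O(n)-time matching below is exactly what real succinct structures replace with O(1)-time machinery; the nested-list node format is an assumption for illustration):

```python
def to_parens(tree):
    """Encode an ordinal tree (node = list of its children, as nested
    lists) as a balanced-parenthesis string via a preorder walk."""
    return "(" + "".join(to_parens(child) for child in tree) + ")"


def find_close(parens, i):
    """Position of the parenthesis matching the '(' at index i.
    Linear scan here; succinct representations answer this in O(1)."""
    depth = 0
    for j in range(i, len(parens)):
        depth += 1 if parens[j] == "(" else -1
        if depth == 0:
            return j
    raise ValueError("unbalanced parenthesis string")


def subtree_size(parens, i):
    """Number of nodes in the subtree rooted at the '(' at index i:
    each node contributes exactly one '(' and one ')'."""
    return (find_close(parens, i) - i + 1) // 2
```

The sequence uses 2 bits per node, matching the information-theoretic minimum mentioned in the abstract up to lower-order terms.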
http://hdl.handle.net/2381/29203
Title: Encodings for range selection and top-k queries
Authors: Grossi, Roberto; Iacono, John; Navarro, Gonzalo; Raman, Rajeev; Rao, S. Srinivasa
Abstract: We study the problem of encoding the positions of the top-k elements of an array A[1..n] for a given parameter 1 ≤ k ≤ n. Specifically, for any i and j, we wish to create a data structure that reports the positions of the largest k elements in A[i..j] in decreasing order, without accessing A at query time. This is a natural extension of the well-known encoding range-maxima query problem, where only the position of the maximum in A[i..j] is sought, and finds applications in document retrieval and ranking. We give (sometimes tight) upper and lower bounds for this problem and some variants thereof.
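The query semantics can be pinned down with a naive baseline that does read A at query time; the point of the encoding problem is to answer the same query without this access (illustrative Python sketch, hypothetical function name):

```python
import heapq


def range_topk(A, i, j, k):
    """Positions of the k largest values in A[i..j] (inclusive), in
    decreasing order of value (ties broken by smaller position).
    This naive version accesses A at query time, which is precisely
    what an encoding data structure must avoid."""
    window = [(-A[p], p) for p in range(i, j + 1)]
    heapq.heapify(window)
    return [heapq.heappop(window)[1] for _ in range(min(k, len(window)))]
```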
http://hdl.handle.net/2381/29198
Title: Discovery of network properties with all-shortest-paths queries
Authors: Bilo, Davide; Erlebach, Thomas Rainer; Mihalak, Matus; Widmayer, Peter
Editors: Shvartsman, A. A.; Felber, P.
Abstract: We consider the problem of discovering properties (such as the diameter) of an unknown network G(V,E) with a minimum number of queries. Initially, only the vertex set V of the network is known. Information about the edges and non-edges of the network can be obtained by querying nodes of the network. A query at a node q ∈ V returns the union of all shortest paths from q to all other nodes in V. We study the problem as an online problem: an algorithm does not initially know the edge set of the network, and has to decide where to make the next query based on the information that was gathered by previous queries. We study how many queries are needed to discover the diameter, a minimal dominating set, a maximal independent set, the minimum degree, and the maximum degree of the network. We also study the problem of deciding with a minimum number of queries whether the network is 2-edge or 2-vertex connected. We use the usual competitive analysis to evaluate the quality of online algorithms, i.e., we compare online algorithms with optimum offline algorithms. For all properties except maximal independent set and 2-vertex connectivity we present and analyze online algorithms. Furthermore we show, for all the aforementioned properties, that "many" queries are needed in the worst case. As our query model delivers more information about the network than the measurement heuristics that are currently used in practice, these negative results suggest that a similar behavior can be expected in realistic settings, or in more realistic models derived from the all-shortest-paths query model.
http://hdl.handle.net/2381/29193
Title: Asymptotically optimal encodings for range selection
Authors: Navarro, Gonzalo; Raman, Rajeev; Satti, Srinivasa Rao
Abstract: We consider the problem of preprocessing an array A[1..n] to answer range selection and range top-k queries. Given a query interval [i..j] and a value k, the former query asks for the position of the kth largest value in A[i..j], whereas the latter asks for the positions of all the k largest values in A[i..j]. We consider the encoding version of the problem, where A is not available at query time, and an upper bound κ on k, the rank that is to be selected, is given at construction time. We obtain data structures with asymptotically optimal size and query time on a RAM model with word size Θ(lg n): our structures use O(n lg κ) bits and answer range selection queries in time O(1 + lg k / lg lg n) and range top-k queries in time O(k), for any k ≤ κ.
Description: The file associated with this record is embargoed until the date of the conference.
http://hdl.handle.net/2381/29141
Title: Mining state-based models from proof corpora
Authors: Gransden, Thomas; Walkinshaw, Neil; Raman, Rajeev
Abstract: Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (contain few false-positive sequences) and that representing existing proofs in such a way can be very useful when guiding new ones.
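The usual starting point for this kind of state-machine inference can be sketched as building a prefix tree acceptor from example traces of proof steps. This is only a simplified illustration of the first stage (real techniques additionally merge states and infer data guards); all names below are hypothetical:

```python
def build_pta(traces):
    """Build a prefix tree acceptor from traces (lists of proof-step
    names).  Each state is a dict mapping a step to the id of the
    next state; state 0 is the root."""
    states = [{}]
    for trace in traces:
        cur = 0
        for step in trace:
            if step not in states[cur]:
                states.append({})
                states[cur][step] = len(states) - 1
            cur = states[cur][step]
    return states


def accepts(states, trace):
    """True if every step of the trace follows an existing transition,
    i.e. the trace is a prefix of some observed behaviour."""
    cur = 0
    for step in trace:
        if step not in states[cur]:
            return False
        cur = states[cur][step]
    return True
```

The acceptor exactly captures the observed prefixes; generalisation to unseen proof sequences comes from the later state-merging stage.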
http://hdl.handle.net/2381/29103
Title: Dynamizing succinct tree representations
Authors: Joannou, Stelios; Raman, Rajeev
Abstract: We consider succinct, or space-efficient, representations of ordinal trees. Representations exist that take 2n + o(n) bits to represent a static n-node ordinal tree - close to the information-theoretic minimum - and support navigational operations in O(1) time on a RAM model; and some implementations have good practical performance. The situation is different for dynamic ordinal trees. Although there is theoretical work on succinct dynamic ordinal trees, there is little work on the practical performance of these data structures. Motivated by applications to representing XML documents, in this paper, we report on a preliminary study on dynamic succinct data structures. Our implementation is based on representing the tree structure as a sequence of balanced parentheses, with navigation done using the min-max tree of Sadakane and Navarro (SODA '10). Our implementation shows promising performance for update and navigation, and our findings highlight two issues that we believe will be important to future implementations: the difference between the finger model of (say) Farzan and Munro (ICALP '09) and the parenthesis model of Sadakane and Navarro, and the choice of the balanced tree used to represent the min-max tree.
http://hdl.handle.net/2381/28881
Title: A structured approach to VO reconfigurations through Policies
Authors: Reiff-Marganiec, Stephan
Abstract: One of the strengths of Virtual Organisations is their ability to dynamically and rapidly adapt in response to changing environmental conditions. Dynamic adaptability has been studied in other system areas as well, and system management through policies has crystallized as a very prominent solution in system and network administration. However, these areas are often concerned with very low-level technical aspects. Previous work on the APPEL policy language has been aimed at dynamically adapting system behaviour to satisfy end-user demands and, as part of STPOWLA, APPEL was used to adapt workflow instances at runtime. In this paper we explore how the ideas of APPEL and STPOWLA can be extended from workflows to the wider scope of Virtual Organisations. We use a Travel Booking VO as an example.
http://hdl.handle.net/2381/28332
Title: Pure Type Systems with Corecursion on Streams: From Finite to Infinitary Normalisation
Authors: Severi, Paula; de Vries, Fer-Jan
Abstract: In this paper, we use types for ensuring that programs involving streams are well-behaved. We extend pure type systems with a type constructor for streams, a modal operator next and a fixed point operator for expressing corecursion. This extension is called Pure Type Systems with Corecursion (CoPTS). The typed lambda calculus for reactive programs defined by Krishnaswami and Benton can be obtained as a CoPTS. CoPTSs allow us to study a wide range of typed lambda calculi extended with corecursion using only one framework. In particular, we study this extension for the calculus of constructions which is the underlying formal language of Coq. We use the machinery of infinitary rewriting and formalise the idea of well-behaved programs using the concept of infinitary normalisation. The set of finite and infinite terms is defined as a metric completion. We establish a precise connection between the modal operator (• A) and the metric at a syntactic level by relating a variable of type (• A) with the depth of all its occurrences in a term. This syntactic connection between the modal operator and the depth is the key to the proofs of infinitary weak and strong normalisation.
http://hdl.handle.net/2381/28331
Title: Succinct Representations of Binary Trees for Range Minimum Queries
Authors: Davoodi, Pooya; Raman, Rajeev; Satti, Satti Srinivasa
Abstract: We provide two succinct representations of binary trees that can be used to represent the Cartesian tree of an array A of size n. Both the representations take the optimal 2n + o(n) bits of space in the worst case and support range minimum queries (RMQs) in O(1) time. The first one is a modification of the representation of Farzan and Munro (SWAT 2008); a consequence of this result is that we can represent the Cartesian tree of a random permutation in 1.92n + o(n) bits in expectation. The second one uses a well-known transformation between binary trees and ordinal trees, and ordinal tree operations to effect operations on the Cartesian tree. This provides an alternative, and more natural, way to view the 2D-Min-Heap of Fischer and Heun (SICOMP 2011). Furthermore, we show that the pre-processing needed to output the data structure can be performed in linear time using o(n) bits of extra working space, improving the result of Fischer and Heun who use n + o(n) bits working space.
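The Cartesian tree underlying both representations can be built in linear time with a stack. A standard pointer-based Python sketch of that construction (the paper's contribution is representing the resulting tree succinctly, not this construction itself):

```python
def cartesian_tree(A):
    """Linear-time stack construction of the min-Cartesian tree of A.
    Returns parent[i] for each index, with -1 for the root.  An RMQ
    on A[i..j] then corresponds to the lowest common ancestor of
    nodes i and j in this tree."""
    n = len(A)
    parent = [-1] * n
    stack = []  # indices whose values increase from bottom to top
    for i in range(n):
        last = -1
        # Pop everything larger than A[i]; it becomes i's left subtree.
        while stack and A[stack[-1]] > A[i]:
            last = stack.pop()
        if last != -1:
            parent[last] = i
        if stack:
            parent[i] = stack[-1]
        stack.append(i)
    return parent
```

Each index is pushed and popped at most once, giving the linear bound; the succinct representations in the paper store exactly the shape of this tree in 2n + o(n) bits.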
http://hdl.handle.net/2381/28247
Title: Dynamic Compressed Strings with Random Access
Authors: Grossi, Roberto; Raman, Rajeev; Rao, Satti Srinivasa; Venturini, Rossano
Abstract: We consider the problem of storing a string S in dynamic compressed form, while permitting operations directly on the compressed representation of S: access a substring of S; replace, insert or delete a symbol in S; count how many occurrences of a given symbol appear in any given prefix of S (called rank operation) and locate the position of the ith occurrence of a symbol inside S (called select operation). We discuss the time complexity of several combinations of these operations along with the entropy space bounds of the corresponding compressed indexes. In this way, we extend or improve the bounds of previous work by Ferragina and Venturini [TCS, 2007], Jansson et al. [ICALP, 2012], and Nekrich and Navarro [SODA, 2013].
http://hdl.handle.net/2381/28154
Title: Computing Minimum Spanning Trees with Uncertainty
Authors: Erlebach, Thomas; Hoffmann, Michael; Krizanc, Danny; Mihal’ák, Matúš; Raman, Rajeev
Editors: Albers, S.; Weil, P.
Abstract: We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge e of the graph only a set Aₑ, called an uncertainty area, that contains the actual edge weight wₑ is known. The algorithm can ‘update’ e to obtain the edge weight wₑ ∈ Aₑ. The task is to output the edge set of a minimum spanning tree after a minimum number of updates.
An algorithm is k-update competitive if it makes at most k times as many updates as the optimum. We present a 2-update competitive algorithm if all areas Aₑ are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas Aₑ is to exclude degenerate inputs for which no constant update competitive algorithm can exist.
Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the minimum spanning tree problem in the vertex uncertainty model. Again, we show that this is best possible among deterministic algorithms.
http://hdl.handle.net/2381/28128
Title: Inferring Extended Finite State Machine Models from Software Executions
Authors: Walkinshaw, Neil; Taylor, Ramsay; Derrick, John
Abstract: The ability to reverse-engineer models of software behaviour is valuable for a wide range of software maintenance, validation and verification tasks. Current reverse-engineering techniques focus either on control-specific behaviour (e.g. in the form of Finite State Machines), or data-specific behaviour (e.g. as pre/post-conditions or invariants). However, typical software behaviour is usually a product of the two; models must combine both aspects to fully represent the software’s operation. Extended Finite State Machines (EFSMs) provide such a model. Although attempts have been made to infer EFSMs, these have been problematic: the models inferred by these techniques can be non-deterministic, and the inference algorithms can be inflexible and applicable only to traces with specific characteristics. This paper presents a novel EFSM inference technique that addresses the problems of inflexibility and non-determinism. It also adapts an experimental technique from the field of Machine Learning to evaluate EFSM inference techniques, and applies it to two open-source software projects.
http://hdl.handle.net/2381/28080
Title: Mining Sequential Patterns from Probabilistic Databases
Authors: Muzammal, Muhammad; Raman, Rajeev
Editors: Huang, J.Z.; Cao, L.; Srivastava, J.
Abstract: We consider sequential pattern mining in situations where there is uncertainty about which source an event is associated with. We model this in the probabilistic database framework and consider the problem of enumerating all sequences whose expected support is sufficiently large. Unlike frequent itemset mining in probabilistic databases [C. Aggarwal et al. KDD’09; Chui et al., PAKDD’07; Chui and Kao, PAKDD’08], we use dynamic programming (DP) to compute the probability that a source supports a sequence, and show that this suffices to compute the expected support of a sequential pattern. Next, we embed this DP algorithm into candidate generate-and-test approaches, and explore the pattern lattice both in a breadth-first (similar to GSP) and a depth-first (similar to SPAM) manner. We propose optimizations for efficiently computing the frequent 1-sequences, for re-using previously-computed results through incremental support computation, and for eliminating candidate sequences without computing their support via probabilistic pruning. Preliminary experiments show that our optimizations are effective in reducing the CPU cost.
Description: Full text of this item is not currently available on the LRA. The final published version may be available through the links above.
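The flavour of the support-probability DP can be sketched in a deliberately simplified model, where each of a source's events occurs independently with a given probability (an existential-uncertainty assumption made here for illustration only; the paper's model attaches probabilities to source associations instead):

```python
def support_probability(events, pattern):
    """Probability that a source's realised event sequence contains
    `pattern` as a subsequence.  `events` is a list of (symbol, p)
    pairs, each occurring independently with probability p -- a
    simplified model, not the paper's source-level one.  dp[j] is the
    probability that greedy left-most matching has matched exactly j
    pattern symbols so far; greedy matching succeeds iff the pattern
    is a subsequence, so dp evolves as a Markov chain."""
    m = len(pattern)
    dp = [0.0] * (m + 1)
    dp[0] = 1.0
    for symbol, p in events:
        # Iterate j downwards so one event advances the match at most once.
        for j in range(m - 1, -1, -1):
            if pattern[j] == symbol:
                dp[j + 1] += dp[j] * p
                dp[j] *= 1.0 - p
    return dp[m]
```

Summing this quantity over all sources yields an expected support, which is the quantity the mining algorithms threshold.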
http://hdl.handle.net/2381/28079
Title: Range Extremum Queries
Authors: Raman, Rajeev
Abstract: There has been a renewal of interest in data structures for range extremum queries. In such problems, the input comprises N points, which are either elements of a d-dimensional matrix, that is, their coordinates are specified by the 1D submatrices they lie in (row and column indices for d = 2), or they are points in ℝ[superscript d]. Furthermore, associated with each point is a priority that is independent of the point’s coordinate. The objective is to pre-process the given points and priorities to answer the range maximum query (RMQ): given a d-dimensional rectangle, report the points with maximum priority. The objective is to minimize the space used by the data structure and the time taken to answer the above query. This talk surveys a number of recent developments in this area, focussing on the cases d = 1 and d = 2.
http://hdl.handle.net/2381/28052
Title: Random Access to Grammar-Compressed Strings
Authors: Bille, Philip; Landau, Gad M.; Raman, Rajeev; Sadakane, Kunihiko; Satti, Srinivasa Rao; Weimann, Oren
Abstract: Let S be a string of length N compressed into a context-free grammar S of size n. We present two representations of S achieving O(logN) random access time, and either O(n · α[subscript k](n)) construction time and space on the pointer machine model, or O(n) construction time and space on the RAM. Here, α[subscript k](n) is the inverse of the k[superscript th] row of Ackermann's function. Our representations also efficiently support decompression of any substring in S: we can decompress any substring of length m in the same complexity as a single random access query and additional O(m) time. Combining these results with fast algorithms for uncompressed approximate string matching leads to several efficient algorithms for approximate string matching on grammar-compressed strings without decompression. For instance, we can find all approximate occurrences of a pattern P with at most k errors in time O(n(min{|P|k,k[superscript 4] + |P|}+logN)+occ), where occ is the number of occurrences of P in S. Finally, we are able to generalize our results to navigation and other operations on grammar-compressed trees. All of the above bounds significantly improve the currently best known results. To achieve these bounds, we introduce several new techniques and data structures of independent interest, including a predecessor data structure, two "biased" weighted ancestor data structures, and a compact representation of heavy-paths in grammars.
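A useful baseline for random access is to annotate every nonterminal with the length of its expansion and descend from the root, skipping whole subtrees. This gives access in time proportional to the grammar's depth rather than the paper's O(log N), but it illustrates the core idea. The rule encoding here (ints for nonterminals, one-character strings for terminals) is invented for illustration.

```python
def expansion_lengths(rules):
    """Length of the string derived from each nonterminal.  `rules` maps a
    nonterminal (int) to its right-hand side, a list whose items are either
    terminals (str) or nonterminals (int)."""
    memo = {}
    def length(sym):
        if isinstance(sym, str):      # terminal expands to itself
            return 1
        if sym not in memo:
            memo[sym] = sum(length(child) for child in rules[sym])
        return memo[sym]
    for nt in rules:
        length(nt)
    return memo

def access(rules, lens, root, i):
    """Return the i-th character of the expansion of `root`, walking down
    the derivation tree and using precomputed lengths to skip subtrees."""
    sym = root
    while not isinstance(sym, str):
        for child in rules[sym]:
            n = 1 if isinstance(child, str) else lens[child]
            if i < n:
                sym = child
                break
            i -= n
    return sym
```

The paper's contribution is precisely to beat this depth-bounded walk: its biased weighted-ancestor structures shortcut long heavy paths so that every access costs O(log N) regardless of grammar depth.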
2013-07-11T12:05:53Z
Using Evidential Reasoning to Make Qualified Predictions of Software Quality
Walkinshaw, Neil
http://hdl.handle.net/2381/28050
2013-09-26T15:00:23Z
2013-07-04T15:35:16Z
Title: Using Evidential Reasoning to Make Qualified Predictions of Software Quality
Authors: Walkinshaw, Neil
Editors: Wagner, S.
Abstract: Software quality is commonly characterised in a top-down manner. High-level notions such as quality are decomposed into hierarchies of sub-factors, ranging from abstract notions such as maintainability and reliability to lower-level notions such as test coverage or team-size. Assessments of abstract factors are derived from relevant sources of information about their respective lower-level sub-factors, by surveying sources such as metrics data and inspection reports. This can be difficult because (1) evidence might not be available, (2) interpretations of the data with respect to certain quality factors may be subject to doubt and intuition, and (3) there is no straightforward means of blending hierarchies of heterogeneous data into a single coherent and quantitative prediction of quality. This paper shows how Evidential Reasoning (ER) - a mathematical technique for reasoning about uncertainty and evidence - can address this problem. It enables the quality assessment to proceed in a bottom-up manner, by the provision of low-level assessments that make any uncertainty explicit, and automatically propagating these up to higher-level 'belief-functions' that accurately summarise the developer's opinion and make explicit any doubt or ignorance.
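The belief-function machinery behind Evidential Reasoning can be illustrated with Dempster's rule of combination, which merges two independent bodies of evidence over the same frame of discernment. This is a simplified sketch: the paper's ER algorithm is a weighted variant of this idea, and the quality grades in the example are invented.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions by Dempster's rule.

    Each mass function is a dict mapping a frozenset of hypotheses (a subset
    of the frame) to a mass in [0, 1]; masses sum to 1.  Conflicting mass
    (assigned to an empty intersection) is discarded and the rest is
    renormalised."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}
```

For example, combining one assessor who assigns 0.6 to {good} and 0.4 to the "don't know" set {good, poor} with a second who assigns 0.7 and 0.3 respectively yields 0.88 belief in {good}: agreement between independent sources sharpens belief, while residual mass on the full set keeps the remaining ignorance explicit, which is the behaviour the abstract describes.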
2013-07-04T15:35:16Z
Completeness of Conversion between Reactive Programs for Ultrametric Models
Severi, Paula
de Vries, Fer-Jan
http://hdl.handle.net/2381/27961
2014-06-28T01:45:04Z
2013-06-11T10:45:55Z
Title: Completeness of Conversion between Reactive Programs for Ultrametric Models
Authors: Severi, Paula; de Vries, Fer-Jan
Abstract: In 1970 Friedman proved completeness of beta-eta conversion in the simply-typed lambda calculus for the set-theoretical model. Recently Krishnaswami and Benton have captured the essence of Hudak’s reactive programs in an extension of the simply-typed lambda calculus with causal streams and a temporal modality, and provided this typed lambda calculus for reactive programs with a sound ultrametric semantics. We show that beta-eta conversion in the typed lambda calculus of reactive programs is complete for the ultrametric model.
2013-06-11T10:45:55Z
Policies: Giving users control over calls
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/10913
2012-08-02T01:01:30Z
2012-08-01T13:14:50Z
Title: Policies: Giving users control over calls
Authors: Reiff-Marganiec, Stephan
Editors: Ryan, Mark D.; Meyer, John-Jules Ch.; Ehrich, Hans-Dieter
Abstract: Features provide extensions to a basic service, but in new systems users require much greater flexibility oriented towards their needs. Traditional features do not easily allow for this. We propose policies as the features of the future. Policies can be defined by the end-user, and allow for the use of rich context information when controlling calls. This paper discusses an architecture to allow for policy definition and call control by policies and describes the operation of a system based on this architecture. One aspect is policy definition; the APPEL policy description language serves this purpose. An important aspect of the architecture is integral feature interaction handling. It is in this last aspect that we foresee a role for agents, and hope that this paper will stimulate some collaboration between the two mostly distinct research areas of feature interaction and agent technologies.
2012-08-01T13:14:50Z
Mapping for activity recognition in the context-aware systems using software sensors
Pathan, Kamran Taj
Reiff-Marganiec, Stephan
Hong, Yi
http://hdl.handle.net/2381/10910
2012-08-01T01:01:28Z
2012-07-31T14:26:33Z
Title: Mapping for activity recognition in the context-aware systems using software sensors
Authors: Pathan, Kamran Taj; Reiff-Marganiec, Stephan; Hong, Yi
Abstract: Context-aware systems are concerned with identifying the context of a user and then either providing that information in response to queries or automatically deciding on appropriate actions to be taken. Some context aspects (such as location) are easy to sense through hardware, while the activity of a user has proven elusive to hardware sensors. As users use web services more frequently, they exchange messages with the services through the SOAP protocol. SOAP messages contain data which is valuable if gathered and interpreted correctly - especially as this data can shed light on activities of a user that go beyond "sitting at the computer and typing". We have developed software sensors, essentially based on monitoring SOAP messages and inserting data for further reasoning and querying into a semantic context model. In this paper we consider a solution to map the data from a SOAP message to our OWL ontology model automatically. Specifically, we explain the methodology to map from SOAP messages to an existing structure of knowledge.
2012-07-31T14:26:33Z
Supervised Software Modularisation
Hall, Mathew
Walkinshaw, Neil
McMinn, Phil
http://hdl.handle.net/2381/10886
2012-07-13T01:45:06Z
2012-07-09T13:19:00Z
Title: Supervised Software Modularisation
Authors: Hall, Mathew; Walkinshaw, Neil; McMinn, Phil
Abstract: This paper is concerned with the challenge of reorganising a software system into modules that both obey sound design principles and are sensible to domain experts. The problem has given rise to several unsupervised automated approaches that use techniques such as clustering and Formal Concept Analysis. Although results are often partially correct, they usually require refinement to enable the developer to integrate domain knowledge. This paper presents the SUMO algorithm, an approach that is complementary to existing techniques and enables the maintainer to refine their results. The algorithm is guaranteed to eventually yield a result that is satisfactory to the maintainer, and the evaluation on a diverse range of systems shows that this occurs with a reasonably low amount of effort.
Description: Accepted for presentation at 28th IEEE International Conference on Software Maintenance, 23-30 September 2012, Riva del Garda, Trento, Italy and submitted to the IEEE for publication in the Proceedings of this Conference.
2012-07-09T13:19:00Z
Computing the Structural Difference between State-Based Models
Bogdanov, Kirill
Walkinshaw, Neil
http://hdl.handle.net/2381/10883
2012-07-05T01:01:29Z
2012-07-04T14:26:24Z
Title: Computing the Structural Difference between State-Based Models
Authors: Bogdanov, Kirill; Walkinshaw, Neil
Editors: Zaidman, A.; Antoniol, G.; Ducasse, S.
Abstract: Software behaviour models play an important role in software development. They can be manually generated to specify the intended behaviour of a system, or they can be reverse-engineered to capture the actual behaviour of the system. Models may differ when they correspond to different versions of the system, or they may contain faults or inaccuracies. In these circumstances, it is important to be able to concisely capture the differences between models, a task that becomes increasingly challenging with complex models. This paper presents the PLTSDiff algorithm that addresses this problem. Given two state machines, the algorithm can identify which states and transitions are different. This can be used to generate a 'patch' with differences or to evaluate the extent of the differences between the machines. The paper also shows how the Precision and Recall measure can be adapted to quantify the similarity of two state machines.
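Once states of the two machines have been paired up, the adapted Precision and Recall measure reduces to set overlap on transitions. The sketch below assumes that state matching has already been done; PLTSDiff's actual state-pairing step is the hard part and is not shown here.

```python
def precision_recall(reference, hypothesis):
    """Compare two state machines given as sets of (src, label, dst)
    transitions over a shared (already matched) state space.

    Precision: fraction of hypothesis transitions that are correct.
    Recall: fraction of reference transitions that were found."""
    common = reference & hypothesis
    precision = len(common) / len(hypothesis) if hypothesis else 1.0
    recall = len(common) / len(reference) if reference else 1.0
    return precision, recall
```

A 'patch' in the abstract's sense corresponds to the two set differences: `hypothesis - reference` (transitions to remove) and `reference - hypothesis` (transitions to add).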
2012-07-04T14:26:24Z
Inferring Finite-State Models with Temporal Constraints
Walkinshaw, Neil
Bogdanov, Kirill
http://hdl.handle.net/2381/10882
2012-07-05T01:01:28Z
2012-07-04T14:09:44Z
Title: Inferring Finite-State Models with Temporal Constraints
Authors: Walkinshaw, Neil; Bogdanov, Kirill
Abstract: Finite state machine-based abstractions of software behaviour are popular because they can be used as the basis for a wide range of (semi-)automated verification and validation techniques. These can however rarely be applied in practice, because the specifications are rarely kept up-to-date or even generated in the first place. Several techniques to reverse-engineer these specifications have been proposed, but they are rarely used in practice because their input requirements (i.e. the number of execution traces) are often very high if they are to produce an accurate result. An insufficient set of traces usually results in a state machine that is either too general, or incomplete. Temporal logic formulae can often be used to concisely express constraints on system behaviour that might otherwise require thousands of execution traces to identify. This paper describes an extension of an existing state machine inference technique that accounts for temporal logic formulae, and encourages the addition of new formulae as the inference process converges on a solution. The implementation of this process is openly available, and some preliminary results are provided.
2012-07-04T14:09:44Z
Evaluation and Comparison of Inferred Regular Grammars
Walkinshaw, Neil
Bogdanov, Kirill
Johnson, Ken
http://hdl.handle.net/2381/10881
2012-07-04T13:02:45Z
2012-07-04T12:59:49Z
Title: Evaluation and Comparison of Inferred Regular Grammars
Authors: Walkinshaw, Neil; Bogdanov, Kirill; Johnson, Ken
Editors: Clark, Alexander; Coste, François; Miclet, Laurent
Abstract: The accuracy of an inferred grammar is commonly computed by measuring the percentage of sequences that are correctly classified from a random sample of sequences produced by the target grammar. This approach is problematic because (a) it is unlikely that a random sample of sequences will adequately test the grammar and (b) the use of a single probability value provides little insight into the extent to which a grammar is (in-)accurate. This paper addresses these two problems by proposing the use of established model-based testing techniques from the field of software engineering to systematically generate test sets, along with the use of the Precision and Recall measure from the field of information retrieval to concisely represent the accuracy of the inferred machine.
Description: Metadata only entry
2012-07-04T12:59:49Z
Using Compression Algorithms to Support the Comprehension of Program Traces
Walkinshaw, Neil
Afshan, Sheeva
McMinn, Phil
http://hdl.handle.net/2381/10878
2012-07-04T01:01:27Z
2012-07-03T15:48:18Z
Title: Using Compression Algorithms to Support the Comprehension of Program Traces
Authors: Walkinshaw, Neil; Afshan, Sheeva; McMinn, Phil
Abstract: Several software maintenance tasks such as debugging, phase-identification, or simply the high-level exploration of system functionality, rely on the extensive analysis of program traces. These usually require the developer to manually discern any repeated patterns that may be of interest from some visual representation of the trace. This can be both time-consuming and inaccurate; there is always the danger that visually similar trace-patterns actually represent distinct program behaviours. This paper presents an automated phase-identification technique. It is founded on the observation that the challenge of identifying repeated patterns in a trace is analogous to the challenge faced by data-compression algorithms, and applies an established data compression algorithm to identify repeated phases in traces. The SEQUITUR compression algorithm not only compresses data, but organises the repeated patterns into a hierarchy, which is especially useful from a comprehension standpoint, because it enables the analysis of a trace at varying levels of abstraction.
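The hierarchy-building behaviour can be approximated offline with Re-Pair-style digram replacement: repeatedly substitute the most frequent adjacent pair of symbols with a fresh rule. This is a simplified stand-in for SEQUITUR, which performs essentially the same replacement online in a single pass; the code and rule naming are illustrative.

```python
from collections import Counter

def digram_replace(trace):
    """Offline digram replacement (Re-Pair style).

    Returns (compressed sequence, rules), where each rule maps a fresh
    nonterminal name to the pair of symbols it replaces.  Nested rules
    form the phase hierarchy the abstract describes."""
    rules = {}
    next_id = 0
    seq = list(trace)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        pair, count = max(pairs.items(), key=lambda kv: kv[1]) if pairs else (None, 0)
        if count < 2:               # no pair repeats: stop
            return seq, rules
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0              # greedy left-to-right replacement
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
```

On a trace like `"abababab"`, the first pass yields a rule for the repeated call pair and the second a rule for the repeated phase, so an analyst can read the trace at either level of abstraction.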
2012-07-03T15:48:18Z
Iterative Refinement of Reverse-Engineered Models by Model-Based Testing
Walkinshaw, Neil
Derrick, John
Guo, Qiang
http://hdl.handle.net/2381/10877
2012-07-04T01:01:26Z
2012-07-03T15:11:38Z
Title: Iterative Refinement of Reverse-Engineered Models by Model-Based Testing
Authors: Walkinshaw, Neil; Derrick, John; Guo, Qiang
Editors: Cavalcanti, Ana; Dams, Dennis R.
Abstract: This paper presents an iterative technique to accurately reverse engineer models of the behaviour of software systems. The novelty of the approach is the fact that it uses model-based testing to refine the hypothesised model. The process can in principle be entirely automated, and only requires a very small amount of manually generated information to begin with. We have implemented the technique for use in the development of Erlang systems and describe both the methodology as well as our implementation.
2012-07-03T15:11:38Z
Increasing Functional Coverage by Inductive Testing: A Case Study
Walkinshaw, Neil
Bogdanov, Kirill
Derrick, John
Paris, Javier
http://hdl.handle.net/2381/10876
2012-07-04T01:01:24Z
2012-07-03T14:55:33Z
Title: Increasing Functional Coverage by Inductive Testing: A Case Study
Authors: Walkinshaw, Neil; Bogdanov, Kirill; Derrick, John; Paris, Javier
Editors: Petrenko, Alexandre; Simao, Adenilso; Maldonado, José Carlos
Abstract: This paper addresses the challenge of generating test sets that achieve functional coverage, in the absence of a complete specification. The inductive testing technique works by probing the system behaviour with tests, and using the test results to construct an internal model of software behaviour, which is then used to generate further tests. The idea in itself is not new, but prior attempts to implement it have been hampered by expense, scalability, and inflexibility with respect to testing strategies. In the past, inductive testing techniques have tended to focus on the inferred models, as opposed to the suitability of the test sets that were generated in the process. This paper presents a flexible implementation of the inductive testing technique, and demonstrates its application with a case study that applies it to the Linux TCP stack implementation. The evaluation shows that the generated test sets achieve a much better coverage of the system than would be achieved by similar non-inductive techniques.
2012-07-03T14:55:33Z
Behaviourally Adequate Software Testing
Fraser, Gordon
Walkinshaw, Neil
http://hdl.handle.net/2381/10875
2012-07-04T01:01:21Z
2012-07-03T14:33:43Z
Title: Behaviourally Adequate Software Testing
Authors: Fraser, Gordon; Walkinshaw, Neil
Editors: Bertolino, A.; Labiche, Y.
Abstract: Identifying a finite test set that adequately captures the essential behaviour of a program such that all faults are identified is a well-established problem. Traditional adequacy metrics can be impractical, and may be misleading even if they are satisfied. One intuitive notion of adequacy, which has been discussed in theoretical terms over the past three decades, is the idea of behavioural coverage; if it is possible to infer an accurate model of a system from its test executions, then the test set must be adequate. Despite its intuitive basis, it has remained almost entirely in the theoretical domain because inferred models have been expected to be exact (generally an infeasible task), and have not allowed for any pragmatic interim measures of adequacy to guide test set generation. In this work we present a new test generation technique that is founded on behavioural adequacy, which combines a model evaluation framework from the domain of statistical learning theory with search-based white-box test generation strategies. Experiments with our BESTEST prototype indicate that such test sets not only come with a statistically valid measurement of adequacy, but also detect significantly more defects.
2012-07-03T14:33:43Z
Incrementally Discovering Testable Specifications from Program Executions
Walkinshaw, Neil
Derrick, John
http://hdl.handle.net/2381/10874
2012-07-04T01:01:30Z
2012-07-03T14:00:36Z
Title: Incrementally Discovering Testable Specifications from Program Executions
Authors: Walkinshaw, Neil; Derrick, John
Editors: DeBoer, Frank S.; Bonsangue, Marcello M.; Hallerstede, Stefan; Leuschel, Michael
Abstract: The ProTest project is an EU FP7 project to develop techniques that improve the testing and verification of concurrent and distributed software systems. One of the four main work packages is concerned with the automated identification of specifications that could serve as a suitable basis for testing; this is currently a tedious and error-prone manual task that tends to be neglected in practice. This paper describes how this problem has been addressed in the ProTest project. It describes a technique that uses test executions to refine the specification from which they are generated. It shows how the technique has been implemented and applied to real Erlang systems. It also describes in detail the major challenges that remain to be addressed in future work.
2012-07-03T14:00:36Z
Assessing Test Adequacy for Black Box Systems without Specifications
Walkinshaw, Neil
http://hdl.handle.net/2381/10873
2012-07-04T01:01:31Z
2012-07-03T13:27:46Z
Title: Assessing Test Adequacy for Black Box Systems without Specifications
Authors: Walkinshaw, Neil
Editors: Wolff, Burkhart; Zaidi, Fatiha
Abstract: Testing a black-box system without recourse to a specification is difficult, because there is no basis for estimating how many tests will be required, or to assess how complete a given test set is. Several researchers have noted that there is a duality between these testing problems and the problem of inductive inference (learning a model of a hidden system from a given set of examples). It is impossible to tell how many examples will be required to infer an accurate model, and there is no basis for telling how complete a given set of examples is. These issues have been addressed in the domain of inductive inference by developing statistical techniques, where the accuracy of an inferred model is subject to a tolerable degree of error. This paper explores the application of these techniques to assess test sets of black-box systems. It shows how they can be used to reason in a statistically justified manner about the number of tests required to fully exercise a system without a specification, and how to provide a valid adequacy measure for black-box test sets in an applied context.
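One concrete instance of the statistical reasoning the abstract refers to is the finite-hypothesis-class PAC bound, which gives a number of random tests sufficient for any model consistent with them to be probably approximately correct. This is a hedged sketch of one standard bound from the inductive-inference literature, not the paper's exact formulation.

```python
import math

def pac_sample_size(epsilon, delta, hypothesis_space_size):
    """Number of independent random tests sufficient so that any model of
    the black-box system that is consistent with all of them has error at
    most `epsilon`, with probability at least 1 - `delta` (standard bound
    for a finite hypothesis class):

        n >= (ln|H| + ln(1/delta)) / epsilon
    """
    return math.ceil(
        (math.log(hypothesis_space_size) + math.log(1.0 / delta)) / epsilon
    )

# e.g. 1000 candidate models, 10% tolerable error, 95% confidence:
# pac_sample_size(0.1, 0.05, 1000)  → 100 tests
```

Read as an adequacy measure, the bound runs in reverse: given the number of tests actually executed, one can solve for the (epsilon, delta) guarantee they provide, which is the kind of statistically justified adequacy claim the paper is after.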
2012-07-03T13:27:46Z
Modular performance modelling for mobile applications
Arijo, Niaz
Heckel, Reiko
Tribastone, Mirco
Gilmore, Stephen
http://hdl.handle.net/2381/10206
2012-03-17T02:01:35Z
2012-03-16T10:15:01Z
Title: Modular performance modelling for mobile applications
Authors: Arijo, Niaz; Heckel, Reiko; Tribastone, Mirco; Gilmore, Stephen
Abstract: We propose a model-based approach to analysing the performance of mobile applications where physical mobility and state changes are modelled by graph transformations, from which a model in the Performance Evaluation Process Algebra (PEPA) is derived. To fight scalability problems with state space generation we adopt a modular solution where the graph transformation system is decomposed into views, for which labelled transition systems (LTS) are generated separately and later synchronised in PEPA. We demonstrate that the result of this modular analysis is equivalent to that of the monolithic approach and evaluate practicality and scalability by means of a case study.
Description: This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CPE'11 - Proceedings of the 2nd Joint WOSP/SIPEW International Conference on Performance Engineering, 2011, pp. 329-334. http://doi.acm.org/10.1145/1958746.1958793
2012-03-16T10:15:01Z
Maximising lifetime for fault-tolerant target coverage in sensor networks
Erlebach, Thomas
Grant, Tom
Kammer, Frank
http://hdl.handle.net/2381/10169
2012-03-13T02:01:43Z
2012-03-09T14:19:45Z
Title: Maximising lifetime for fault-tolerant target coverage in sensor networks
Authors: Erlebach, Thomas; Grant, Tom; Kammer, Frank
Abstract: We study the problem of maximising the lifetime of a sensor network for fault-tolerant target coverage in a setting with composite events. Here, a composite event is the simultaneous occurrence of a combination of atomic events, such as the detection of smoke and high temperature. We are given sensor nodes that have an initial battery level and can monitor certain event types, and a set of points at which composite events need to be detected. The points and sensor nodes are located in the Euclidean plane, and all nodes have the same sensing radius. The goal is to compute a longest activity schedule with the property that at any point in time, each event point is monitored by at least two active sensor nodes. We present a (6 + ε)-approximation algorithm for this problem by devising an approximation algorithm with the same ratio for the dual problem of minimising the weight of a fault-tolerant sensor cover and applying the Garg-Könemann algorithm. Our algorithm for the minimum-weight fault-tolerant sensor cover problem generalises previous approximation algorithms for geometric set cover with weighted unit disks and is obtained by enumerating properties of the optimal solution that guide a dynamic programming approach.
2012-03-09T14:19:45Z
Matching Customer Requests to Service Offerings in Real-Time
Tilly, Marcel
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/9699
2011-09-17T01:02:12Z
2011-09-16T14:13:19Z
Title: Matching Customer Requests to Service Offerings in Real-Time
Authors: Tilly, Marcel; Reiff-Marganiec, Stephan
Abstract: Classic request-response Service-oriented architecture (SOA) has reached a level of maturity where SOA inspired extensions are enabling new and creative domains like the Internet of Things, real-time business or the real-time Web. These new domains impose new requirements on SOA, such as a huge data volume, mediation between various data structures and a large number of sources that need to be procured, processed and provided with almost zero latency. Service selection is one of the areas where decisions have to be made based on consumer requests and service offerings. Processing this data requires typical SOA behavior combined with more elaborate approaches to process large amounts of data with near-zero latency. The approach presented in this paper combines pub-sub approaches for processing service offerings and mediations with classical request-response SOA approaches for consumer requests facilitated by Complex Event Processing (CEP). This paper presents a novel approach for subscribing to dynamic service properties and receiving up-to-date information in real-time. Therefore, we are able to select services with zero latency since there is no need to poll for property values anymore. The paper shows how to map requests to streaming data, how to process and answer complex requests with low latency and how to enable real-time service selection.
Description: This paper received the SAC 2011 Best Paper award in the Engineering Category.
2011-09-16T14:13:19Z
Modeling business processes of web services with extended STRIPS operations to detect feature interaction problems at runtime
Xu, Jiuyun
Chen, Kun
Duan, Youxiang
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/9687
2011-10-03T10:35:17Z
2011-09-13T15:19:30Z
Title: Modeling business processes of web services with extended STRIPS operations to detect feature interaction problems at runtime
Authors: Xu, Jiuyun; Chen, Kun; Duan, Youxiang; Reiff-Marganiec, Stephan
Abstract: Service-Oriented Computing achieves its full potential when services interoperate. Current service-oriented computing research is concerned with the low-level interoperation among services, such as service discovery, service composition etc. However, a higher-level research issue in the form of the feature interaction problem is challenging the interoperation of services. Traditional feature interaction methods are focused on the service design phase, using formal methods or pragmatic software engineering analysis. Autonomy and distribution of service development and deployment create needs for runtime detection and resolution of feature interactions in SOC. This paper investigates the detection of feature interactions in web services at runtime and proposes ESTRIPS, an extended STRIPS operation, to ensure conflict-free services are identified to populate business processes, using a combination of OWL-S, SWRL and runtime data extracted from SOAP messages. First, we define the feature interaction problem in business processes during their execution and then introduce the ESTRIPS method. The implementation of a prototype is illustrated and a real-world scenario shows the plausibility of our method for detecting feature interactions in business processes.
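The flavour of STRIPS-based interaction detection can be sketched as follows: each service feature is an operator with preconditions, add-effects and delete-effects, and two operators interact when one destroys what the other requires or establishes. The operator encoding and conflict rule below are illustrative only, not the paper's ESTRIPS definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    """A STRIPS-style operator: preconditions, add-effects, delete-effects."""
    pre: frozenset
    add: frozenset
    delete: frozenset

def interact(a, b):
    """Two operators interact if either deletes a fact that the other
    requires as a precondition or establishes as an effect."""
    def clashes(x, y):
        return bool(x.delete & (y.pre | y.add))
    return clashes(a, b) or clashes(b, a)
```

For instance, a hypothetical `logout` feature that deletes `logged_in` interacts with a `transfer` feature that requires it, which is the kind of conflict a runtime detector would flag before populating a business process with both services.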
Description: This paper was presented at the ICWS 2011, IEEE 9th International Conference on Web Services, 4–9 July 2011, Washington DC, USA and published in the proceedings.
2011-09-13T15:19:30Z
Towards A Collaborative Framework for Image Annotation and Search
Hong, Yi
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/9685
2011-09-14T01:01:45Z
2011-09-13T14:38:04Z
Title: Towards A Collaborative Framework for Image Annotation and Search
Authors: Hong, Yi; Reiff-Marganiec, Stephan
Abstract: Users tag images with plain text information, which is then used as the basis for search. For the large amount of digital images available on the web this becomes challenging because the tags are abstract concepts whose relationship is undefined. Effective search, which requires reasoning on concepts and their relations, calls for richer data structures for tagging and needs to take into account the confidence and credibility of the tagging user. In this paper, we introduce a novel collaborative framework for image annotation, which allows users to create tags that are based on a concept repository which provides a hierarchical context for them, as well as allowing users to define relationships among said concepts. It also provides a new and systematic way to establish user credibility as well as to compute the truthfulness or reliability of a particular statement, which are used for ranking search results. A prototype has been implemented using this approach and we show some examples to explain our methodology in detail.
Description: This paper was presented at CAiSE 2011 International Workshops, London, UK, June 20-24, 2011 and published in the proceedings.
2011-09-13T14:38:04Z
Structure and Behaviour of Virtual Organisation Breeding Environments
Bocchi, Laura
Fiadeiro, José
Rajper, Noor
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/9666
2011-09-09T01:02:02Z
2011-09-08T15:10:57Z
Title: Structure and Behaviour of Virtual Organisation Breeding Environments
Authors: Bocchi, Laura; Fiadeiro, José; Rajper, Noor; Reiff-Marganiec, Stephan
Abstract: This paper provides an outline of a formal approach that we are developing for modelling Virtual Organisations (VOs) and their Breeding Environments (VBEs). We propose different levels of representation for the functional structures and processes that VBEs and VOs involve, which are independent of the specificities of the infrastructures (organisational and technical) that support the functioning of VBEs. This allows us to reason about properties of tasks performed within VBEs and services provided through VOs without committing to the way in which they are implemented.
Description: This paper was presented at Formal Aspects of Virtual Organisations (FAVO 2009), 3 November 2009, Eindhoven, The Netherlands and published in the proceedings.
2011-09-08T15:10:57Z
Process Reservation for Service-Oriented Applications
Chen, HongHui
Ma, JianWei
Huangfu, Xianpeng
Guo, Deke
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/9661
2011-09-07T01:01:47Z
2011-09-06T11:26:39Z
Title: Process Reservation for Service-Oriented Applications
Authors: Chen, HongHui; Ma, JianWei; Huangfu, Xianpeng; Guo, Deke; Reiff-Marganiec, Stephan
Abstract: With the increasing use of services, sustaining the resources which people need has become more important. In this paper, we propose an effective reservation method, called “BPSR” (Business Process Service Reservation), which aims at process reservation, which to the best of our knowledge has not been studied before. In particular we address four major tasks: service differentiation, service reservation, process reservation and QoS control. We also describe a flexible policy-based reservation method which aims to increase the success rate of reservation and utilization of service resources. Experimental analysis shows that the BPSR reservation system achieves better results than other reservation methods.
Description: This paper was presented at the 6th World Congress on Services, Miami, Florida, July 05-July 10 2010 and published in the proceedings.
2011-09-06T11:26:39Z
Web Services Feature Interaction Detection based on Situation Calculus
Xu, Jiuyun
Yu, Wengong
Chen, Kun
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/9660
2011-09-07T01:01:49Z
2011-09-06T10:48:01Z
Title: Web Services Feature Interaction Detection based on Situation Calculus
Authors: Xu, Jiuyun; Yu, Wengong; Chen, Kun; Reiff-Marganiec, Stephan
Abstract: Feature interaction was identified as a problem in the telecommunications domain in the 1980s, but has since been shown to be a problem for any system composed of individually designed components. Clearly Web service composition is a way of building services from independently designed components and hence is subject to the same problem. This paper investigates the detection of feature interactions in Web services at runtime and proposes a novel detection method by taking inspiration from the Situation Calculus. Two case studies show that it is effective for detecting feature interactions in composite Web services.
Description: This paper was presented at the 6th World Congress on Services, Miami, Florida, July 05-July 10 2010 and published in the proceedings.
2011-09-06T10:48:01Z
Hybrid Solutions to the Feature Interaction Problem
Calder, Muffy
Kolberg, Mario
Magill, Evan
Marples, Dave
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/9464
2011-08-25T12:53:18Z
2011-06-14T14:36:10Z
Title: Hybrid Solutions to the Feature Interaction Problem
Authors: Calder, Muffy; Kolberg, Mario; Magill, Evan; Marples, Dave; Reiff-Marganiec, Stephan
Abstract: In this paper we assume a competitive marketplace where features are developed by different enterprises, which cannot or will not exchange information. We present a classification of feature interactions in this setting and introduce an online technique which serves as a basis for the two novel hybrid approaches presented. These approaches are hybrid because they are neither strictly off-line nor on-line, but combine aspects of both. The two approaches address different kinds of feature interactions and are thus complementary. Together they provide a complete solution by addressing both interaction detection and resolution. We illustrate the techniques within the communication networks domain.
Description: This paper was presented at FIW’03 - Seventh International Workshop on Feature Interactions in Telecommunication and Software Systems and published in the proceedings. The published version is available at http://www.iospress.nl/.
2011-06-14T14:36:10Z
A Policy Architecture for Enhancing and Controlling Features
Reiff-Marganiec, Stephan
Turner, Kenneth J.
http://hdl.handle.net/2381/9463
2011-08-25T12:54:22Z
2011-06-14T14:18:23Z
Title: A Policy Architecture for Enhancing and Controlling Features
Authors: Reiff-Marganiec, Stephan; Turner, Kenneth J.
Abstract: Features provide extensions to a basic service, but in new systems users require much greater flexibility oriented towards their needs. Traditional features do not easily allow for this. We propose policies as the features of the future. Policies can be defined by the end-user and allow rich context information to be used when controlling calls. This paper introduces an architecture for policy definition and call control by policies. We discuss the operation of systems based on such an architecture. An important aspect of the architecture is integral feature interaction handling.
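A minimal sketch of end-user policies that consult call context, assuming a simple first-match semantics and invented names (`Policy`, `decide`) rather than the paper's actual policy language, might be:

```python
# Illustrative sketch of context-aware call-control policies.
# The Policy structure, the first-match semantics, and the action names
# are assumptions for this example, not the architecture's definitions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    applies: Callable[[dict], bool]   # predicate over the call context
    action: str                        # e.g. "forward", "reject", "allow"

def decide(policies: list, context: dict) -> str:
    """Return the action of the first applicable policy; default to 'allow'."""
    for p in policies:
        if p.applies(context):
            return p.action
    return "allow"
```

In the architecture described, the integral interaction handling would additionally arbitrate when several applicable policies prescribe incompatible actions, which a plain first-match rule glosses over.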
Description: This paper was presented at FIW’03 - Seventh International Workshop on Feature Interactions in Telecommunication and Software Systems and published in the proceedings. The published version is available at http://www.iospress.nl/.
2011-06-14T14:18:23Z
Markov-HTN Planning Approach to Enhance Flexibility of Automatic Web Services Composition
Reiff-Marganiec, Stephan
Chen, Kun
Xu, Jiuyun
http://hdl.handle.net/2381/9133
2011-03-14T16:52:38Z
2011-03-04T10:58:55Z
Title: Markov-HTN Planning Approach to Enhance Flexibility of Automatic Web Services Composition
Authors: Reiff-Marganiec, Stephan; Chen, Kun; Xu, Jiuyun
Abstract: Automatic Web services composition can be achieved by using AI planning techniques. HTN planning has been adopted to handle the OWL-S Web service composition problem. However, existing composition methods based on HTN planning have not considered the choice of decompositions available for a problem, which can lead to a variety of valid solutions. In this paper, we propose a model combining a Markov decision process with HTN planning to address Web services composition. In the model, HTN planning is enhanced to decompose a task in multiple ways and hence find more than one plan, taking both functional and non-functional properties into account. Furthermore, we present an evaluation method for choosing the optimal plan, and experimental results illustrate that the proposed approach works effectively.
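The core idea, enumerating the plans produced by alternative task decompositions and then selecting the best one by a scoring function, can be sketched as follows. The plain `score` callable stands in for the paper's MDP-based evaluation, and all names are illustrative assumptions:

```python
# Hypothetical sketch of HTN decomposition with multiple methods per task.
# `methods` maps a compound task to alternative decompositions (lists of
# subtasks); `score` stands in for the paper's MDP-based plan evaluation.

def decompose(task, methods):
    """Return every primitive plan reachable by recursively decomposing
    `task` with any applicable method."""
    if task not in methods:            # primitive task: a one-step plan
        return [[task]]
    plans = []
    for alternative in methods[task]:
        partials = [[]]
        for sub in alternative:        # chain subtask expansions in order
            partials = [p + q for p in partials for q in decompose(sub, methods)]
        plans.extend(partials)
    return plans

def best_plan(task, methods, score):
    """Choose the plan with the highest score, e.g. an expected reward
    combining functional and non-functional properties."""
    return max(decompose(task, methods), key=score)
```

Enumerating all plans before scoring is the simplest formulation; an MDP view instead folds the value estimates into the decomposition choices themselves, pruning poor branches early.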
Description: This is the authors' final draft (accepted version), ©2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. The original published version can be found on the publisher's website at: http://www.ieee.org/index.html ; DOI: 10.1109/ICWS.2009.43
2011-03-04T10:58:55Z