DSpace Collection:
http://hdl.handle.net/2381/4072
2015-07-06T14:00:53Z

Reducing Idle Listening during Data Collection in Wireless Sensor Networks
http://hdl.handle.net/2381/32370
Title: Reducing Idle Listening during Data Collection in Wireless Sensor Networks
Authors: Rasul, Aram; Erlebach, Thomas
Abstract: Data collection is one of the predominant operations in wireless sensor networks. This paper focuses on the problem of efficient data collection in a setting where some nodes may not possess data each time data is collected. In that case, idle listening slots may occur, which lead to a waste of energy and an increase in latency. To alleviate these problems, successive-slot schedules were proposed by Zhao and Tang (Infocom 2011). In this paper, we introduce a so-called extra-bit technique to reduce idle listening further. Each packet includes an extra bit that informs the receiver whether further data packets will follow or not. The extra-bit technique leads to significantly reduced idle listening and improved latency in many cases. We prove that every successive-slot schedule is also an extra-bit schedule. We then consider the special case of linear networks and prove that the optimal length of a successive-slot schedule (or extra-bit schedule) is 4N − 6 time slots, where N ≥ 3 is the number of nodes excluding the sink. Then the proposed extra-bit technique is compared with the successive-slot technique with respect to the expected amount of idle listening, and it is shown that the extra-bit technique reduces idle listening substantially.
Description: timestamp: Thu, 05 Mar 2015 11:05:43 +0100 biburl: http://dblp.uni-trier.de/rec/bib/conf/msn/RasulE14 bibsource: dblp computer science bibliography, http://dblp.org
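The idle-listening difference described in the abstract can be sketched in a few lines (a hypothetical simulation of the idea, not the paper's schedule construction; the function names and slot model are our own): a receiver following a successive-slot schedule listens until it encounters an empty slot, while with the extra bit the last packet itself announces that no more data follows.

```python
# Hypothetical sketch of the idle-listening difference described in the
# abstract; not the paper's schedule construction. A receiver is assigned
# `num_slots` reserved slots but only `packets_sent` of them carry data.

def idle_slots_successive(num_slots, packets_sent):
    """Successive-slot style: the receiver keeps listening until it hits
    an empty slot, so one idle slot occurs unless every slot is used."""
    return 1 if packets_sent < num_slots else 0

def idle_slots_extra_bit(packets_sent):
    """Extra-bit style: each packet flags whether more data follows, so
    the receiver only listens idly if no packet arrives at all."""
    return 1 if packets_sent == 0 else 0

# With 4 reserved slots and 2 packets, a successive-slot receiver wastes
# one idle slot while the extra-bit receiver wastes none.
print(idle_slots_successive(4, 2), idle_slots_extra_bit(2))
```

Under this toy model the extra bit removes the trailing idle slot whenever at least one packet arrives, which is the saving the abstract quantifies in expectation.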
2015-06-12T09:26:08Z

Better Algorithms for Online Bin Stretching
http://hdl.handle.net/2381/32322
Title: Better Algorithms for Online Bin Stretching
Authors: Böhm, Martin; Sgall, Jiří; Stee, Rob van; Veselý, Pavel
Abstract: Online Bin Stretching is a semi-online variant of bin packing in which the algorithm has to use the same number of bins as the optimal packing, but is allowed to slightly overpack the bins. The goal is to minimize the amount of overpacking, i.e., the maximum size packed into any bin. We give an algorithm for Online Bin Stretching with a stretching factor of 1.5 for any number of bins. We also show a specialized algorithm for three bins with a stretching factor of 11/8 = 1.375.
Description: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-18263-6_3
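The notion of a stretching factor can be made concrete with a toy computation (our own example, not the paper's 1.5-competitive algorithm): items that fit exactly into two unit-capacity bins are packed by a naive rule, and the maximum bin load measures the overpacking.

```python
def stretching_factor(bins, capacity):
    """Maximum bin load relative to the capacity an optimal packing
    would respect; values above 1 indicate overpacking (stretch)."""
    return max(sum(b) for b in bins) / capacity

# Integer item sizes with capacity 10 avoid floating-point noise.
# The items {6, 6, 4, 4} fit exactly into two bins as {6,4} and {6,4},
# but a packing that pairs the two large items must stretch a bin:
naive = [[6, 6], [4, 4]]
print(stretching_factor(naive, 10))  # 1.2
```

An online algorithm with stretching factor 1.5 guarantees that no bin load ever exceeds 1.5 times the capacity the optimum respects.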
2015-06-01T10:31:30Z

The optimal absolute ratio for online bin packing
http://hdl.handle.net/2381/32320
Title: The optimal absolute ratio for online bin packing
Authors: Balogh, János; Békési, József; Dósa, György; Sgall, Jiří; Stee, Rob van
Abstract: We present an online bin packing algorithm with absolute competitive ratio 5/3, which is optimal.
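For readers unfamiliar with absolute competitive ratios, the classical First Fit heuristic (shown here purely for illustration; it is not the paper's 5/3-competitive algorithm) makes the notion concrete: the ratio compares the number of bins an online algorithm opens against the optimum on the same instance.

```python
import math

def first_fit(items, capacity):
    """Classical First Fit: place each item into the first open bin
    with enough remaining room, opening a new bin if none fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# Integer sizes, capacity 10. An optimal packing uses three bins of
# {4, 6}; First Fit, seeing the small items first, opens a fourth bin.
items = [4, 4, 4, 6, 6, 6]
bins = first_fit(items, capacity=10)
opt_lower_bound = math.ceil(sum(items) / 10)  # = 3, and 3 is achievable
print(len(bins), opt_lower_bound)  # 4 3
```

On this instance First Fit's ratio is 4/3; the absolute competitive ratio is the worst such ratio over all instances, and the paper shows that 5/3 is the best achievable by any online algorithm.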
2015-06-01T09:53:03Z

Performance and energy evaluation of RESTful web services in Raspberry Pi
http://hdl.handle.net/2381/32267
Title: Performance and energy evaluation of RESTful web services in Raspberry Pi
Authors: Nunes, L. H.; Nakamura, L. H. V.; De F. Vieira, H.; De O. Libardi, R. M.; De Oliveira, E. M.; Estrella, J. C.; Reiff-Marganiec, Stephan
Abstract: Green computing has emerged as a hot topic, leading to a need to understand the energy consumption of computations. This need also extends to devices with limited resources as are common in the Internet of Things. RESTful services have shown their potential on such devices, but there are many choices of frameworks for their development and execution. Current research has analysed the performance of the frameworks, but no attention has been given to systematically studying their power consumption. In this paper we analyse the execution behaviour and power consumption of web services on devices with limited resources and make initial observations that should influence the future development of web service frameworks. Specifically, we conduct experiments comparing web services in the Axis2 and CXF frameworks, analysing the respective performance and power consumption. Bringing together the best features of small devices and SoC, it is possible to provide diverse, mobile and green applications; however, careful selection of development environments can make significant differences in performance and energy consumption.
2015-05-13T10:06:24Z

Observing access control policies using scrabble games
http://hdl.handle.net/2381/32266
Title: Observing access control policies using scrabble games
Authors: Ahmad, S.; Abidin, S. Z. Z.; Omar, N.; Reiff-Marganiec, Stephan
Abstract: Access control is concerned with the policies that manage data sharing activities, and it plays a crucial role in application areas such as education, health and business. However, most programming languages and programming environments do not naturally provide support for implementing access control policies, so policies have to be coded as part of the development effort. Access control management policies are high-level features, so having to involve a programmer during deployment to change policies is costly. In this paper, we present an abstraction of access control management policies in the form of a Scrabble game with extended rules. The program constructs needed to support access control policies in this game are examined. New relevant program constructs are then incorporated into JACIE (Java-based Authoring language for Collaborative Interactive Environments). The usefulness of these program constructs is demonstrated through the extended Scrabble game.
2015-05-13T09:55:17Z

Classifying Smart Objects using capabilities
http://hdl.handle.net/2381/32265
Title: Classifying Smart Objects using capabilities
Authors: Pérez Hernández, Marco E.; Reiff-Marganiec, Stephan
Abstract: The Internet of Things has emerged, providing an umbrella for the increasing number of heterogeneous Smart Objects that are becoming part of our daily activities. In this scenario, classification approaches are useful for understanding differences and identifying opportunities for generalization and common solutions, especially when different disciplines are coming together and bringing their individual terms and concepts. We propose a novel model for classifying Smart Objects using capabilities. This five-level model, inspired by the Capability Maturity Models, aims to be simple and inclusive, separating objects with basic capabilities from those with complex ones. In addition, examples of objects for each level are provided as validation of the proposal. The model is useful for identifying requirements that Smart Objects have to cover externally, as they cannot support them themselves, and it thus allows for a clear understanding of the external support system (or middleware) into which the Smart Object is embedded.
2015-05-13T09:46:00Z

Design and implementation of fault tolerance techniques to improve QoS in SOA
http://hdl.handle.net/2381/32264
Title: Design and implementation of fault tolerance techniques to improve QoS in SOA
Authors: Oliveira, E. M.; Estrella, J. C.; Kuehne, B. T.; Filho, D. M. L.; Adami, L. J.; Nunes, L. H.; Nakamura, L. H.; Libardi, R. M.; Souza, P. S. L.; Reiff-Marganiec, Stephan
Abstract: Fault tolerance techniques can improve the trust of users in service oriented architectures, as they can ensure data availability. This paper presents an implementation of a novel fault tolerance mechanism in a SOA architecture which simultaneously provides increased availability and better quality of service. In addition to this mechanism, a service selector using reputation ratings of the architecture components is discussed. The selection is based on information from past transactions of the components of the architecture, which makes it possible to identify the web services best able to meet the requests of customers. The mechanisms are tested and a performance evaluation is presented to validate the results.
2015-05-13T09:23:28Z

A semantic approach for efficient and customized management of IaaS resources
http://hdl.handle.net/2381/32263
Title: A semantic approach for efficient and customized management of IaaS resources
Authors: Nakamura, L. H. V.; Estrella, J. C.; Santana, R. H. C.; Santana, M. J.; Reiff-Marganiec, Stephan
Abstract: This paper presents a semantic approach to custom management of IaaS (Infrastructure as a Service) resources in a cloud computing environment requiring minimal human intervention from both the cloud provider and the user. The proposal differs from other approaches by using autonomic computing and semantic web techniques together to provide a self-configuring and self-optimizing environment that aims to satisfy SLAs (Service Level Agreements). The approach monitors the virtualized resources to guarantee a customized and optimized use based on financial criteria and energy consumption policies.
2015-05-13T09:15:14Z

Managing access control policy from end user perspective in collaborative environment
http://hdl.handle.net/2381/32252
Title: Managing access control policy from end user perspective in collaborative environment
Authors: Ahmad, S.; Omar, N.; Abidin, S. Z. Z.; Reiff-Marganiec, Stephan
Abstract: Currently, collaborative environments offer unlimited data sharing for users, yet data owners are only poorly involved in the handling of their data when it comes to data policy. Normally, a data access control policy consists of resource and authorization descriptions that are assigned by the administrator. It is the responsibility of the administrator to set and specify the policy for application services. The policy details are massive and complex for the administrator to handle, so that services often remain unreviewed. This paper proposes a framework that allows data owners to provision policies for storing and managing the data they share with third parties. By adapting the RBAC model and adding the owner's interests in permissions for data operations and objects, the proposed framework facilitates data access control, giving owners the freedom to set their own data access policies.
2015-05-08T14:55:00Z

Performance Evaluation in a Cloud with the Provisioning of Different Resources Configurations
http://hdl.handle.net/2381/32250
Title: Performance Evaluation in a Cloud with the Provisioning of Different Resources Configurations
Authors: Batista, B. G.; Estrella, J. C.; Santana, M. J.; Santana, R. H. C.; Reiff-Marganiec, Stephan
Abstract: Cloud computing is a computing style where resource providers can offer on-demand services in a transparent way and clients usually pay as they go. It introduces a new level of flexibility and scalability for IT users, addressing challenges such as the rapid change in IT and the need to reduce the cost and time of infrastructure management. However, to be able to offer QoS guarantees without limiting the number of accepted requests, providers must be able to dynamically adjust the available resources to serve requests. This dynamic resource management is not a trivial task, bringing its own challenges related to workload and performance modelling, and to the deployment and monitoring of applications on virtualised IT resources. An efficient mapping between resources and applications ensures workload balancing and good resource utilization, and makes it possible to meet the QoS levels required by clients. This paper presents a performance evaluation that considers different resource configurations in a cloud environment to define which dimension of resource scaling has real impact on client applications.
2015-05-08T14:11:44Z

Low-Latency Service Data Aggregation Using Policy Obligations
http://hdl.handle.net/2381/32249
Title: Low-Latency Service Data Aggregation Using Policy Obligations
Authors: Reiff-Marganiec, Stephan; Tilly, M.; Janicke, H.
Abstract: The Internet of Things, large-scale sensor networks and even social media are now well established, and their use is growing daily. Usage scenarios in these fields highlight the requirement to process, procure, and provide information with almost zero latency. This work introduces new concepts for enabling fast communication by limiting information flow through filtering concepts combined with data processing techniques adopted from complex event processing. Specifically, we introduce a novel mediation service architecture using filter policies to reduce latency. The filter policies define when and what data services need to provide to the mediator and thus save on bandwidth. The filter policies describe temporal conditions between two events, removing the need to keep a complete history while still allowing temporal reasoning. Promising experimental results highlight the advantages to be gained from the approach.
2015-05-08T13:59:21Z

PEESOS: A Web Tool for Planning and Execution of Experiments in Service Oriented Systems
http://hdl.handle.net/2381/32241
Title: PEESOS: A Web Tool for Planning and Execution of Experiments in Service Oriented Systems
Authors: Nunes, L. H.; Vasconcelos Nakamura, L. H.; Tardiole Kuehne, B.; De Oliveira, E. M.; De O. Libardi, R. M.; Junqueira Adami, L.; Estrella, J. C.; Reiff-Marganiec, Stephan
Abstract: Performing functionality testing in service-oriented architectures is not a trivial task. The difficulty lies especially in the large number of components that may be present in a SOA, such as brokers, providers, service registries, clients, monitoring tools, data storage tools, etc. Thus, in order to facilitate the process of conducting functional testing and capacity planning in service-oriented systems, we present PEESOS. This first version is a functional prototype that offers facilities to help researchers and industry test their new applications, allowing participants to collaborate towards an appropriate objective when developing a new application. The first results show that it is possible to make a planning environment that is easy to operate and readily delivers results for the performance evaluation of a target architecture. Since this is a first version of the prototype, it has interface and scalability limitations and needs improvements in the performance of the logs repository and the core engine. We hope that such limitations can be corrected in the near future, including gathering information from the scientific community to make the prototype a useful and accessible tool. PEESOS is online and available at http://peesos.wsarch.lasdpc.icmc.usp.br.
Description: PEESOS is online and available at http://peesos.wsarch.lasdpc.icmc.usp.br.
2015-05-08T10:56:39Z

Providing Quality of Service on Services Selection Using Anycast Techniques
http://hdl.handle.net/2381/32239
Title: Providing Quality of Service on Services Selection Using Anycast Techniques
Authors: Adami, L. J.; Estrella, J. C.; De Oliveira, E. M.; Reiff-Marganiec, Stephan
Abstract: Over recent years, the complexity and variety of services available on the Internet has increased. This has led to the search for efficient techniques for routing client requests to the best available server. One known technique is application layer anycast (ALA). The main goal of this work is to elaborate efficient ways to provide ALA with quality of service in the context of cloud computing. To achieve this goal, we propose a new algorithm, GALA (Global Application Layer Anycast). It inherits characteristics from an existing system, GAA (Global Application-layer Anycast), and uses geolocation as a differentiator. The results of the experiments show that GALA, compared to the inherited algorithm, maintains the efficiency of client requests while substantially lowering their latencies.
2015-05-08T10:48:38Z

A Utility-Aware Runtime Conflict Resolver for Composite Web services
http://hdl.handle.net/2381/32237
Title: A Utility-Aware Runtime Conflict Resolver for Composite Web services
Authors: Ning, X.; Xu, J.; Xu, N.; Reiff-Marganiec, Stephan
Abstract: Web services are developed independently and deployed in distributed environments, and new services can be obtained by composing existing ones. The rapid introduction of new services, however, also results in undesirable interactions between services. These conflicts are not mismatches of interfaces, but usually depend on the data in the executing instance, so conflicts in Web services should be managed at runtime. We study the problem from the perspective of the user's revenue and propose an online approach to resolving conflicts.
2015-05-08T10:40:16Z

Study Case of Restful Frameworks in Raspberry Pi: A Performance and Energy Overview
http://hdl.handle.net/2381/32236
Title: Study Case of Restful Frameworks in Raspberry Pi: A Performance and Energy Overview
Authors: Nunes, L. H.; Nakamura, L. H. V.; De F. Vieira, H.; De O. Libardi, R. M.; De Oliveira, E. M.; Adami, L. J.; Estrella, J. C.; Reiff-Marganiec, Stephan
Abstract: This paper analyzes the execution behavior of web services on devices with limited resources. The experiments compare web services in the Axis2 and CXF frameworks, analyzing performance and power consumption. To determine which framework is better suited for service provision, a testing environment and a performance and energy evaluation between them are presented. We show that the Raspberry Pi can be useful in service-oriented applications for different types of tasks. Bringing together the best features of small devices and SoC, it is possible to provide diverse, mobile and green applications.
2015-05-08T10:34:01Z

Fast Selection of Web Services with QoS using a Distributed Parallel Semantic Approach
http://hdl.handle.net/2381/32235
Title: Fast Selection of Web Services with QoS using a Distributed Parallel Semantic Approach
Authors: Nakamura, L. H. V.; do Prado, P. F.; de O. Libardi, R. M.; Nunes, L. H.; Estrella, J. C.; Santana, R. H. C.; Santana, M. J.; Reiff-Marganiec, Stephan
Abstract: This paper presents a solution to performance issues in the quality-of-service-aware selection of Web services, using techniques of parallelism and mechanisms of inference provided by the Semantic Web. The results point to a significant improvement in the speed of searching Web services, thus making the use of semantic resources viable in distributed systems to provide better quality of service to clients.
2015-05-08T10:25:34Z

Encoding Data Structures
http://hdl.handle.net/2381/32217
Title: Encoding Data Structures
Authors: Raman, Rajeev
Abstract: In recent years, there has been an explosion of interest in succinct data structures, which store the given data in compact or compressed formats and answer queries on the data rapidly while it is still in its compressed format. Our focus in this talk is to introduce encoding data structures. Encoding data structures consider the data together with the queries and aim to store only as much information about the data as is needed to answer the queries. Once this is done, the original data can be deleted. In many cases, one can obtain space-efficient encoding data structures even when the original data is incompressible.
Description: Abstract of invited talk.
2015-05-07T13:20:56Z

Compact Encodings and Indexes for the Nearest Larger Neighbor Problem
http://hdl.handle.net/2381/32215
Title: Compact Encodings and Indexes for the Nearest Larger Neighbor Problem
Authors: Jo, S.; Raman, Rajeev; Satti, S. R.
Abstract: Given a d-dimensional array, for any integer d > 0, the nearest larger value (NLV) query returns the position of the element which is closest, in L1 distance, to the query position, and is larger than the element at the query position. We consider the problem of preprocessing a given array, to construct a data structure that can answer NLV queries efficiently. In the 2D case, given an n × n array A, we give an asymptotically optimal O(n²)-bit encoding that answers NLV queries in O(1) time. When A is a binary array, we describe a simpler O(n²)-bit encoding that also supports NLV queries in O(1) time. Using this, we obtain an index of size O(n²/c) bits that supports NLV queries in O(c) time, for any parameter c, where 1 ≤ c ≤ n, matching the lower bound. For the 1D case we consider the nearest larger right value (NLRV) problem, where the nearest larger value to the right is sought. For an array of length n, we obtain an index that takes O((n/c) log c) bits and supports NLRV queries in O(c) time, for any parameter c, where 1 ≤ c ≤ n, improving the earlier results of Fischer et al. and Jayapaul et al.
2015-05-07T13:05:29Z

Black-Box Test Generation from Inferred Models
http://hdl.handle.net/2381/32206
Title: Black-Box Test Generation from Inferred Models
Authors: Papadopoulos, Petros; Walkinshaw, Neil
Abstract: Automatically generating test inputs for components that lack source code and a specification (i.e., are ‘black-box’) is challenging. One particularly interesting solution to this problem is to use machine learning algorithms to infer testable models from program executions in an iterative cycle. Although the idea has been around for over 30 years, there is little empirical information to inform the choice of suitable learning algorithms, or to show how good the resulting test sets are. This paper presents an openly available framework to facilitate experimentation in this area, and provides a proof-of-concept inference-driven testing framework, along with evidence of the efficacy of its test sets on three programs.
2015-05-07T10:44:39Z

CPT+: Decreasing the time/space complexity of the Compact Prediction Tree
http://hdl.handle.net/2381/32181
Title: CPT+: Decreasing the time/space complexity of the Compact Prediction Tree
Authors: Gueniche, T.; Fournier-Viger, P.; Raman, Rajeev; Tseng, V.
Abstract: Predicting the next items of sequences of symbols has many applications in a wide range of domains. Several sequence prediction models have been proposed, such as DG, All-k-order Markov and PPM. Recently, a model named Compact Prediction Tree (CPT) has been proposed. It relies on a tree structure and a more complex prediction algorithm to offer considerably more accurate predictions than many state-of-the-art prediction models. However, an important limitation of CPT is its high time and space complexity. In this article, we address this issue by proposing three novel strategies to reduce CPT’s size and prediction time, and increase its accuracy. Experimental results on seven real-life datasets show that the resulting model (CPT+) is up to 98 times more compact and 4.5 times faster than CPT, and has the best overall accuracy when compared to six state-of-the-art models from the literature: All-k-order Markov, CPT, DG, LZ78, PPM and TDAG.
Description: Accepted as "REGULAR PAPER WITH LONG PRESENTATION"; 27 of 405 submissions were accepted in this category.
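The prediction task itself can be illustrated with a toy suffix-matching predictor (in the spirit of similarity-based models such as CPT, but emphatically not the CPT+ data structure; the function and its parameters are our own): find every training occurrence of the query's recent suffix and vote on the symbol that follows.

```python
from collections import Counter

def predict_next(training, query, suffix_len=2):
    """Toy next-item predictor: locate each occurrence of the query's
    last `suffix_len` symbols in the training sequences and return the
    most common symbol that follows, or None if the suffix never occurs."""
    suffix = list(query)[-suffix_len:]
    votes = Counter()
    for seq in training:
        # range excludes positions whose match would have no follower
        for i in range(len(seq) - len(suffix)):
            if list(seq[i:i + len(suffix)]) == suffix:
                votes[seq[i + len(suffix)]] += 1
    return votes.most_common(1)[0][0] if votes else None

# The suffix 'b','c' is followed by 'd' twice and by 'e' once, so 'd' wins.
training = ["abcd", "abce", "xbcd"]
print(predict_next(training, "abc"))  # d
```

CPT-style models refine this basic idea with a compressed tree over the training sequences and inverted indexes so that matching does not require a linear scan.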
2015-05-07T10:25:53Z

Encoding Nearest Larger Values
http://hdl.handle.net/2381/32178
Title: Encoding Nearest Larger Values
Authors: Nicholson, P. K.; Raman, Rajeev
Abstract: In nearest larger value (NLV) problems, we are given an array A[1..n] of numbers, and need to preprocess A to answer queries of the following form: given any index i ∈ [1, n], return a “nearest” index j such that A[j] > A[i]. We consider the variant where the values in A are distinct, and we wish to return an index j such that A[j] > A[i] and |j − i| is minimized, the non-directional NLV (NNLV) problem. We consider NNLV in the encoding model, where the array A is deleted after preprocessing, and note that the NNLV encoding problem has an unexpectedly rich structure: the effective entropy (optimal space usage) of the problem depends crucially on details in the definition of the problem. Using a new path-compressed representation of binary trees, which may have other applications, we encode NNLV in 1.9n + o(n) bits, and answer queries in O(1) time.
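The query that the encoding must support can be stated as a brute-force check (illustrative only; the paper's contribution is answering it from a 1.9n + o(n)-bit encoding after A has been deleted, whereas this sketch keeps A around):

```python
def nnlv(A, i):
    """Non-directional nearest larger value: the index j minimising
    |j - i| with A[j] > A[i], or None if A[i] is the maximum.
    Ties are broken towards the smaller index by this scan order."""
    best = None
    for j in range(len(A)):
        if A[j] > A[i] and (best is None or abs(j - i) < abs(best - i)):
            best = j
    return best

A = [3, 1, 4, 2, 5]  # distinct values, as the paper assumes
print(nnlv(A, 3), nnlv(A, 4))  # 2 None
```

Note how tie-breaking had to be fixed by the implementation; the abstract's remark that the effective entropy depends on details of the problem definition refers to exactly such choices.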
2015-05-07T10:24:32Z

MSSF: A step towards user-friendly multi-cloud data dispersal
http://hdl.handle.net/2381/32118
Title: MSSF: A step towards user-friendly multi-cloud data dispersal
Authors: Libardi, R. M. de O.; Bedo, M. V. N.; Reiff-Marganiec, Stephan; Estrella, Julio C.
Abstract: With an increasing number of companies and individuals adopting cloud computing for their data needs, there is naturally a shift in financial and operational costs, and the provided services should meet users’ performance and cost expectations. We focus on storage and propose MSSF, a Multi-cloud Storage Selection Framework. MSSF contains a basic set of algorithms, a set of security rules and a formal definition of user profiles that allows fitting cloud storage services to user needs. Preliminary experiments investigate the cost differences between two baseline algorithms and user profile models. Considering the promising initial results, we provide several observations about the MSSF decision-making process that will help with future improvements.
2015-05-06T12:06:35Z

Tractable Probabilistic μ-Calculus that Expresses Probabilistic Temporal Logics
http://hdl.handle.net/2381/32065
Title: Tractable Probabilistic μ-Calculus that Expresses Probabilistic Temporal Logics
Authors: Castro, P.; Kilmurray, C.; Piterman, Nir
Abstract: We revisit a recently introduced probabilistic μ-calculus and study an expressive fragment of it. By using the probabilistic quantification as an atomic operation of the calculus we establish a connection between the calculus and obligation games. The calculus we consider is strong enough to encode well-known logics such as pCTL and pCTL*. Its game semantics is very similar to the game semantics of the classical μ-calculus (using parity obligation games instead of parity games). This leads to an optimal complexity of NP ∩ coNP for its finite model checking procedure. Furthermore, we investigate a (relatively) well-behaved fragment of this calculus: an extension of pCTL with fixed points. An important feature of this extended version of pCTL is that its model checking is only exponential w.r.t. the alternation depth of fixed points, one of the main characteristics of Kozen's μ-calculus.
Description: 1998 ACM Subject Classification F.4.1 Mathematical Logic
2015-04-24T16:40:02Z

Fairness for Infinite-State Systems
http://hdl.handle.net/2381/31957
Title: Fairness for Infinite-State Systems
Authors: Cook, B.; Khlaaf, H.; Piterman, Nir
Editors: Baier, C.; Tinelli, C.
Abstract: In this paper we introduce the first known tool for symbolically proving fair-CTL properties of (infinite-state) integer programs. Our solution is based on a reduction to existing techniques for fairness-free CTL model checking via the use of infinite nondeterministic branching to symbolically partition fair from unfair executions. We show the viability of our approach in practice using examples drawn from device drivers and algorithms utilizing shared resources.
Description: See also Research Note RN/14/11 UCL Department of Computer Science.
2015-04-08T09:51:07Z

From Communicating Machines to Graphical Choreographies
http://hdl.handle.net/2381/31932
Title: From Communicating Machines to Graphical Choreographies
Authors: Lange, Julien; Tuosto, Emilio; Yoshida, Nobuko
Editors: Rajamani, S. K.; Walker, D.
Abstract: Graphical choreographies, or global graphs, are general multiparty session specifications featuring expressive constructs such as forking, merging, and joining for representing application-level protocols. Global graphs can be directly translated into modelling notations such as BPMN and UML. This paper presents an algorithm whereby a global graph can be constructed from asynchronous interactions represented by communicating finite-state machines (CFSMs). Our results include: a sound and complete characterisation of a subset of safe CFSMs from which global graphs can be constructed; an algorithm to translate CFSMs to global graphs; a time complexity analysis; and an implementation of our theory, as well as an experimental evaluation.
2015-03-27T15:38:12Z

Minimum Spanning Tree Verification under Uncertainty
http://hdl.handle.net/2381/31666
Title: Minimum Spanning Tree Verification under Uncertainty
Authors: Erlebach, Thomas R.; Hoffmann, Michael
Abstract: In the verification under uncertainty setting, an algorithm is given, for each input item, an uncertainty area that is guaranteed to contain the exact input value, as well as an assumed input value. An update of an input item reveals its exact value. If the exact value is equal to the assumed value, we say that the update verifies the assumed value. We consider verification under uncertainty for the minimum spanning tree (MST) problem for undirected weighted graphs, where each edge is associated with an uncertainty area and an assumed edge weight. The objective of an algorithm is to compute the smallest set of updates with the property that, if the updates of all edges in the set verify their assumed weights, the edge set of an MST can be computed. We give a polynomial-time optimal algorithm for the MST verification problem by relating the choices of updates to vertex covers in a bipartite auxiliary graph. Furthermore, we consider an alternative uncertainty setting where the vertices are embedded in the plane, the weight of an edge is the Euclidean distance between the endpoints of the edge, and the uncertainty is about the location of the vertices. An update of a vertex yields the exact location of that vertex. We prove that the MST verification problem in this vertex uncertainty setting is NP-hard. This shows a surprising difference in complexity between the edge and vertex uncertainty settings of the MST verification problem.
2015-02-13T11:59:12Z

Computational complexity of traffic hijacking under BGP and S-BGP
http://hdl.handle.net/2381/31661
Title: Computational complexity of traffic hijacking under BGP and S-BGP
Authors: Chiesa, M.; Di Battista, G.; Patrignani, M.; Erlebach, Thomas
Abstract: Harmful Internet hijacking incidents highlight how fragile the Border Gateway Protocol (BGP) is, which is used to exchange routing information between Autonomous Systems (ASes). As proved by recent research contributions, even S-BGP, the secure variant of BGP that is being deployed, is not fully able to blunt traffic attraction attacks. Given a traffic flow between two ASes, we study how difficult it is for a malicious AS to devise a strategy for hijacking or intercepting that flow. We show that this problem marks a sharp difference between BGP and S-BGP. Namely, while it is solvable, under reasonable assumptions, in polynomial time for the type of attacks that are usually performed in BGP, it is NP-hard for S-BGP. Our study has several byproducts. E.g., we solve a problem left open in the literature, stating when performing a hijacking in S-BGP is equivalent to performing an interception.
2015-02-12T15:21:43Z

Approximation algorithms for disjoint st-paths with minimum activation cost
http://hdl.handle.net/2381/31648
Title: Approximation algorithms for disjoint st-paths with minimum activation cost
Authors: Alqahtani, Hasna Mohsen; Erlebach, Thomas
Abstract: In network activation problems we are given a directed or undirected graph G = (V,E) with a family {f_uv : (u,v) ∈ E} of monotone non-decreasing activation functions from D² to {0,1}, where D is a constant-size domain. The goal is to find activation values x_v for all v ∈ V of minimum total cost ∑_{v ∈ V} x_v such that the activated set of edges satisfies some connectivity requirements. Network activation problems generalize several problems studied in the network literature, such as power optimization problems. We devise an approximation algorithm for the fundamental problem of finding the Minimum Activation Cost Pair of Node-Disjoint st-Paths (MA2NDP). The algorithm achieves approximation ratio 1.5 for both directed and undirected graphs. We show that a ρ-approximation algorithm for MA2NDP with fixed activation values for s and t yields a ρ-approximation algorithm for the Minimum Activation Cost Pair of Edge-Disjoint st-Paths (MA2EDP) problem. We also study the MA2NDP and MA2EDP problems for the special case |D| = 2.
20150211T13:28:34Z

Establishing the Source Code Disruption Caused by Automated Remodularisation Tools
http://hdl.handle.net/2381/31618
Title: Establishing the Source Code Disruption Caused by Automated Remodularisation Tools
Authors: Hall, M.; Khojaye, Muhammad; Walkinshaw, Neil; McMinn, P.
Editors: Poshyvanik, D.; Zaidman, A.
Abstract: Current software remodularisation tools operate only on abstractions of a software system. In this paper, we investigate the actual impact of automated remodularisation on source code, using a tool that automatically applies remodularisations as refactorings. This shows that a typical remodularisation (as computed by the Bunch tool) requires changes to thousands of lines of code, spread throughout the system (typically no code files remain untouched). In a typical multi-developer project this presents a serious integration challenge, and could contribute to the low uptake of such tools in an industrial context. We relate these findings to our ongoing research into techniques that produce iterative, commit-friendly code changes to address this problem.
2015-02-05T14:45:08Z

Minimum activation cost node-disjoint paths in graphs with bounded treewidth
http://hdl.handle.net/2381/31524
Title: Minimum activation cost node-disjoint paths in graphs with bounded treewidth
Authors: Alqahtani, Hasna Mohsen; Erlebach, Thomas
Abstract: In activation network problems we are given a directed or undirected graph G = (V,E) with a family {f_uv : (u,v) ∈ E} of monotone non-decreasing activation functions from D² to {0,1}, where D is a constant-size subset of the non-negative real numbers, and the goal is to find activation values x_v for all v ∈ V of minimum total cost ∑_{v ∈ V} x_v such that the activated set of edges satisfies some connectivity requirements. We propose algorithms that optimally solve the minimum activation cost of k node-disjoint st-paths (st-MANDP) problem in O(tw ((5 + tw)|D|)^(2tw + 2) |V|^3) time and the minimum activation cost of node-disjoint paths (MANDP) problem for k disjoint terminal pairs (s_1,t_1),…,(s_k,t_k) in O(tw ((4 + 3tw)|D|)^(2tw + 2) |V|) time for graphs with treewidth bounded by tw.
2015-01-30T12:01:13Z

Lem: Reusable engineering of real-world semantics
http://hdl.handle.net/2381/29328
Title: Lem: Reusable engineering of real-world semantics
Authors: Mulligan, Dominic P.; Gray, Kathryn E.; Sewell, Peter; Owens, Scott; Ridge, Tom
Abstract: Recent years have seen remarkable successes in rigorous engineering: using mathematically rigorous semantic models (not just idealised calculi) of real-world processors, programming languages, protocols, and security mechanisms, for testing, proof, analysis, and design. Building these models is challenging, requiring experimentation, dialogue with vendors or standards bodies, and validation; their scale adds engineering issues akin to those of programming to the task of writing clear and usable mathematics. But language and tool support for specification is lacking. Proof assistants can be used but bring their own difficulties, and a model produced in one, perhaps requiring many person-years of effort and maintained over an extended period, cannot be used by those familiar with another. We introduce Lem, a language for engineering reusable large-scale semantic models. The Lem design takes inspiration both from functional programming languages and from proof assistants, and Lem definitions are translatable into OCaml for testing, Coq, HOL4, and Isabelle/HOL for proof, and LaTeX and HTML for presentation. This requires a delicate balance of expressiveness, careful library design, and implementation of transformations, akin to compilation, but subject to the constraint of producing usable and human-readable code for each target. Lem's effectiveness is demonstrated by its use in practice. © 2014 ACM.
2014-12-09T10:33:34Z

Simple, efficient, sound and complete combinator parsing for all context-free grammars, using an oracle
http://hdl.handle.net/2381/29327
Title: Simple, efficient, sound and complete combinator parsing for all context-free grammars, using an oracle
Authors: Ridge, Tom
Abstract: Parsers for context-free grammars can be implemented directly and naturally in a functional style known as “combinator parsing”, using recursion following the structure of the grammar rules. Traditionally, parser combinators have struggled to handle all features of context-free grammars, such as left recursion.
Previous work introduced novel parser combinators that could be used to parse all context-free grammars. A parser generator built using these combinators was proved both sound and complete in the HOL4 theorem prover. Unfortunately the performance was not as good as that of other parsing methods such as Earley parsing.
In this paper, we build on this previous work, and combine it in novel ways with existing parsing techniques such as Earley parsing. The result is a sound-and-complete combinator parsing library that can handle all context-free grammars, and has good performance.
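The recursive, rule-following style described above can be illustrated with a minimal sketch. This is not Ridge's library: it is a naive plain-Python rendering in which a parser maps an input and a start position to the set of positions a parse can reach, and the example grammar is deliberately right-recursive, since naive combinators like these loop on left recursion.

```python
def term(c):
    """Parser for a single literal character."""
    def p(s, i):
        return {i + 1} if i < len(s) and s[i] == c else set()
    return p

def seq(p1, p2):
    """Sequence: run p1, then run p2 from every position p1 reached."""
    def p(s, i):
        return {k for j in p1(s, i) for k in p2(s, j)}
    return p

def alt(p1, p2):
    """Alternative: union of both parsers' results."""
    def p(s, i):
        return p1(s, i) | p2(s, i)
    return p

# Right-recursive grammar  E -> '1' '+' E | '1'
def expr(s, i):
    return alt(seq(term('1'), seq(term('+'), expr)), term('1'))(s, i)

result = expr("1+1+1", 0)  # reaching position 5 means the whole input parses
```

Following the grammar rules directly like this is what makes combinator parsing "simple and natural"; handling all context-free grammars efficiently is the hard part the paper addresses.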
Description: timestamp: Tue, 09 Sep 2014 10:57:11 +0200 biburl: http://dblp.uni-trier.de/rec/bib/conf/sle/Ridge14 bibsource: dblp computer science bibliography, http://dblp.org
2014-12-09T10:20:37Z

Succinct indices for range queries with applications to orthogonal range maxima
http://hdl.handle.net/2381/29206
Title: Succinct indices for range queries with applications to orthogonal range maxima
Authors: Farzan, Arash; Munro, J. Ian; Raman, Rajeev
Abstract: We consider the problem of preprocessing N points in 2D, each endowed with a priority, to answer the following queries: given an axis-parallel rectangle, determine the point with the largest priority in the rectangle. Using the ideas of the effective entropy of range maxima queries and succinct indices for range maxima queries, we obtain a structure that uses O(N) words and answers the above query in O(log N log log N) time. This is a direct improvement of Chazelle's result from 1985 [10] for this problem: Chazelle required O(N/ε) words to answer queries in O((log N)[superscript 1+ε]) time for any constant ε > 0.
2014-10-28T10:40:48Z

Succinct representations of ordinal trees
http://hdl.handle.net/2381/29205
Title: Succinct representations of ordinal trees
Authors: Raman, Rajeev; Rao, S. Srinivasa
Abstract: We survey succinct representations of ordinal, or rooted, ordered trees. Succinct representations use space that is close to the appropriate information-theoretic minimum, but support operations on the tree rapidly, usually in O(1) time.
2014-10-28T10:33:36Z

Encodings for range selection and top-k queries
http://hdl.handle.net/2381/29203
Title: Encodings for range selection and top-k queries
Authors: Grossi, Roberto; Iacono, John; Navarro, Gonzalo; Raman, Rajeev; Rao, S. Srinivasa
Abstract: We study the problem of encoding the positions of the top-k elements of an array A[1..n] for a given parameter 1 ≤ k ≤ n. Specifically, for any i and j, we wish to create a data structure that reports the positions of the largest k elements in A[i..j] in decreasing order, without accessing A at query time. This is a natural extension of the well-known encoding range-maxima query problem, where only the position of the maximum in A[i..j] is sought, and it finds applications in document retrieval and ranking. We give (sometimes tight) upper and lower bounds for this problem and some variants thereof.
2014-10-28T10:16:57Z

Discovery of network properties with all-shortest-paths queries
http://hdl.handle.net/2381/29198
Title: Discovery of network properties with all-shortest-paths queries
Authors: Bilò, Davide; Erlebach, Thomas Rainer; Mihalák, Matúš; Widmayer, Peter
Editors: Shvartsman, A. A.; Felber, P.
Abstract: We consider the problem of discovering properties (such as the diameter) of an unknown network G = (V,E) with a minimum number of queries. Initially, only the vertex set V of the network is known. Information about the edges and non-edges of the network can be obtained by querying nodes of the network. A query at a node q ∈ V returns the union of all shortest paths from q to all other nodes in V. We study the problem as an online problem: an algorithm does not initially know the edge set of the network, and has to decide where to make the next query based on the information gathered by previous queries. We study how many queries are needed to discover the diameter, a minimal dominating set, a maximal independent set, the minimum degree, and the maximum degree of the network. We also study the problem of deciding with a minimum number of queries whether the network is 2-edge-connected or 2-vertex-connected. We use the usual competitive analysis to evaluate the quality of online algorithms, i.e., we compare online algorithms with optimum offline algorithms. For all properties except maximal independent set and 2-vertex connectivity we present and analyze online algorithms. Furthermore we show, for all the aforementioned properties, that "many" queries are needed in the worst case. As our query model delivers more information about the network than the measurement heuristics that are currently used in practice, these negative results suggest that a similar behavior can be expected in realistic settings, or in more realistic models derived from the all-shortest-paths query model.
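As a toy illustration of the query model (a sketch under the assumption of an unweighted, undirected graph given as an adjacency list; not the paper's algorithms): a query at q can be simulated with a BFS, because an edge lies on some shortest path from q exactly when its endpoints' BFS distances differ by one.

```python
from collections import deque

def asp_query(adj, q):
    """Simulate an all-shortest-paths query at node q: return the BFS
    distance map and the set of edges lying on some shortest path from q."""
    dist = {q: 0}
    queue = deque([q])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # an edge (u, v) is on a shortest path from q iff |dist[u] - dist[v]| == 1
    edges = {frozenset((u, v)) for u in adj for v in adj[u]
             if abs(dist[u] - dist[v]) == 1}
    return dist, edges

# 4-cycle 0-1-2-3-0: a single query at node 0 already reveals all four
# edges, and max(dist.values()) gives the eccentricity of node 0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
dist, edges = asp_query(adj, 0)
```

One such query pins down the eccentricity of q; the hard (and, per the abstract, often expensive) part is choosing few query nodes whose combined answers determine a global property such as the diameter.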
2014-10-23T15:40:02Z

Asymptotically optimal encodings for range selection
http://hdl.handle.net/2381/29193
Title: Asymptotically optimal encodings for range selection
Authors: Navarro, Gonzalo; Raman, Rajeev; Satti, Srinivasa Rao
Abstract: We consider the problem of preprocessing an array A[1..n] to answer range selection and range top-k queries. Given a query interval [i..j] and a value k, the former query asks for the position of the k-th largest value in A[i..j], whereas the latter asks for the positions of all the k largest values in A[i..j]. We consider the encoding version of the problem, where A is not available at query time, and an upper bound κ on k, the rank that is to be selected, is given at construction time. We obtain data structures with asymptotically optimal size and query time on a RAM model with word size Θ(lg n): our structures use O(n lg κ) bits and answer range selection queries in time O(1 + lg k / lg lg n) and range top-k queries in time O(k), for any k ≤ κ.
Description: The file associated with this record is embargoed until the date of the conference.
2014-10-22T14:00:04Z

Mining state-based models from proof corpora
http://hdl.handle.net/2381/29141
Title: Mining state-based models from proof corpora
Authors: Gransden, Thomas; Walkinshaw, Neil; Raman, Rajeev
Abstract: Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (contain few false-positive sequences) and that representing existing proofs in such a way can be very useful when guiding new ones.
2014-10-07T14:53:42Z

Dynamizing succinct tree representations
http://hdl.handle.net/2381/29103
Title: Dynamizing succinct tree representations
Authors: Joannou, Stelios; Raman, Rajeev
Abstract: We consider succinct, or space-efficient, representations of ordinal trees. Representations exist that take 2n + o(n) bits to represent a static n-node ordinal tree, close to the information-theoretic minimum, and support navigational operations in O(1) time on a RAM model; some implementations also have good practical performance. The situation is different for dynamic ordinal trees. Although there is theoretical work on succinct dynamic ordinal trees, there is little work on the practical performance of these data structures. Motivated by applications to representing XML documents, in this paper we report on a preliminary study of dynamic succinct data structures. Our implementation is based on representing the tree structure as a sequence of balanced parentheses, with navigation done using the min-max tree of Sadakane and Navarro (SODA '10). Our implementation shows promising performance for updates and navigation, and our findings highlight two issues that we believe will be important to future implementations: the difference between the finger model of (say) Farzan and Munro (ICALP '09) and the parenthesis model of Sadakane and Navarro, and the choice of the balanced tree used to represent the min-max tree.
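To make the balanced-parentheses idea concrete, here is a minimal sketch. The nested-list input format is a hypothetical convenience, and the linear-scan find_close stands in for the O(1) min-max-tree navigation of Sadakane and Navarro.

```python
def bp_encoding(tree):
    """Encode an ordinal tree, given as nested lists of children, as a
    balanced-parentheses string: '(' = enter a node, ')' = leave it."""
    return "(" + "".join(bp_encoding(c) for c in tree) + ")"

def find_close(bp, i):
    """Position of the ')' matching the '(' at position i. Linear scan here;
    succinct structures answer this in O(1) using o(n) extra bits."""
    depth = 0
    for j in range(i, len(bp)):
        depth += 1 if bp[j] == "(" else -1
        if depth == 0:
            return j

def subtree_size(bp, i):
    """Number of nodes in the subtree whose '(' is at position i: half the
    length of the enclosed parenthesis segment."""
    return (find_close(bp, i) - i + 1) // 2

bp = bp_encoding([[], [[]]])  # root with two children; the second has one child
# bp == "(()(()))" — 4 nodes encoded in 8 parentheses, i.e. 2 bits per node
```

Updates in the dynamic setting correspond to inserting or deleting a matching parenthesis pair, which is what makes rebalancing the supporting min-max tree the central implementation issue.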
2014-09-18T08:51:47Z

A structured approach to VO reconfigurations through Policies
http://hdl.handle.net/2381/28881
Title: A structured approach to VO reconfigurations through Policies
Authors: ReiffMarganiec, Stephan
Abstract: One of the strengths of Virtual Organisations is their ability to dynamically and rapidly adapt in response to changing environmental conditions. Dynamic adaptability has been studied in other system areas as well, and system management through policies has crystallized as a very prominent solution in system and network administration. However, these areas are often concerned with very low-level technical aspects. Previous work on the APPEL policy language has been aimed at dynamically adapting system behaviour to satisfy end-user demands, and, as part of STPOWLA, APPEL was used to adapt workflow instances at runtime. In this paper we explore how the ideas of APPEL and STPOWLA can be extended from workflows to the wider scope of Virtual Organisations. We use a Travel Booking VO as an example.
2014-05-30T10:52:19Z

Pure Type Systems with Corecursion on Streams: From Finite to Infinitary Normalisation
http://hdl.handle.net/2381/28332
Title: Pure Type Systems with Corecursion on Streams: From Finite to Infinitary Normalisation
Authors: Severi, Paula; de Vries, FerJan
Abstract: In this paper, we use types to ensure that programs involving streams are well-behaved. We extend pure type systems with a type constructor for streams, a modal operator next and a fixed point operator for expressing corecursion. This extension is called Pure Type Systems with Corecursion (CoPTS). The typed lambda calculus for reactive programs defined by Krishnaswami and Benton can be obtained as a CoPTS. CoPTSs allow us to study a wide range of typed lambda calculi extended with corecursion using only one framework. In particular, we study this extension for the calculus of constructions, which is the underlying formal language of Coq. We use the machinery of infinitary rewriting and formalise the idea of well-behaved programs using the concept of infinitary normalisation. The set of finite and infinite terms is defined as a metric completion. We establish a precise connection between the modal operator (• A) and the metric at a syntactic level by relating a variable of type (• A) with the depth of all its occurrences in a term. This syntactic connection between the modal operator and the depth is the key to the proofs of infinitary weak and strong normalisation.
2013-10-28T15:24:43Z

Succinct Representations of Binary Trees for Range Minimum Queries
http://hdl.handle.net/2381/28331
Title: Succinct Representations of Binary Trees for Range Minimum Queries
Authors: Davoodi, Pooya; Raman, Rajeev; Satti, Srinivasa Rao
Abstract: We provide two succinct representations of binary trees that can be used to represent the Cartesian tree of an array A of size n. Both representations take the optimal 2n + o(n) bits of space in the worst case and support range minimum queries (RMQs) in O(1) time. The first is a modification of the representation of Farzan and Munro (SWAT 2008); a consequence of this result is that we can represent the Cartesian tree of a random permutation in 1.92n + o(n) bits in expectation. The second uses a well-known transformation between binary trees and ordinal trees, and ordinal tree operations to effect operations on the Cartesian tree. This provides an alternative, and more natural, way to view the 2D-Min-Heap of Fischer and Heun (SICOMP 2011). Furthermore, we show that the preprocessing needed to output the data structure can be performed in linear time using o(n) bits of extra working space, improving the result of Fischer and Heun, who use n + o(n) bits of working space.
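The Cartesian tree underlying these representations can be built in linear time with a stack; a minimal sketch using a plain parent array rather than the paper's succinct encodings:

```python
def cartesian_tree(A):
    """Build the (min-)Cartesian tree of A: the root is the position of the
    minimum, with left/right subtrees built recursively from the two halves.
    Stack-based construction runs in O(n); returns parent[i] for each index,
    with -1 marking the root."""
    parent = [-1] * len(A)
    stack = []  # indices whose values increase from bottom to top
    for i in range(len(A)):
        last = -1
        while stack and A[stack[-1]] > A[i]:
            last = stack.pop()
        if last != -1:
            parent[last] = i       # last popped node becomes i's left child
        if stack:
            parent[i] = stack[-1]  # i becomes right child of the stack top
        stack.append(i)
    return parent

# RMQ(i, j) on A is the lowest common ancestor of i and j in this tree;
# e.g. for A = [3, 1, 4, 0, 2] the root is index 3, the position of the minimum
parent = cartesian_tree([3, 1, 4, 0, 2])
```

The succinct representations in the paper store exactly this tree shape in 2n + o(n) bits, so that the LCA (and hence the RMQ answer) can be computed in O(1) time without consulting A.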
2013-10-28T13:05:12Z

Dynamic Compressed Strings with Random Access
http://hdl.handle.net/2381/28247
Title: Dynamic Compressed Strings with Random Access
Authors: Grossi, Roberto; Raman, Rajeev; Rao, Satti Srinivasa; Venturini, Rossano
Abstract: We consider the problem of storing a string S in dynamic compressed form, while permitting operations directly on the compressed representation of S: access a substring of S; replace, insert or delete a symbol in S; count how many occurrences of a given symbol appear in any given prefix of S (the rank operation); and locate the position of the i-th occurrence of a symbol inside S (the select operation). We discuss the time complexity of several combinations of these operations along with the entropy space bounds of the corresponding compressed indexes. In this way, we extend or improve the bounds of previous work by Ferragina and Venturini [TCS, 2007], Jansson et al. [ICALP, 2012], and Nekrich and Navarro [SODA, 2013].
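For readers unfamiliar with rank and select, here is a naive uncompressed baseline (illustrative only; the point of the paper is supporting these operations on a compressed, dynamic S in far less space and time):

```python
def rank(S, c, i):
    """Number of occurrences of symbol c in the prefix S[0:i]."""
    return S[:i].count(c)

def select(S, c, k):
    """Position of the k-th occurrence (1-based) of c in S, or -1."""
    seen = 0
    for pos, ch in enumerate(S):
        if ch == c:
            seen += 1
            if seen == k:
                return pos
    return -1

S = "abracadabra"
# rank(S, "a", 5) == 2: 'a' occurs twice in the prefix "abrac"
# select(S, "a", 3) == 5: the third 'a' sits at position 5
```

Note that rank and select are inverses of one another, which is why compressed indexes typically support them as a pair.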
2013-10-04T12:14:04Z

Computing Minimum Spanning Trees with Uncertainty
http://hdl.handle.net/2381/28154
Title: Computing Minimum Spanning Trees with Uncertainty
Authors: Erlebach, Thomas; Hoffmann, Michael; Krizanc, Danny; Mihalák, Matúš; Raman, Rajeev
Editors: Albers, S.; Weil, P.
Abstract: We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge e of the graph only a set Aₑ, called an uncertainty area, that contains the actual edge weight wₑ is known. The algorithm can ‘update’ e to obtain the edge weight wₑ ∈ Aₑ. The task is to output the edge set of a minimum spanning tree after a minimum number of updates.
An algorithm is k-update competitive if it makes at most k times as many updates as the optimum. We present a 2-update competitive algorithm if all areas Aₑ are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas Aₑ is to exclude degenerate inputs for which no constant update competitive algorithm can exist.
Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the minimum spanning tree problem in the vertex uncertainty model. Again, we show that this is best possible among deterministic algorithms.
2013-09-10T13:05:03Z

Inferring Extended Finite State Machine Models from Software Executions
http://hdl.handle.net/2381/28128
Title: Inferring Extended Finite State Machine Models from Software Executions
Authors: Walkinshaw, Neil; Taylor, R.; Derrick, J.
Abstract: The ability to reverse-engineer models of software behaviour is valuable for a wide range of software maintenance, validation and verification tasks. Current reverse-engineering techniques focus either on control-specific behaviour (e.g. in the form of Finite State Machines), or data-specific behaviour (e.g. as pre/post-conditions or invariants). However, typical software behaviour is usually a product of the two; models must combine both aspects to fully represent the software’s operation. Extended Finite State Machines (EFSMs) provide such a model. Although attempts have been made to infer EFSMs, these have been problematic. The models inferred by these techniques can be non-deterministic, and the inference algorithms can be inflexible and applicable only to traces with specific characteristics. This paper presents a novel EFSM inference technique that addresses the problems of inflexibility and non-determinism. It also adapts an experimental technique from the field of Machine Learning to evaluate EFSM inference techniques, and applies it to two open-source software projects.
Description: INSPEC Accession Number: 13916984
2013-09-04T09:00:20Z

Mining Sequential Patterns from Probabilistic Databases
http://hdl.handle.net/2381/28080
Title: Mining Sequential Patterns from Probabilistic Databases
Authors: Muzammal, Muhammad; Raman, Rajeev
Editors: Huang, J.Z.; Cao, L.; Srivastava, J.
Abstract: We consider sequential pattern mining in situations where there is uncertainty about which source an event is associated with. We model this in the probabilistic database framework and consider the problem of enumerating all sequences whose expected support is sufficiently large. Unlike frequent itemset mining in probabilistic databases [C. Aggarwal et al., KDD’09; Chui et al., PAKDD’07; Chui and Kao, PAKDD’08], we use dynamic programming (DP) to compute the probability that a source supports a sequence, and show that this suffices to compute the expected support of a sequential pattern. Next, we embed this DP algorithm into candidate generate-and-test approaches, and explore the pattern lattice both in a breadth-first (similar to GSP) and a depth-first (similar to SPAM) manner. We propose optimizations for efficiently computing the frequent 1-sequences, for reusing previously computed results through incremental support computation, and for eliminating candidate sequences without computing their support via probabilistic pruning. Preliminary experiments show that our optimizations are effective in improving the CPU cost.
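The per-source DP can be sketched in a simplified model (an assumption for illustration: each slot holds one symbol that materialises independently with a given probability; the paper's probabilistic-database framework is more general):

```python
def support_prob(events, pattern):
    """Probability that a source's event sequence contains `pattern` as a
    subsequence. `events` is a list of (symbol, probability) pairs, each
    event occurring independently. dp[j] tracks the probability that a
    greedy left-to-right match has already covered >= j pattern symbols."""
    m = len(pattern)
    dp = [1.0] + [0.0] * m
    for sym, p in events:
        for j in range(m, 0, -1):  # descending, so each event is used at most once
            if pattern[j - 1] == sym:
                # with probability p this event advances a match stuck at j-1
                dp[j] += p * (dp[j - 1] - dp[j])
    return dp[m]

# two independent coin-flip events: P("ab" occurs as a subsequence) = 0.5 * 0.5
prob = support_prob([("a", 0.5), ("b", 0.5)], "ab")
```

Summing support_prob over all sources then gives the expected support of the pattern, by linearity of expectation, which is the quantity the abstract's candidate generate-and-test loop thresholds on.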
Description: Full text of this item is not currently available on the LRA. The final published version may be available through the links above.
2013-07-25T10:38:00Z

Range Extremum Queries
http://hdl.handle.net/2381/28079
Title: Range Extremum Queries
Authors: Raman, Rajeev
Abstract: There has been a renewal of interest in data structures for range extremum queries. In such problems, the input comprises N points, which are either elements of a d-dimensional matrix, that is, their coordinates are specified by the 1D submatrices they lie in (row and column indices for d = 2), or they are points in ℝ[superscript d]. Furthermore, associated with each point is a priority that is independent of the point’s coordinates. The objective is to preprocess the given points and priorities to answer the range maximum query (RMQ): given a d-dimensional rectangle, report the point with maximum priority. The objective is to minimize the space used by the data structure and the time taken to answer the above query. This talk surveys a number of recent developments in this area, focussing on the cases d = 1 and d = 2.
2013-07-25T09:04:54Z

Random Access to GrammarCompressed Strings
http://hdl.handle.net/2381/28052
Title: Random Access to GrammarCompressed Strings
Authors: Bille, Philip; Landau, Gad M.; Raman, Rajeev; Sadakane, Kunihiko; Satti, Srinivasa Rao; Weimann, Oren
Abstract: Let S be a string of length N compressed into a context-free grammar S of size n. We present two representations of S achieving O(log N) random access time, and either O(n · α[subscript k](n)) construction time and space on the pointer machine model, or O(n) construction time and space on the RAM. Here, α[subscript k](n) is the inverse of the k[superscript th] row of Ackermann's function. Our representations also efficiently support decompression of any substring in S: we can decompress any substring of length m in the same complexity as a single random access query plus additional O(m) time. Combining these results with fast algorithms for uncompressed approximate string matching leads to several efficient algorithms for approximate string matching on grammar-compressed strings without decompression. For instance, we can find all approximate occurrences of a pattern P with at most k errors in time O(n(min{|P|k, k[superscript 4] + |P|} + log N) + occ), where occ is the number of occurrences of P in S. Finally, we are able to generalize our results to navigation and other operations on grammar-compressed trees. All of the above bounds significantly improve the currently best known results. To achieve these bounds, we introduce several new techniques and data structures of independent interest, including a predecessor data structure, two "biased" weighted ancestor data structures, and a compact representation of heavy paths in grammars.
2013-07-11T12:05:53Z

Using Evidential Reasoning to Make Qualified Predictions of Software Quality
http://hdl.handle.net/2381/28050
Title: Using Evidential Reasoning to Make Qualified Predictions of Software Quality
Authors: Walkinshaw, Neil
Editors: Wagner, S.
Abstract: Software quality is commonly characterised in a top-down manner. High-level notions such as quality are decomposed into hierarchies of sub-factors, ranging from abstract notions such as maintainability and reliability to lower-level notions such as test coverage or team size. Assessments of abstract factors are derived from relevant sources of information about their respective lower-level sub-factors, by surveying sources such as metrics data and inspection reports. This can be difficult because (1) evidence might not be available, (2) interpretations of the data with respect to certain quality factors may be subject to doubt and intuition, and (3) there is no straightforward means of blending hierarchies of heterogeneous data into a single coherent and quantitative prediction of quality. This paper shows how Evidential Reasoning (ER), a mathematical technique for reasoning about uncertainty and evidence, can address this problem. It enables the quality assessment to proceed in a bottom-up manner, by the provision of low-level assessments that make any uncertainty explicit, and by automatically propagating these up to higher-level 'belief functions' that accurately summarise the developer's opinion and make explicit any doubt or ignorance.
2013-07-04T15:35:16Z

Completeness of Conversion between Reactive Programs for Ultrametric Models
http://hdl.handle.net/2381/27961
Title: Completeness of Conversion between Reactive Programs for Ultrametric Models
Authors: Severi, Paula; de Vries, FerJan
Abstract: In 1970 Friedman proved completeness of beta-eta conversion in the simply-typed lambda calculus for the set-theoretical model. Recently Krishnaswami and Benton have captured the essence of Hudak’s reactive programs in an extension of simply-typed lambda calculus with causal streams and a temporal modality, and provided this typed lambda calculus for reactive programs with a sound ultrametric semantics.
We show that beta-eta conversion in the typed lambda calculus of reactive programs is complete for the ultrametric model.
2013-06-11T10:45:55Z