DSpace Community:
http://hdl.handle.net/2381/316
2016-02-12T07:48:45Z
2016-02-12T07:48:45Z
Positive Fragments Of Coalgebraic Logics
Balan, A.
Kurz, Alexander Herbert
Velebil, J.
http://hdl.handle.net/2381/36575
2016-02-05T03:27:16Z
2016-02-04T12:46:07Z
Title: Positive Fragments Of Coalgebraic Logics
Authors: Balan, A.; Kurz, Alexander Herbert; Velebil, J.
Abstract: Positive modal logic was introduced in an influential 1995 paper of Dunn as the positive fragment of standard modal logic. His completeness result consists of an axiomatization that derives all modal formulas that are valid on all Kripke frames and are built only from atomic propositions, conjunction, disjunction, box and diamond. In this paper, we provide a coalgebraic analysis of this theorem, which not only gives a conceptual proof based on duality theory, but also generalizes Dunn's result from Kripke frames to coalgebras for weak-pullback-preserving functors. To facilitate this analysis we prove a number of category-theoretic results on functors on the categories $\mathsf{Set}$ of sets and $\mathsf{Pos}$ of posets: Every functor $\mathsf{Set} \to \mathsf{Pos}$ has a $\mathsf{Pos}$-enriched left Kan extension $\mathsf{Pos} \to \mathsf{Pos}$. Functors arising in this way are said to have a presentation in discrete arities. In the case that $\mathsf{Set} \to \mathsf{Pos}$ is actually $\mathsf{Set}$-valued, we call the corresponding left Kan extension $\mathsf{Pos} \to \mathsf{Pos}$ its posetification. A $\mathsf{Set}$-functor preserves weak pullbacks if and only if its posetification preserves exact squares. A $\mathsf{Pos}$-functor with a presentation in discrete arities preserves surjections. The inclusion $\mathsf{Set} \to \mathsf{Pos}$ is dense. A functor $\mathsf{Pos} \to \mathsf{Pos}$ has a presentation in discrete arities if and only if it preserves coinserters of 'truncated nerves of posets'. A functor $\mathsf{Pos} \to \mathsf{Pos}$ is a posetification if and only if it preserves coinserters of truncated nerves of posets and discrete posets. A locally monotone endofunctor of an ordered variety has a presentation by monotone operations and equations if and only if it preserves $\mathsf{Pos}$-enriched sifted colimits.
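As a reading aid (our notation, not part of the record): the posetification described in this abstract is the $\mathsf{Pos}$-enriched left Kan extension along the discrete-poset inclusion $D \colon \mathsf{Set} \to \mathsf{Pos}$, which can be sketched with the usual copower/coend presentation of a left Kan extension:

```latex
% Posetification T' of a Set-functor T, as the Pos-enriched left Kan
% extension along D : Set -> Pos (discrete posets). A sketch in our
% own notation, using the standard coend formula for Lan.
T' \;=\; \operatorname{Lan}_{D}(DT), \qquad
T'(P) \;\cong\; \int^{X \in \mathsf{Set}} \mathsf{Pos}(DX, P) \cdot DTX
```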
2016-02-04T12:46:07Z
Presenting Distributive Laws
Bonsangue, M. M.
Hansen, H. H.
Kurz, Alexander Herbert
Rot, J.
http://hdl.handle.net/2381/36574
2016-02-05T03:27:17Z
2016-02-04T12:43:36Z
Title: Presenting Distributive Laws
Authors: Bonsangue, M. M.; Hansen, H. H.; Kurz, Alexander Herbert; Rot, J.
Abstract: Distributive laws of a monad T over a functor F are categorical tools for specifying algebra-coalgebra interaction. They proved to be important for solving systems of corecursive equations, for the specification of well-behaved structural operational semantics and, more recently, also for enhancements of the bisimulation proof method. If T is a free monad, then such distributive laws correspond to simple natural transformations. However, when T is not free it can be rather difficult to prove the defining axioms of a distributive law. In this paper we describe how to obtain a distributive law for a monad with an equational presentation from a distributive law for the underlying free monad. We apply this result to show the equivalence between two different representations of context-free languages.
2016-02-04T12:43:36Z
Obligation Blackwell Games and p-Automata
Piterman, Nir
Chatterjee, Krishnendu
http://hdl.handle.net/2381/36416
2016-01-27T11:09:27Z
2016-01-27T11:09:15Z
Title: Obligation Blackwell Games and p-Automata
Authors: Piterman, Nir; Chatterjee, Krishnendu
Abstract: We generalize winning conditions in two-player games by adding a structural acceptance condition called obligations. Obligations are orthogonal to the linear winning conditions that define whether a play is winning. Obligations are a declaration that player 0 can achieve a certain value from a configuration. If the obligation is met, the value of that configuration for player 0 is 1. We define the value in such games and show that obligation games are determined. For Markov chains with Borel objectives and obligations, and for finite turn-based stochastic parity games with obligations, we give an alternative and simpler characterization of the value function. Based on this simpler definition we show that the decision problem of winning finite turn-based stochastic parity games with obligations is in NP ∩ co-NP. We also show that obligation games provide a game framework for reasoning about p-automata.
Description: The file associated with this record is under a permanent embargo while publication is In Press in accordance with the publisher's self-archiving policy. The full text may be available through the publisher links provided above.
2016-01-27T11:09:15Z
Finding Recurrent Sets with Backward Analysis and Trace Partitioning
Piterman, Nir
Bakhirkin, Alexey
http://hdl.handle.net/2381/36303
2016-01-18T14:58:11Z
2016-01-18T14:57:54Z
Title: Finding Recurrent Sets with Backward Analysis and Trace Partitioning
Authors: Piterman, Nir; Bakhirkin, Alexey
Abstract: We propose an abstract-interpretation-based analysis for recurrent sets. A recurrent set is a set of states from which the execution of a program cannot or (as in our case) might not escape. A recurrent set is part of a program's non-termination proof (which needs to be complemented by a reachability analysis). We find recurrent sets by performing a potentially over-approximate backward analysis that produces an initial candidate. We then perform over-approximate forward analysis on the candidate to check and refine it and to ensure soundness. In practice, the analysis relies on trace partitioning, which predicts future paths through the program that non-terminating executions will take. Using our technique, we were able to find recurrent sets in many benchmarks from the literature, including some that, to our knowledge, cannot be handled by existing tools. In addition, we note that analyses that search for recurrent sets are typically applied to linear under-approximations of programs or employ some form of non-approximate numeric reasoning. In contrast, our analysis uses standard abstract-interpretation techniques and is potentially applicable to a larger class of abstract domains (and therefore of programs).
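The core fixpoint behind a "might not escape" recurrent set can be illustrated on a finite transition system: repeatedly discard states that have no successor left inside the candidate set. This is a hypothetical toy sketch (the state space, `edges`, and function names are ours), not the thesis's abstract-domain implementation.

```python
def recurrent_set(states, succ):
    """Greatest fixpoint: keep the states that still have at least one
    successor inside the current candidate set.  Every state that
    survives admits an infinite execution that never escapes the set."""
    candidate = set(states)
    while True:
        kept = {s for s in candidate if succ(s) & candidate}
        if kept == candidate:
            return candidate
        candidate = kept

# Toy transition system: 0 -> 1 -> 2 -> 1 is a loop, 3 -> 4 is a dead end.
edges = {0: {1}, 1: {2}, 2: {1}, 3: {4}, 4: set()}
print(recurrent_set(edges, lambda s: edges[s]))  # -> {0, 1, 2}
```

States 3 and 4 are pruned in successive iterations because every path from them reaches a state with no successors, while the loop (and the state feeding into it) survives.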
Description: The file is under embargo for 12 months from publication.
2016-01-18T14:57:54Z
Drug target optimization in chronic myeloid leukemia using innovative computational platform.
Chuang, Ryan
Hall, Benjamin A.
Benque, David
Cook, Byron
Ishtiaq, Samin
Piterman, Nir
Taylor, Alex
Vardi, Moshe
Koschmieder, Steffen
Gottgens, Berthold
Fisher, Jasmin
http://hdl.handle.net/2381/36302
2016-01-19T03:08:06Z
2016-01-18T14:25:25Z
Title: Drug target optimization in chronic myeloid leukemia using innovative computational platform.
Authors: Chuang, Ryan; Hall, Benjamin A.; Benque, David; Cook, Byron; Ishtiaq, Samin; Piterman, Nir; Taylor, Alex; Vardi, Moshe; Koschmieder, Steffen; Gottgens, Berthold; Fisher, Jasmin
Abstract: Chronic Myeloid Leukemia (CML) represents a paradigm for the wider cancer field. Despite the fact that tyrosine kinase inhibitors have established targeted molecular therapy in CML, patients often face the risk of developing drug resistance, caused by mutations and/or activation of alternative cellular pathways. To optimize drug development, one needs to systematically test all possible combinations of drug targets within the genetic network that regulates the disease. The BioModelAnalyzer (BMA) is a user-friendly computational tool that allows us to do exactly that. We used BMA to build a CML network-model composed of 54 nodes linked by 104 interactions that encapsulates experimental data collected from 160 publications. While previous studies were limited by their focus on a single pathway or cellular process, our executable model allowed us to probe dynamic interactions between multiple pathways and cellular outcomes, suggest new combinatorial therapeutic targets, and highlight previously unexplored sensitivities to Interleukin-3.
2016-01-18T14:25:25Z
T2: Temporal Property Verification
Brockschmidt, Mark
Cook, Byron
Ishtiaq, Samin
Khlaaf, Heidy
Piterman, Nir
http://hdl.handle.net/2381/36301
2016-01-18T14:14:35Z
2016-01-18T14:14:13Z
Title: T2: Temporal Property Verification
Authors: Brockschmidt, Mark; Cook, Byron; Ishtiaq, Samin; Khlaaf, Heidy; Piterman, Nir
Abstract: We present the open-source tool T2, the first public release from the TERMINATOR project. T2 has been extended over the past decade to support automatic temporal-logic proving techniques and to handle a general class of user-provided liveness and safety properties. Input can be provided in a native format and in C, via the support of the LLVM compiler framework. We briefly discuss T2's architecture and its underlying techniques, and conclude with an experimental illustration of its competitiveness and directions for future extensions.
Description: The file is under embargo for 12 months from publication.
2016-01-18T14:14:13Z
Dynamic Software Project Scheduling through a Proactive-rescheduling Method
Shen, Xiao - Ning
Minku, Leandro L.
Bahsoon, Rami
Yao, Xin
http://hdl.handle.net/2381/36291
2016-01-16T03:12:20Z
2016-01-15T12:18:43Z
Title: Dynamic Software Project Scheduling through a Proactive-rescheduling Method
Authors: Shen, Xiao - Ning; Minku, Leandro L.; Bahsoon, Rami; Yao, Xin
Abstract: Software project scheduling in dynamic and uncertain environments is of significant importance to real-world software development. Yet most studies schedule software projects by considering static and deterministic scenarios only, which may cause performance deterioration or even infeasibility when facing disruptions. In order to capture more dynamic features of software project scheduling than previous work, this paper formulates the project scheduling problem by considering uncertainties and dynamic events that often occur during software project development, and constructs a mathematical model for the resulting Multi-objective Dynamic Project Scheduling Problem (MODPSP), where the four objectives of project cost, duration, robustness and stability are considered simultaneously under a variety of practical constraints. In order to solve MODPSP appropriately, a multi-objective evolutionary algorithm (MOEA)-based proactive-rescheduling method is proposed, which generates a robust schedule predictively and adapts the previous schedule in response to critical dynamic events during the project execution. Extensive experimental results on 21 problem instances, including three instances derived from real-world software projects, show that our novel method is very effective. By introducing the robustness and stability objectives, and incorporating the dynamic optimization strategies specifically designed for MODPSP, our proactive-rescheduling method achieves a very good overall performance in a dynamic environment.
2016-01-15T12:18:43Z
The Handbook of Engineering Self-Aware and Self-Expressive Systems
Chen, Tao
Faniyi, Funmilade
Bahsoon, Rami
Lewis, Peter R.
Yao, Xin
Minku, Leandro L.
Esterle, Lukas
http://hdl.handle.net/2381/36288
2016-02-08T09:56:09Z
2016-01-15T11:24:49Z
Title: The Handbook of Engineering Self-Aware and Self-Expressive Systems
Authors: Chen, Tao; Faniyi, Funmilade; Bahsoon, Rami; Lewis, Peter R.; Yao, Xin; Minku, Leandro L.; Esterle, Lukas
Abstract: When faced with the task of designing and implementing a new self-aware and self-expressive computing system, researchers and practitioners need a set of guidelines on how to use the concepts and foundations developed in the Engineering Proprioception in Computing Systems (EPiCS) project. This report provides such guidelines on how to design self-aware and self-expressive computing systems in a principled way. We have documented different levels of self-awareness and self-expression using architectural patterns. We have also documented common architectural primitives, their possible candidate techniques, and attributes for architecting self-aware and self-expressive systems. Drawing on the knowledge obtained from these investigations, we propose a pattern-driven methodology for engineering self-aware and self-expressive systems to assist in utilising the patterns and primitives during design. The methodology contains detailed guidance for making decisions with respect to the possible design alternatives, providing a systematic way to build self-aware and self-expressive systems. We then qualitatively and quantitatively evaluated the methodology using two case studies. The results reveal that our pattern-driven methodology covers the main aspects of engineering self-aware and self-expressive systems, and that the resulting systems perform significantly better than non-self-aware systems.
2016-01-15T11:24:49Z
Integrating user knowledge into design pattern detection
Alshira'H, Mohammad H.
http://hdl.handle.net/2381/36232
2016-01-09T03:11:56Z
2016-01-08T16:11:49Z
Title: Integrating user knowledge into design pattern detection
Authors: Alshira'H, Mohammad H.
Abstract: Design pattern detection is useful for a range of software comprehension and maintenance tasks. Tools that rely on static or dynamic analysis alone can produce inaccurate results, especially for patterns that rely on run-time information. Some tools provide facilities for the developer to refine the results by adding their own knowledge. Currently, however, the ability of tools to accommodate this knowledge is very limited; it can only pertain to the detected patterns and cannot provide additional knowledge about the source code or about its behaviour. In this thesis, we propose an approach that combines existing pattern detection techniques with a structured feedback mechanism. This enables the developer to refine the detection results by feeding in additional knowledge about pattern implementations and software behaviour. The motivation is that a limited amount of user input can complement the automated detection process to produce results that are more accurate. To evaluate the approach we applied it to a selection of openly available software systems. The evaluation was carried out in two parts. First, an evaluation case study was carried out to detect pattern instances in the selected systems with the help of user knowledge. Second, a user study of a broader range of expert users of design patterns was conducted in order to investigate the impact of their knowledge on the detection process, and to see whether it is realistic that the user can identify useful knowledge for the detection process. The evaluation results indicate that the proposed approach can yield a significant improvement in accuracy whilst requiring a relatively small degree of user input from the developer. Moreover, the results show that expert users can supplement the design pattern detection process with useful feedback that can enhance the detection of design pattern instances in the source code.
2016-01-08T16:11:49Z
An Evidential Reasoning Approach for Assessing Confidence in Safety Evidence
Nair, S.
Walkinshaw, Neil
Kelly, T.
Vara, J.
http://hdl.handle.net/2381/36003
2015-12-08T03:06:36Z
2015-12-07T11:32:28Z
Title: An Evidential Reasoning Approach for Assessing Confidence in Safety Evidence
Authors: Nair, S.; Walkinshaw, Neil; Kelly, T.; Vara, J.
Abstract: Safety cases present the arguments and evidence that can be used to justify the acceptable safety of a system. Many secondary factors such as the tools used, the techniques applied, and the experience of the people who created the evidence, can affect an assessor's confidence in the evidence cited by a safety case. One means of reasoning about this confidence and its inherent uncertainties is to present a 'confidence argument' that explicitly justifies the provenance of the evidence used. In this paper, we propose a novel approach to automatically construct these confidence arguments by enabling assessors to provide individual judgements concerning the trustworthiness and the appropriateness of the evidence. The approach is based on Evidential Reasoning and enables the derivation of a quantified aggregate of the overall confidence. The proposed approach is supported by a prototype tool (EviCA) and has been evaluated using the Technology Acceptance Model.
2015-12-07T11:32:28Z
Combining Time Series Prediction Models Using Genetic Algorithm to Auto-scaling Web Applications Hosted in the Cloud Infrastructure
Messias, V. R.
Estrella, J. C.
Ehlers, R.
Santana, M. J.
Santana, R. C.
Reiff-Marganiec, Stephan
http://hdl.handle.net/2381/35972
2016-01-07T13:58:54Z
2015-12-02T09:22:30Z
Title: Combining Time Series Prediction Models Using Genetic Algorithm to Auto-scaling Web Applications Hosted in the Cloud Infrastructure
Authors: Messias, V. R.; Estrella, J. C.; Ehlers, R.; Santana, M. J.; Santana, R. C.; Reiff-Marganiec, Stephan
Abstract: In a cloud computing environment, companies have the ability to allocate resources according to demand. However, there is a delay, possibly of minutes, between the request for a new resource and that resource being ready for use. This makes reactive techniques, which request a new resource only when the system reaches a certain load threshold, unsuitable for the resource allocation process. To address this problem, it is necessary to predict the requests that will arrive at the system in the next period of time, so that the necessary resources can be allocated before the system becomes overloaded. There are several time series forecasting models that calculate workload predictions based on the history of monitoring data. However, it is difficult to know which time series forecasting model is best in each case. The task becomes even more complicated when the user does not have much historical data to analyze. Most related work considers only single methods to evaluate the results of the forecast. Other works propose an approach that selects suitable forecasting methods for a given context, but in this case it is necessary to have a significant amount of data to train the classifier. Moreover, the best solution may not be a specific model, but rather a combination of models. In this paper we propose an adaptive prediction method that uses genetic algorithms to combine time series forecasting models. Our method does not require a previous training phase, because it constantly adapts as new data arrive. To evaluate our proposal, we use three logs extracted from real Web servers. The results show that our proposal often produces the best result and is generic enough to adapt to various types of time series.
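The idea of combining forecasting models with a genetic algorithm can be sketched minimally: evolve a weight vector over the candidate models so that the weighted combination minimises error on recent observations. All names, operators, and parameters below are our illustrative assumptions, not the paper's actual algorithm or tuning.

```python
import random

def evolve_weights(forecasts, actual, pop=30, gens=60, seed=0):
    """Evolve a convex combination of per-model forecasts that
    minimises squared error against the observed series."""
    rng = random.Random(seed)
    k = len(forecasts)

    def normalise(w):
        s = sum(w) or 1.0
        return [x / s for x in w]

    def error(w):
        return sum((sum(wi * f[t] for wi, f in zip(w, forecasts)) - actual[t]) ** 2
                   for t in range(len(actual)))

    population = [normalise([rng.random() for _ in range(k)]) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=error)          # fittest (lowest error) first
        survivors = population[:pop // 2]   # truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            # Averaging crossover plus small Gaussian mutation.
            child = [(x + y) / 2 + rng.gauss(0, 0.05) for x, y in zip(a, b)]
            children.append(normalise([max(x, 0.0) for x in child]))
        population = survivors + children
    return min(population, key=error)

# Two hypothetical models: one under-, one over-predicts a flat workload.
low, high, true = [8.0] * 10, [12.0] * 10, [10.0] * 10
w = evolve_weights([low, high], true)
```

With these toy series the evolved weights settle near an even split, since 0.5 · 8 + 0.5 · 12 matches the observed load exactly; on real workloads the weights would shift as new monitoring data arrive.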
Description: The file associated with this record is under a 12-month embargo from publication in accordance with the publisher's self-archiving policy, available at http://www.springer.com/gp/open-access/authors-rights/self-archiving-policy/2124. The full text may be available through the publisher links provided above.
2015-12-02T09:22:30Z
How to Make Best Use of Cross-Company Data for Web Effort Estimation?
Minku, Leandro Lei
Sarro, Federica
Mendes, Emilia
Ferrucci, Filomena
http://hdl.handle.net/2381/35961
2015-12-08T03:05:42Z
2015-11-30T10:12:25Z
Title: How to Make Best Use of Cross-Company Data for Web Effort Estimation?
Authors: Minku, Leandro Lei; Sarro, Federica; Mendes, Emilia; Ferrucci, Filomena
Abstract: [Context]: The numerous challenges that can hinder software companies from gathering their own data have motivated over the past 15 years research on the use of cross-company (CC) datasets for software effort prediction. Part of this research focused on Web effort prediction, given the large increase worldwide in the development of Web applications. Some of these studies indicate that it may be possible to achieve better performance using CC models if some strategy to make the CC data more similar to the within-company (WC) data is adopted. [Goal]: This study investigates the use of a recently proposed approach called Dycom to assess to what extent Web effort predictions obtained using CC datasets are effective in relation to the predictions obtained using WC data when explicitly mapping the CC models to the WC context. [Method]: Data on 125 Web projects from eight different companies that are part of the Tukutuku database were used to build prediction models. We benchmarked these models against baseline models (mean and median effort) and a WC base learner that does not benefit from the mapping. We also compared Dycom against a competitive CC approach from the literature (NN-filtering). We report a company-by-company analysis. [Results]: Dycom usually managed to achieve similar or better performance than a WC model while using only half of the WC training data. These results are also an improvement over previous studies that investigated the use of different strategies to adapt CC models to the WC data for Web effort estimation. [Conclusions]: We conclude that the use of Dycom for Web effort prediction is quite promising and in general supports previous results when applying Dycom to conventional software datasets.
Description: Archived in accordance with the publisher's posting policy, available at http://www.ieee.org/publications_standards/publications/rights/rights_policies.html
2015-11-30T10:12:25Z
Query-competitive algorithms for cheapest set problems under uncertainty
Erlebach, Thomas R.
Hoffmann, Michael
Kammer, Frank
http://hdl.handle.net/2381/35960
2015-11-28T03:03:31Z
2015-11-27T16:53:41Z
Title: Query-competitive algorithms for cheapest set problems under uncertainty
Authors: Erlebach, Thomas R.; Hoffmann, Michael; Kammer, Frank
Description: This file is under embargo for 18 months from first date of publication.
2015-11-27T16:53:41Z
Optimal control of two distributed parameter systems.
Williams, V (Vaughan)
http://hdl.handle.net/2381/34933
2015-11-19T09:02:28Z
2015-11-19T09:02:28Z
Title: Optimal control of two distributed parameter systems.
Authors: Williams, V (Vaughan)
Abstract: Part I. A heat conduction problem is outlined, where the state of the system under consideration is described by a linear parabolic partial differential equation in one space dimension, with bounded controlling boundary conditions. A cost functional is defined in terms of an integral involving some specified "target function". Integral expressions describing the evolution of the system under the influence of various initial and boundary conditions are given; Butkovskii's Maximum Principle is formally applied to modified forms of these expressions, to give equations determining the optimal control function. Pontryagin's Maximum Principle is applied to a discrete approximation to the distributed system, and a connection is demonstrated between the optimal control thus obtained and that for the distributed system. A further connection is demonstrated with the solution obtained after applying calculus of variations techniques to a notionally unconstrained version of the problem. Difficulties of numerical solution are discussed, and the need for further constraints is pointed out. Solutions to some numerical examples are given. Part II. Equations governing one-dimensional non-turbulent compressible fluid flow in a pipe are given, and the waterhammer problem is described. The method known as "valve stroking" is outlined for a linearized version of this problem. A necessary condition on the optimal solution to the linearized problem is demonstrated, and it is shown that valve stroking does not satisfy this condition, and produces neither a time-optimal nor a mass-flux-optimal solution. A calculus of variations method applied to the non-linear model is shown to produce a non-linear hyperbolic split boundary value problem. A numerical approach, based on iterative gradient methods, is described and some numerical results are presented. Computer programmes for the numerical methods described in parts I and II are included as Appendices.
2015-11-19T09:02:28Z
Markov processes in adaptive control.
Longley, D.
http://hdl.handle.net/2381/34932
2015-11-19T09:02:27Z
2015-11-19T09:02:27Z
Title: Markov processes in adaptive control.
Authors: Longley, D.
Abstract: The self-adaptive control system continually seeks to improve the performance of the process by making certain parameter adjustments. This improvement is achieved by "hill-climbing" techniques which test the effect of parameter adjustments on the performance and then make computed adjustments towards optimum conditions. The application of these techniques is complicated by the effects of the process dynamics and "noise" in performance index measurements. This thesis is concerned with the study of "hill-climbing" strategies that will minimise the effect of performance index noise. Markov process techniques were applied to a sampled-data, unidimensional search over a parabolic curve and the effects of process dynamics were ignored. The result of incorporating a dead-zone filter in an optimising loop was analysed for a stepped-rectangular noise probability density function with a stationary and drifting optimum. A strategy which estimated the gradient by measuring the performance index difference over two successive values of the parameter (instead of perturbation techniques) was analysed but no conclusive results were obtained. Finally a supervisory loop, to adjust the gain of the optimising loop, was postulated and analysed. Digital and analogue computer studies were performed to compare the above strategies and to obtain an insight into the practical problems associated with the implementation of the optimising loop.
2015-11-19T09:02:27Z
The structure of binary molten salt mixtures: A neutron diffraction study.
Badyal, Yaspal Singh.
http://hdl.handle.net/2381/34931
2015-11-20T03:37:07Z
2015-11-19T09:02:25Z
Title: The structure of binary molten salt mixtures: A neutron diffraction study.
Authors: Badyal, Yaspal Singh.
Abstract: Structural modification in a series of polyvalent metal chloride - alkali chloride binary molten salt mixtures has been investigated using the pulsed neutron diffraction technique. Structure factors have been measured for NiCl2-KCl, NiCl2-LiCl and ZnCl2-LiCl samples spanning the entire composition range. The key finding was that the degree of structural modification is dependent on the relative size and polarising power of the two cation species. The mixtures of NiCl2 and ZnCl2 with LiCl largely appear to be admixtures of the two pure salt structures, whereas adding KCl to NiCl2 results in a better ordered, more regularly tetrahedral local structure around the metal cation and enhancement of the first sharp diffraction peak (FSDP). A simple model involving charge ordering of discrete tetrahedral units by alkali counter-ions is proposed as an explanation for the enhanced intermediate range order. In order to identify some of the partial structure factor contributions to the enhanced FSDP, the scattering was measured for three isotopically-enriched NiCl2+2KCl samples. A complementary isotopic substitution experiment was performed on three ZnCl2+2KCl samples. The results generally confirm the findings of the composition study, with a strong similarity between the two molten salt systems also being evident. In addition, RMC modelling supports the proposed model for intermediate range order in the mixtures. Structure factors were also measured for AlCl3-LiCl and AlCl3-NaCl samples covering the entire composition range. Several features consistent with strong charge ordering of discrete tetrahedral units by alkali counter-ions were identified. In addition, RMC modelling of the data for pure AlCl3 strongly challenges the 'established' view of the structure and an alternative 'sparse network liquid' model is proposed which emphasises the similarity to ZnCl2.
2015-11-19T09:02:25Z
Genetic variation in the response of mice to xenobiotics, in vitro.
Arranz, Maria J.
http://hdl.handle.net/2381/34930
2015-11-20T03:37:00Z
2015-11-19T09:02:23Z
Title: Genetic variation in the response of mice to xenobiotics, in vitro.
Authors: Arranz, Maria J.
Abstract: Adverse reactions to drugs and environmental chemicals are a serious problem with up to 30% of hospital patients experiencing such problems (Venning, 1983; Ludwig and Axelsen, 1983). There is evidence that many adverse reactions arise as a result of genetically controlled sensitivity (Festing, 1987). Large genetically determined differences in response to chemicals have also been recorded in laboratory animals. However, most toxicological screening involves a single strain and fails to detect genetically determined sensitivity. Should some animals show an adverse reaction, this is usually attributed to "biological variation". As the pedigree of such animals is not normally known at the time of use there is no way of showing whether these adverse reactions were inherited (Festing, 1975, 1979). The initial aim of this project was to develop a technique for studying genetic variation in sensitivity to treatment with drugs using in vitro screening methods. The techniques should not require hazardous or expensive chemicals and equipment, should require a small number of animals and should be reliable and easy to perform. Several end-points were studied, and a protocol for detecting genetic differences which included four end-points and two cell types was developed. In the second part of the project, the aim was to study genetic variation in sensitivity to Aspirin, Ethanol and Coumarin as model compounds, using the previously developed techniques in conjunction with suitable genetically-defined strains of mice. Two cell types (macrophages and hepatocytes) were studied and several end-points were used including neutral red uptake, total protein concentration, rate of phagocytosis and LDH activity in cells and supernatant. The study involved nine strains of mice. Although statistically significant differences among inbred strains were detected, in no experiment did the strain distribution pattern suggest single-locus Mendelian control. There was no evidence that response to coumarin depended on the coumarin hydroxylase (Cyp2b) locus nor that response to alcohol depended on the alcohol dehydrogenase locus. It is concluded that further development would be necessary to develop these methods as a way of identifying genes associated with this type of genetic variation.
2015-11-19T09:02:23Z
Fast Data Processing in Hyper-Scale Systems
Tilly, Marcel
http://hdl.handle.net/2381/33565
2015-11-17T03:01:00Z
2015-11-16T15:53:25Z
Title: Fast Data Processing in Hyper-Scale Systems
Authors: Tilly, Marcel
Abstract: The deluge of intelligent objects that provide continuous access to data and services on the one hand, and the demand of developers and consumers to handle these data on the other, requires us to think about new communication paradigms and middleware. Based on requirements collected from connected-car, social-network, and factory-of-the-future scenarios, this thesis develops new concepts for fast data processing in hyper-scale systems. In hyper-scale systems, such as the Internet of Things, one emerging requirement is to process, procure, and provide information with almost zero latency. This thesis introduces new concepts for a middleware that enables fast communication by limiting information flow with filtering concepts using event policy obligations, combined with data processing techniques adopted from complex event processing. Fast data processing has to deal with continuous streams of events, providing a set of operators to manipulate, aggregate, and correlate data. This processing logic needs to be distributed. Distribution helps us to scale on one hand in terms of the number of data sources (e.g. phones, cars, sensors) and on the other hand to parallelise processing in terms of grouping and partitions (e.g. regional). In our solution, event policies are injected as close as possible to the place where the data is born, in order to optimise traffic. Filters, aggregations and rules help to process the data accordingly. Finally, communication paradigms or interaction patterns support mediation between classical service-based request-response interaction and event-based data exchange. All of this together builds a middleware enabling fast data processing for hyper-scale systems.
2015-11-16T15:53:25Z
Maximizing Throughput in Energy-Harvesting Sensor Nodes
Fung, Ping Yuen
http://hdl.handle.net/2381/33484
2016-02-04T02:45:11Z
2015-11-03T12:15:43Z
Title: Maximizing Throughput in Energy-Harvesting Sensor Nodes
Authors: Fung, Ping Yuen
Abstract: We consider an online throughput maximization problem in sensor nodes that can harvest energy. The sensor nodes generate and forward packets, which cost energy; they can also harvest energy from the environment, but the amount of energy that can be harvested is not known in advance. We give a number of algorithms and lower bounds for the case of a single node. We consider both the general case and some types of ‘non-idling’ adversaries where we can get better bounds. We also consider the case of networks with multiple nodes and demonstrate that some very simple scenarios already admit no competitive algorithms.
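The single-node setting can be made concrete with a toy simulation. The model below (an explicit event list, an energy cap, and a greedy "forward whenever energy suffices" rule) is an illustrative assumption, not the paper's adversarial online framework or its competitive algorithms:

```python
def greedy_throughput(events, capacity):
    """Toy single-node model: each event is ('harvest', amount) or
    ('packet', cost). Energy is capped at `capacity`; a greedy node
    forwards a packet whenever stored energy covers its cost.
    Returns the number of packets forwarded."""
    energy, sent = 0, 0
    for kind, amount in events:
        if kind == 'harvest':
            energy = min(capacity, energy + amount)  # battery saturates
        elif kind == 'packet' and energy >= amount:
            energy -= amount
            sent += 1
    return sent
```

Running this on a short trace shows why future harvests being unknown matters: a greedy node may drain its battery on early packets and miss later ones.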
2015-11-03T12:15:43Z
Performance Evaluation of Resource Management in Cloud Computing Environments
Batista, B. G.
Estrella, J. C.
Ferreira, C. H. G.
Filho, D. M. L.
Nakamura, L. H. V.
Reiff-Marganiec, Stephan
Santana, M. J.
Santana, R. H. C.
http://hdl.handle.net/2381/33451
2015-11-03T03:00:39Z
2015-11-02T11:44:33Z
Title: Performance Evaluation of Resource Management in Cloud Computing Environments
Authors: Batista, B. G.; Estrella, J. C.; Ferreira, C. H. G.; Filho, D. M. L.; Nakamura, L. H. V.; Reiff-Marganiec, Stephan; Santana, M. J.; Santana, R. H. C.
Abstract: Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be processed. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client, by ensuring quality of service, and the provider, by ensuring the best use of resources at a fair price.
2015-11-02T11:44:33Z
HybridLF: a system for reasoning in higher-order abstract syntax
Furniss, Amy Elizabeth
http://hdl.handle.net/2381/33362
2015-10-23T02:01:09Z
2015-10-22T15:25:23Z
Title: HybridLF: a system for reasoning in higher-order abstract syntax
Authors: Furniss, Amy Elizabeth
Abstract: In this thesis we describe two new systems for reasoning about deductive systems: HybridLF and Canonical HybridLF.
HybridLF brings together the Hybrid approach (due to Ambler, Crole and Momigliano [15]) to higher-order abstract syntax (HOAS) in Isabelle/HOL with the logical framework LF, a dependently-typed system for proving theorems about logical systems. Hybrid provides a version of HOAS in the form of the lambda calculus, in which Isabelle functions are automatically converted to a nameless de Bruijn representation. Hybrid allows untyped expressions to be entered as human-readable functions, which are then translated into the machine-friendly de Bruijn form. HybridLF uses and updates these techniques for variable representation in the context of the dependent type theory LF, providing a version of HOAS in the form of LF.
Canonical HybridLF unites the variable representation techniques of Hybrid with Canonical LF, in which all terms are in canonical form and definitional equality is reduced to syntactic equality. We extend the Hybrid approach to HOAS to functions with multiple variables by introducing a family of abstraction functions, and use the Isabelle option type to denote errors instead of including an ERR element in the Canonical HybridLF expression type.
In both systems we employ the meta-logic M2 to prove theorems about deductive systems. M2 [28] is a first-order logic in which quantifiers range over the objects and types generated by an LF signature (that encodes a deductive system). As part of the implementation of M2 we explore higher-order unification in LF, adapting existing approaches to work in our setting.
2015-10-22T15:25:23Z
Algorithms for Büchi Games
Chatterjee, K.
Henzinger, Thomas A.
Piterman, Nir
http://hdl.handle.net/2381/33349
2015-10-22T02:00:41Z
2015-10-21T10:59:42Z
Title: Algorithms for Büchi Games
Authors: Chatterjee, K.; Henzinger, Thomas A.; Piterman, Nir
Abstract: The classical algorithm for solving Büchi games requires time $O(n\cdot m)$ for game graphs with $n$ states and $m$ edges. For game graphs with constant outdegree, the best known algorithm has running time $O(n^2/\log n)$. We present two new algorithms for Büchi games. First, we give an algorithm that performs at most $O(m)$ more work than the classical algorithm, but runs in time $O(n)$ on infinitely many graphs of constant outdegree on which the classical algorithm requires time $O(n^2)$. Second, we give an algorithm with running time $O(n\cdot m\cdot\log\delta(n)/\log n)$, where $1\le\delta(n)\le n$ is the outdegree of the game graph. Note that this algorithm performs asymptotically better than the classical algorithm if $\delta(n)=O(\log n)$.
Description: 11 Pages, Published in GDV 06 (Games in Design and Verification)
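For reference, the classical baseline the abstract compares against can be sketched as alternating attractor computations. The explicit node/edge representation below is an illustrative assumption; this is the well-known $O(n\cdot m)$ recurrence, not the paper's two improved algorithms:

```python
def attractor(nodes, edges, owner, target, player):
    """States from which `player` can force a visit to `target`:
    own nodes need one successor in the set, opponent nodes need all."""
    preds = {v: [] for v in nodes}
    count = {v: 0 for v in nodes}       # successors not yet attracted
    for u, v in edges:
        preds[v].append(u)
        count[u] += 1
    attr = set(target)
    frontier = list(attr)
    while frontier:
        v = frontier.pop()
        for u in preds[v]:
            if u in attr:
                continue
            if owner[u] == player:
                attr.add(u); frontier.append(u)
            else:
                count[u] -= 1
                if count[u] == 0:       # every successor is attracted
                    attr.add(u); frontier.append(u)
    return attr

def solve_buchi(nodes, edges, owner, buchi):
    """Classical recurrence: repeatedly remove the player-1 attractor of
    states from which player 0 cannot reach the Buchi set.
    Returns (player-0 winning region, player-1 winning region)."""
    nodes, buchi = set(nodes), set(buchi)
    win1 = set()
    while True:
        sub = [(u, v) for u, v in edges if u in nodes and v in nodes]
        reach = attractor(nodes, sub, owner, buchi & nodes, 0)
        avoid = nodes - reach           # player 1 avoids buchi forever here
        if not avoid:
            return nodes, win1
        trap = attractor(nodes, sub, owner, avoid, 1)
        win1 |= trap
        nodes -= trap
```

On a three-state game where player 1 can trap play in a non-Büchi self-loop, the recurrence assigns everything to player 1; adding one escape edge flips the whole game to player 0.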
2015-10-21T10:59:42Z
Encoding 2-D Range Maximum Queries
Golin, M. J.
Iacono, J.
Krizanc, D.
Raman, Rajeev
Rao, S. S.
Shende, S.
http://hdl.handle.net/2381/33340
2016-01-07T14:04:12Z
2015-10-21T10:36:44Z
Title: Encoding 2-D Range Maximum Queries
Authors: Golin, M. J.; Iacono, J.; Krizanc, D.; Raman, Rajeev; Rao, S. S.; Shende, S.
Abstract: We consider the two-dimensional range maximum query (2D-RMQ) problem: given a two-dimensional array $A$ of ordered values, to pre-process it so that we can find the position of the largest element in the sub-matrix defined by a (user-specified) range of rows and range of columns. We focus on determining the effective entropy of 2D-RMQ, i.e., how many bits are needed to encode $A$ so that 2D-RMQ queries can be answered without access to $A$. We give tight upper and lower bounds on the expected effective entropy for the case when $A$ contains independent identically-distributed random values, and new upper and lower bounds for arbitrary $A$, for the case when $A$ contains few rows. The latter results improve upon previous upper and lower bounds by Brodal et al. (ESA 2010). In some cases we also give data structures whose space usage is close to the effective entropy and answer 2D-RMQ queries rapidly.
Description: The file associated with this record is under a 24-month embargo from publication in accordance with the publisher's self-archiving policy, available at http://www.elsevier.com/about/company-information/policies/sharing. The full text may be available in the publisher links provided above.
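A brute-force query makes the problem statement concrete. Unlike the encodings studied in the paper, this baseline answers queries with full access to $A$ (and in time proportional to the rectangle size):

```python
def rmq2d(A, r1, r2, c1, c2):
    """Position (row, col) of the largest element in the sub-matrix of A
    spanned by rows r1..r2 and columns c1..c2 (inclusive), by exhaustive scan."""
    best = (r1, c1)
    for i in range(r1, r2 + 1):
        for j in range(c1, c2 + 1):
            if A[i][j] > A[best[0]][best[1]]:
                best = (i, j)
    return best
```

An encoding in the paper's sense must reproduce exactly this answer for every query rectangle while storing only a compressed summary of $A$, not $A$ itself.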
2015-10-21T10:36:44Z
Detecting and Refactoring Operational Smells within the Domain Name System
Radwan, Marwan
Heckel, Reiko
http://hdl.handle.net/2381/33171
2015-10-03T02:10:10Z
2015-10-02T09:47:21Z
Title: Detecting and Refactoring Operational Smells within the Domain Name System
Authors: Radwan, Marwan; Heckel, Reiko
Abstract: The Domain Name System (DNS) is one of the most important components of the Internet infrastructure. DNS relies on a delegation-based architecture, where resolution of names to their IP addresses requires resolving the names of the servers responsible for those names. The recursive structures of the interdependencies that exist between name servers associated with each zone are called dependency graphs. System administrators' operational decisions have far-reaching effects on the qualities of the DNS. They need to be soundly made to create a balance between the availability, security and resilience of the system. We utilize dependency graphs to identify, detect and catalogue operational bad smells. Our method deals with smells on a high level of abstraction using a consistent taxonomy and reusable vocabulary, defined by a DNS Operational Model. The method will be used to build a diagnostic advisory tool that will detect configuration changes that might decrease the robustness or security posture of domain names before they go into production.
Description: In Proceedings GaM 2015, arXiv:1504.02448
2015-10-02T09:47:21Z
The Rabin index of parity games
Huth, M
Kuo, JHP
Piterman, N
http://hdl.handle.net/2381/33162
2015-10-02T02:00:51Z
2015-10-01T15:23:39Z
Title: The Rabin index of parity games
Authors: Huth, M; Kuo, JHP; Piterman, N
Abstract: We study the descriptive complexity of parity games by taking into account the coloring of their game graphs whilst ignoring their ownership structure. Colored game graphs are identified if they determine the same winning regions and strategies, for all ownership structures of nodes. The Rabin index of a parity game is the minimum of the maximal color taken over all equivalent coloring functions. We show that deciding whether the Rabin index is at least k is in PTIME for k=1 but NP-hard for all fixed k > 1. We present an EXPTIME algorithm that computes the Rabin index by simplifying its input coloring function. When replacing simple cycle with cycle detection in that algorithm, its output over-approximates the Rabin index in polynomial time. Experimental results show that this approximation yields good values in practice.
Description: In Proceedings GandALF 2013, arXiv:1307.4162
2015-10-01T15:23:39Z
Obligation Blackwell Games and p-Automata
Piterman, Nir
Chatterjee, Krishnendu
http://hdl.handle.net/2381/33160
2015-10-02T02:00:49Z
2015-10-01T15:17:15Z
Title: Obligation Blackwell Games and p-Automata
Authors: Piterman, Nir; Chatterjee, Krishnendu
Abstract: We recently introduced p-automata, automata that read discrete-time Markov chains. We used turn-based stochastic parity games to define acceptance of Markov chains by a subclass of p-automata. Definition of acceptance required a cumbersome and complicated reduction to a series of turn-based stochastic parity games. The reduction could not support acceptance by general p-automata, which was left undefined as there was no notion of games that supported it.
Here we generalize two-player games by adding a structural acceptance condition called obligations. Obligations are orthogonal to the linear winning conditions that define winning. Obligations are a declaration that player 0 can achieve a certain value from a configuration. If the obligation is met, the value of that configuration for player 0 is 1.
One cannot define value in obligation games by the standard mechanism of considering the measure of winning paths on a Markov chain and taking the supremum of the infimum over all strategies, mainly because obligations need to be defined even for Markov chains, and the nature of obligations has the flavor of an infinite nesting of supremum and infimum operators. We define value via a reduction to turn-based games similar to Martin's proof of determinacy of Blackwell games with Borel objectives. Based on this definition, we show that the games are determined. We show that for Markov chains with Borel objectives and obligations, and for finite turn-based stochastic parity games with obligations, there exists an alternative and simpler characterization of the value function. Based on this simpler definition we give an exponential-time algorithm to analyze finite turn-based stochastic parity games with obligations. Finally, we show that obligation games provide the necessary framework for reasoning about p-automata and that they generalize the previous definition.
2015-10-01T15:17:15Z
Fatal Attractors in Parity Games: Building Blocks for Partial Solvers
Huth, M.
Kuo, J. H. P.
Piterman, Nir
http://hdl.handle.net/2381/33159
2015-10-02T02:00:50Z
2015-10-01T15:12:37Z
Title: Fatal Attractors in Parity Games: Building Blocks for Partial Solvers
Authors: Huth, M.; Kuo, J. H. P.; Piterman, Nir
Abstract: Attractors in parity games are a technical device for solving "alternating" reachability of given node sets. A well known solver of parity games - Zielonka's algorithm - uses such attractor computations recursively. We here propose new forms of attractors that are monotone in that they are aware of specific static patterns of colors encountered in reaching a given node set in alternating fashion. Then we demonstrate how these new forms of attractors can be embedded within greatest fixed-point computations to design solvers of parity games that run in polynomial time but are partial in that they may not decide the winning status of all nodes in the input game. Experimental results show that our partial solvers completely solve benchmarks that were constructed to challenge existing full solvers. Our partial solvers also have encouraging run times in practice. For one partial solver we prove that its run-time is at most cubic in the number of nodes in the parity game, that its output game is independent of the order in which monotone attractors are computed, and that it solves all Büchi games and weak games. We then define and study a transformation that converts partial solvers into more precise partial solvers, and we prove that this transformation is sound under very reasonable conditions on the input partial solvers. Noting that one of our partial solvers meets these conditions, we apply its transformation on 1.6 million randomly generated games and so experimentally validate that the transformation can be very effective in increasing the precision of partial solvers.
2015-10-01T15:12:37Z
Modeling and reasoning over distributed systems using aspect-oriented graph grammars
Machado, Rodrigo
Heckel, Reiko
Ribeiro, Leila
http://hdl.handle.net/2381/33145
2015-10-02T02:00:52Z
2015-10-01T11:02:25Z
Title: Modeling and reasoning over distributed systems using aspect-oriented graph grammars
Authors: Machado, Rodrigo; Heckel, Reiko; Ribeiro, Leila
Abstract: Aspect-orientation is a relatively new paradigm that introduces abstractions to modularize the implementation of system-wide policies. It is based on a composition operation, called aspect weaving, that implicitly modifies a base system by performing related changes within the system modules. Aspect-oriented graph grammars (AOGG) extend the classic graph grammar formalism by defining aspects as sets of rule-based modifications over a base graph grammar. Despite the advantages of aspect-oriented concepts regarding modularity, the implicit nature of the aspect weaving operation may also introduce issues when reasoning about the system behavior. Since in AOGGs aspect weaving is characterized by means of rule-based rewriting, we can overcome these problems by using known analysis techniques from the graph transformation literature to study aspect composition. In this paper, we present a case study of a distributed client-server system with global policies, modeled as an aspect-oriented graph grammar, and discuss how to use the AGG tool to identify potential conflicts in aspect weaving.
2015-10-01T11:02:25Z
Towards an embedding of Graph Transformation in Intuitionistic Linear Logic
Torrini, Paolo
Heckel, Reiko
http://hdl.handle.net/2381/33144
2015-10-02T02:00:52Z
2015-10-01T10:46:22Z
Title: Towards an embedding of Graph Transformation in Intuitionistic Linear Logic
Authors: Torrini, Paolo; Heckel, Reiko
Abstract: Linear logics have been shown to be able to embed both rewriting-based approaches and process calculi in a single, declarative framework. In this paper we are exploring the embedding of double-pushout graph transformations into quantified linear logic, leading to a Curry-Howard style isomorphism between graphs and transformations on one hand, formulas and proof terms on the other. With linear implication representing rules and reachability of graphs, and the tensor modelling parallel composition of graphs and transformations, we obtain a language able to encode graph transformation systems and their computations as well as reason about their properties.
2015-10-01T10:46:22Z
From nondeterministic Büchi and Streett automata to deterministic parity automata
Piterman, Nir
http://hdl.handle.net/2381/33113
2015-09-29T02:00:43Z
2015-09-28T10:47:34Z
Title: From nondeterministic Büchi and Streett automata to deterministic parity automata
Authors: Piterman, Nir
Abstract: In this paper we revisit Safra's determinization constructions. We show how to construct deterministic automata with fewer states and, most importantly, parity acceptance conditions. Specifically, starting from a nondeterministic Büchi automaton with $n$ states our construction yields a deterministic parity automaton with $n^{2n+2}$ states and index $2n$ (instead of a Rabin automaton with $(12)^n n^{2n}$ states and $n$ pairs). Starting from a nondeterministic Streett automaton with $n$ states and $k$ pairs our construction yields a deterministic parity automaton with $n^{n(k+2)+2}(k+1)^{2n(k+1)}$ states and index $2n(k+1)$ (instead of a Rabin automaton with $(12)^{n(k+1)} n^{n(k+2)}(k+1)^{2n(k+1)}$ states and $n(k+1)$ pairs). The parity condition is much simpler than the Rabin condition. In applications such as solving games and emptiness of tree automata, handling the Rabin condition involves an additional multiplier of $n^2 n!$ (or $(n(k+1))^2 (n(k+1))!$ in the case of Streett) which is saved using our construction.
2015-09-28T10:47:34Z
Cell-cycle regulation of NOTCH signaling during C. elegans vulval development.
Nusser-Stein, Stefanie
Beyer, Antje
Rimann, Ivo
Adamczyk, Magdalene
Piterman, Nir
Hajnal, Alex
Fisher, Jasmin
http://hdl.handle.net/2381/33107
2015-09-26T02:00:42Z
2015-09-25T15:22:42Z
Title: Cell-cycle regulation of NOTCH signaling during C. elegans vulval development.
Authors: Nusser-Stein, Stefanie; Beyer, Antje; Rimann, Ivo; Adamczyk, Magdalene; Piterman, Nir; Hajnal, Alex; Fisher, Jasmin
Abstract: C. elegans vulval development is one of the best-characterized systems to study cell fate specification during organogenesis. The detailed knowledge of the signaling pathways determining vulval precursor cell (VPC) fates permitted us to create a computational model based on the antagonistic interactions between the epidermal growth factor receptor (EGFR)/RAS/MAPK and the NOTCH pathways that specify the primary and secondary fates, respectively. A key notion of our model is called bounded asynchrony, which predicts that a limited degree of asynchrony in the progression of the VPCs is necessary to break their equivalence. While searching for a molecular mechanism underlying bounded asynchrony, we discovered that the termination of NOTCH signaling is tightly linked to cell-cycle progression. When single VPCs were arrested in the G1 phase, intracellular NOTCH failed to be degraded, resulting in a mixed primary/secondary cell fate. Moreover, the G1 cyclins CYD-1 and CYE-1 stabilize NOTCH, while the G2 cyclin CYB-3 promotes NOTCH degradation. Our findings reveal a synchronization mechanism that coordinates NOTCH signaling with cell-cycle progression and thus permits the formation of a stable cell fate pattern.
2015-09-25T15:22:42Z
The Sorting Buffer Problem is NP-hard
Chan, Ho-Leung
Megow, Nicole
Stee, Rob van
Sitters, Rene
http://hdl.handle.net/2381/33106
2015-09-26T02:00:40Z
2015-09-25T15:14:43Z
Title: The Sorting Buffer Problem is NP-hard
Authors: Chan, Ho-Leung; Megow, Nicole; Stee, Rob van; Sitters, Rene
Abstract: We consider the offline sorting buffer problem. The input is a sequence of items of different types. All items must be processed one by one by a server. The server is equipped with a random-access buffer of limited capacity which can be used to rearrange items. The problem is to design a scheduling strategy that decides upon the order in which items from the buffer are sent to the server. Each type change incurs unit cost, and thus, the cost minimizing objective is to minimize the total number of type changes for serving the entire sequence. This problem is motivated by various applications in manufacturing processes and computer science, and it has attracted significant attention in the last few years. The main focus has been on online competitive algorithms. Surprisingly little is known on the basic offline problem. In this paper, we show that the sorting buffer problem with uniform cost is NP-hard and, thus, close one of the most fundamental questions for the offline problem. On the positive side, we give an O(1)-approximation algorithm when the scheduler is given a buffer only slightly larger than double the original size. We also give a dynamic programming algorithm for the special case of buffer size two that solves the problem exactly in linear time, improving on the standard DP which runs in cubic time.
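The cost model (unit cost per type change) can be illustrated with a small simulation. The greedy eviction rule below is a hypothetical heuristic chosen for illustration, not an algorithm from the paper:

```python
from collections import Counter

def serve(sequence, k):
    """Greedy sorting-buffer heuristic: keep a buffer of up to k items;
    always emit an item of the type just served if one is buffered,
    otherwise switch to the most frequent buffered type.
    Returns the number of type changes (the first type served counts as one)."""
    buf = Counter()
    it = iter(sequence)
    for x in it:                        # fill the buffer
        buf[x] += 1
        if sum(buf.values()) == k:
            break
    changes, last = 0, None
    while buf:
        if last in buf:
            t = last                    # keep serving the current type
        else:
            t = max(buf, key=buf.get)   # switch: one unit of cost
            changes += 1
        buf[t] -= 1
        if buf[t] == 0:
            del buf[t]
        nxt = next(it, None)            # refill from the input sequence
        if nxt is not None:
            buf[nxt] += 1
        last = t
    return changes
```

Even this naive rule shows the buffer's value: on the sequence `abab`, a buffer of size 2 halves the number of type changes compared to no effective buffering.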
2015-09-25T15:14:43Z
Improved Practical Compact Dynamic Tries
Poyias, Andreas
Raman, Rajeev
http://hdl.handle.net/2381/33033
2015-09-11T02:00:34Z
2015-09-10T08:36:15Z
Title: Improved Practical Compact Dynamic Tries
Authors: Poyias, Andreas; Raman, Rajeev
Abstract: We consider the problem of implementing a dynamic trie with an emphasis on good practical performance. For a trie with $n$ nodes over an alphabet of size $\sigma$, the information-theoretic lower bound is $n\log\sigma + O(n)$ bits. The Bonsai data structure [1] supports trie operations in $O(1)$ expected time (based on assumptions about the behaviour of hash functions). While its practical speed performance is excellent, its space usage of $(1+\epsilon)n(\log\sigma + O(\log\log n))$ bits, where $\epsilon$ is any constant $>0$, is not asymptotically optimal. We propose an alternative, m-Bonsai, that uses $(1+\epsilon)n(\log\sigma + O(1))$ bits in expectation, and supports operations in $O(1)$ expected time (again based on assumptions about the behaviour of hash functions). We give a heuristic implementation of m-Bonsai which uses considerably less memory and is slightly faster than the original Bonsai.
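The idea underlying Bonsai-style tries, addressing each trie node by the pair (parent id, symbol) in a hash table so that no child pointers are stored, can be sketched with an ordinary Python dictionary standing in for the compact hash table that Bonsai and m-Bonsai engineer down to near the information-theoretic space bound:

```python
class HashTrie:
    """Minimal dictionary-backed trie in the spirit of Bonsai: nodes are
    keyed by (parent_id, symbol), so the trie shape lives entirely in the
    hash table. A Python dict is a stand-in for the compact hash table."""
    def __init__(self):
        self.nodes = {}      # (parent_id, symbol) -> node_id
        self.next_id = 1     # node id 0 is the root

    def insert(self, word):
        node = 0
        for ch in word:
            key = (node, ch)
            if key not in self.nodes:        # create the child on demand
                self.nodes[key] = self.next_id
                self.next_id += 1
            node = self.nodes[key]
        return node                          # id of the word's final node

    def lookup(self, word):
        node = 0
        for ch in word:
            node = self.nodes.get((node, ch))
            if node is None:                 # path absent from the trie
                return None
        return node
```

Shared prefixes are stored once: inserting "car" and then "cat" creates only four entries in total, one per distinct trie node below the root.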
2015-09-10T08:36:15Z
The infinitary lambda calculus of the infinite eta Böhm trees
Severi, Paula
de Vries, Fer-Jan
http://hdl.handle.net/2381/32973
2015-08-25T02:00:47Z
2015-08-24T14:09:28Z
Title: The infinitary lambda calculus of the infinite eta Böhm trees
Authors: Severi, Paula; de Vries, Fer-Jan
Abstract: In this paper, we introduce a strong form of eta reduction called etabang that we use to construct a confluent and normalising infinitary lambda calculus, of which the normal forms correspond to Barendregt's infinite eta Böhm trees. This new infinitary perspective on the set of infinite eta Böhm trees allows us to prove that the set of infinite eta Böhm trees is a model of the lambda calculus. The model is of interest because it has the same local structure as Scott's D∞-models, i.e. two finite lambda terms are equal in the infinite eta Böhm model if and only if they have the same interpretation in Scott's D∞-models.
2015-08-24T14:09:28Z
Visualising Software as a Particle System
Scarle, Simon
Walkinshaw, Neil
http://hdl.handle.net/2381/32949
2015-08-25T01:45:09Z
2015-08-18T10:08:10Z
Title: Visualising Software as a Particle System
Authors: Scarle, Simon; Walkinshaw, Neil
Abstract: Current metrics-based approaches to visualise unfamiliar software systems face two key limitations: (1) they are limited in terms of the number of dimensions that can be projected, and (2) they use fixed layout algorithms where the resulting positions of entities can be vulnerable to misinterpretation. In this paper we show how computer games technology can be used to address these problems. We present the PhysVis software exploration system, where software metrics can be variably mapped to parameters of a physical model and displayed via a particle system. Entities can be imbued with attributes such as mass, gravity, and (for relationships) strength or springiness, alongside traditional attributes such as position, colour and size. The resulting visualisation is a dynamic scene; the relative positions of entities are not determined by a fixed layout algorithm, but by intuitive physical notions such as gravity, mass, and drag. The implementation is openly available, and we evaluate it on a selection of visualisation tasks for two openly-available software systems.
2015-08-18T10:08:10Z
A Calculus of Mobility and Communication for Ubiquitous Computing
Gul, Nosheen
http://hdl.handle.net/2381/32898
2015-08-06T02:01:47Z
2015-08-05T14:17:23Z
Title: A Calculus of Mobility and Communication for Ubiquitous Computing
Authors: Gul, Nosheen
Abstract: Ubiquitous computing makes various computing devices available throughout the physical setting. Ubiquitous computing devices are distributed and may be mobile, and interactions among them are concurrent and often depend on the location of the devices. Process calculi are formal models of concurrent and mobile systems. The work in this thesis is inspired by the calculus of Mobile Ambients and other process calculi, such as the Calculus of Communicating Systems, which have proved successful in the modelling of mobility, communication and the structure of systems. We start by developing operational semantics for the calculus of Mobile Ambients and the Push and Pull Ambient Calculus, and prove that the semantics are sound and complete with respect to the corresponding underlying reduction semantics. This thesis proposes a Calculus of Communication and Mobility, denoted by CMCPCA, for the modelling of mobility, communication and context awareness in the setting of ubiquitous computing. CMCPCA is an ambient calculus with the in and out capabilities of Cardelli and Gordon as well as the push and pull capabilities of Phillips and Vigliotti. CMCPCA has a new form of global communication similar to that in Milner's CCS. We define a new notion of behavioural equivalence for our calculus in terms of an observation predicate and action transitions. Thus, we define barbed bisimulation and congruence, and capability barbed bisimulation and congruence. We then prove that capability barbed congruence coincides with barbed congruence. We also include in the calculus a new form of context awareness mechanism that allows ambients to query their current location and surroundings. We then propose reduction semantics and operational semantics for the context awareness primitives, and show that the semantics coincide. Several case studies and a variety of small examples show the expressiveness and usefulness of our calculus.
2015-08-05T14:17:23Z
Groups, formal language theory and decidability
Jones, Sam Anthony Mark
http://hdl.handle.net/2381/32520
2015-07-09T02:01:09Z
2015-07-08T14:01:15Z
Title: Groups, formal language theory and decidability
Authors: Jones, Sam Anthony Mark
Abstract: The first four chapters provide an introduction, background information and a summary of results from some of the relevant literature. In these chapters a proof is provided if the author was unable to find either a proof or the result itself stated in the literature.
Chapter 5 focuses on syntactic monoids of languages. It introduces some background material from the literature and then proves some characterisations of monoids based on properties that the full preimage of certain subsets satisfies when considered as a formal language over the generating set.
In Chapter 6 we examine some natural properties of formal languages which are necessary conditions for a formal language to be the word problem of a group. We look at which subsets of these conditions are sufficient for a formal language satisfying them to be a word problem.
Chapter 7 focuses on decision problems. We generalise a theorem of Hartmanis and Hopcroft and use it to settle the decidability, for various language classes, of the conditions from Chapter 6.
Chapter 8 contains a brief exposition of some related areas. We first characterise the co-word problem for groups and then examine a way of constructing groups by intersecting their word problems. We conclude this chapter by proving some simple results about the context-free subset membership problem for groups.
Finally, Chapter 9 contains a brief discussion of possible directions in which one could extend the work in this thesis. The results in Chapters 5, 6 and 7 are to be considered original unless stated otherwise. Many of the results in Chapter 7 have been published in [24]. Many of the results of Chapter 6 have been submitted for publication.
2015-07-08T14:01:15Z
A Forward Analysis for Recurrent Sets
Bakhirkin, Alexey
Berdine, J.
Piterman, Nir
http://hdl.handle.net/2381/32409
2015-09-17T08:54:29Z
2015-06-23T10:23:14Z
Title: A Forward Analysis for Recurrent Sets
Authors: Bakhirkin, Alexey; Berdine, J.; Piterman, Nir
Abstract: Non-termination of structured imperative programs is primarily due to infinite loops. An important class of non-terminating loop behaviors can be characterized using the notion of recurrent sets. A recurrent set is a set of states from which execution of the loop cannot or might not escape. Existing analyses that infer recurrent sets to our knowledge rely on one of: the combination of forward and backward analyses, quantifier elimination, or SMT-solvers. We propose a purely forward abstract interpretation–based analysis that can be used together with a possibly complicated abstract domain where none of the above is readily available. The analysis searches for a recurrent set of every individual loop in a program by building a graph of abstract states and analyzing it in a novel way. The graph is searched for a witness of a recurrent set that takes the form of what we call a recurrent component which is somewhat similar to the notion of an end component in a Markov decision process.
2015-06-23T10:23:14Z
SEPIA: Search for Proofs Using Inferred Automata
Gransden, Thomas
Walkinshaw, Neil
Raman, Rajeev
http://hdl.handle.net/2381/32390
2015-12-11T17:03:18Z
2015-06-18T15:48:49Z
Title: SEPIA: Search for Proofs Using Inferred Automata
Authors: Gransden, Thomas; Walkinshaw, Neil; Raman, Rajeev
Abstract: This paper describes SEPIA, a tool for automated proof generation in Coq. SEPIA combines model inference with interactive theorem proving. Existing proof corpora are modelled using state-based models inferred from tactic sequences. These can then be traversed automatically to identify proofs. The SEPIA system is described and its performance evaluated on three Coq datasets. Our results show that SEPIA provides a useful complement to existing automated tactics in Coq.
2015-06-18T15:48:49Z
Reducing Idle Listening during Data Collection in Wireless Sensor Networks
Rasul, Aram
Erlebach, Thomas
http://hdl.handle.net/2381/32370
2015-06-13T02:00:43Z
2015-06-12T09:26:08Z
Title: Reducing Idle Listening during Data Collection in Wireless Sensor Networks
Authors: Rasul, Aram; Erlebach, Thomas
Abstract: Data collection is one of the predominant operations in wireless sensor networks. This paper focuses on the problem of efficient data collection in a setting where some nodes may not possess data each time data is collected. In that case, idle listening slots may occur, which lead to a waste of energy and an increase in latency. To alleviate these problems, successive-slot schedules were proposed by Zhao and Tang (Infocom 2011). In this paper, we introduce a so-called extra-bit technique to reduce idle listening further. Each packet includes an extra bit that informs the receiver whether further data packets will follow or not. The extra-bit technique leads to significantly reduced idle listening and improved latency in many cases. We prove that every successive-slot schedule is also an extra-bit schedule. We then consider the special case of linear networks and prove that the optimal length of a successive-slot schedule (or extra-bit schedule) is 4N - 6 time slots, where N ≥ 3 is the number of nodes excluding the sink. Then the proposed extra-bit technique is compared with the successive-slot technique with respect to the expected amount of idle listening, and it is shown that the extra-bit technique reduces idle listening substantially.
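The saving from the extra bit can be seen in a toy single-link model, an assumed simplification for illustration rather than the paper's schedule construction: a receiver reserves `reserved` listening slots for a child that turns out to have `packets` packets to send.

```python
def idle_slots_fixed(packets, reserved):
    """Without the extra bit: the receiver listens in all `reserved` slots,
    so every slot beyond the packets actually sent is idle listening."""
    return reserved - packets

def idle_slots_extra_bit(packets):
    """With the extra bit: each packet announces whether another follows,
    so the receiver stops right after the last packet. Only the single
    slot that reveals 'no data at all' is spent idle."""
    return 1 if packets == 0 else 0
```

In this model the fixed scheme's idle listening grows with the gap between reserved and actual traffic, while the extra-bit scheme idles at most one slot per collection round.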
2015-06-12T09:26:08Z
Mobile learning in a human geography field course
Jarvis, Claire
Tate, Nicholas
Dickie, Jennifer
Brown, Gavin
http://hdl.handle.net/2381/32345
2015-06-04T02:00:18Z
2015-06-03T09:03:12Z
Title: Mobile learning in a human geography field course
Authors: Jarvis, Claire; Tate, Nicholas; Dickie, Jennifer; Brown, Gavin
Editors: Mitchell, J.
Abstract: This paper reports on reusable mobile digital learning resources designed to assist human geography undergraduate students in exploring the geographies of life in Dublin. Developing active learning that goes beyond data collection to encourage observation and thinking in the field is important. Achieving this in the context of large class sizes presents several challenges. Combining in-situ learning with spatially-accurate historical and contemporary multimedia, we developed a set of location-aware digital mobile tools or ‘mediascapes’. We explore how scaffolding can be achieved in such a context, focusing on the development of students’ observational, enquiry and thinking skills in the field.
2015-06-03T09:03:12Z
Online interval scheduling on uniformly related machines
Van Stee, Rob
Epstein, Leah
Jez, Lukasz
Sgall, Jiri
http://hdl.handle.net/2381/32329
2015-11-16T11:41:40Z
2015-06-01T15:36:42Z
Title: Online interval scheduling on uniformly related machines
Authors: Van Stee, Rob; Epstein, Leah; Jez, Lukasz; Sgall, Jiri
Abstract: We consider online preemptive scheduling of jobs with fixed starting times, revealed at those times, on m uniformly related machines, with the goal of maximizing the total weight of completed jobs. Every job has a size and a weight associated with it. A newly released job must either be assigned to start running immediately on a machine or be dropped. It is also possible to drop an already scheduled job, but only completed jobs contribute their weights to the profit of the algorithm.
In the most general setting, no algorithm has a bounded competitive ratio, and we consider a number of standard variants. We give a full classification of the variants into cases which admit a constant competitive ratio (weighted and unweighted unit jobs, and C-benevolent instances, a wide class of instances containing proportional-weight jobs), and cases which admit only a linear competitive ratio (unweighted jobs and D-benevolent instances). In particular, we give a lower bound of m on the competitive ratio for scheduling unit-weight jobs with varying sizes, which is tight. For unit size and weight we show that a natural greedy algorithm is 4/3-competitive and optimal on m = 2 machines, while for large m, its competitive ratio is between 1.56 and 2. Furthermore, no algorithm is better than 1.5-competitive.
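For the unit-size, unit-weight case, the "natural greedy algorithm" mentioned above admits a very short sketch (our own minimal rendering under the stated assumptions; the function name and the tie-breaking rule, first idle machine, are ours):

```python
def greedy_unit_schedule(jobs, m):
    """Greedy online scheduler for unit-size, unit-weight jobs on m machines.

    jobs: release times of the jobs; each job runs for exactly 1 time unit.
    A job is accepted iff some machine is idle at its release time; accepted
    jobs are never preempted. Returns the number of completed jobs, i.e. the
    profit in the unit-weight setting.
    """
    free_at = [0.0] * m          # time at which each machine next becomes idle
    completed = 0
    for t in sorted(jobs):       # online arrival order by release time
        for i in range(m):
            if free_at[i] <= t:  # greedy: grab the first idle machine
                free_at[i] = t + 1.0
                completed += 1
                break            # job either starts now or is dropped
    return completed
```

With releases at times 0, 0, 0.5 and 2 on two machines, the greedy rule accepts both time-0 jobs, drops the job at 0.5 (both machines busy until time 1), and accepts the job at time 2, completing three jobs.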
Description: 12 mth embargo
2015-06-01T15:36:42Z
A unified approach to truthful scheduling on related machines
Van Stee, Rob
Epstein, Leah
Levin, Asaf
http://hdl.handle.net/2381/32328
2015-11-20T14:56:40Z
2015-06-01T15:24:13Z
Title: A unified approach to truthful scheduling on related machines
Authors: Van Stee, Rob; Epstein, Leah; Levin, Asaf
Abstract: We present a unified framework for designing deterministic monotone polynomial time approximation schemes (PTASs) for a wide class of scheduling problems on uniformly related machines. This class includes (among others) minimizing the makespan, maximizing the minimum load, and minimizing the ℓp-norm of the machine loads vector. Previously, this kind of result was only known for the makespan objective. Monotone algorithms have the property that an increase in the speed of a machine cannot decrease the amount of work assigned to it.
The idea of our novel method is to show that for goal functions that are sufficiently well-behaved functions of the machine loads, it is possible to compute in polynomial time a highly structured nearly optimal schedule. An interesting aspect of our approach is that, in contrast to all known approximation schemes, we avoid rounding any job sizes or speeds throughout. We can therefore find the exact best structured schedule using dynamic programming. The state space encodes a sufficient amount of information such that no postprocessing is needed, allowing an elegant and relatively simple analysis without any special cases. The monotonicity is a consequence of the fact that we find the best schedule in a specific collection of schedules.
In the game-theoretical setting of these scheduling problems, there is a social goal, which is one of the objective functions that we study. Each machine is controlled by a selfish single-parameter agent. The private information of an agent is its cost of processing a unit-sized job, which is also the inverse of the speed of its machine. Each agent wishes to maximize its own profit, defined as the payment it receives from the mechanism minus its cost for processing all jobs assigned to it, and places a bid which corresponds to its private information. Monotone approximation schemes have an important role in the emerging area of algorithmic mechanism design, as in the case of single-parameter agents, a necessary and sufficient condition for truthfulness with respect to the bids is that the allocation algorithm be monotone. For each one of the problems, we show that we can calculate payments that guarantee truthfulness in an efficient manner. Thus, there exists a dominant strategy where agents report their true speeds, and we show the existence of a truthful mechanism which can be implemented in polynomial time, where the social goal is approximated within a factor of 1 + ε for every ε > 0.
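The monotonicity condition that drives truthfulness here can be tested mechanically on small instances. The brute-force checker below is our own illustration, not the paper's scheme: it verifies that raising one machine's reported speed never decreases the work an allocation rule assigns to it (all names and the finite speed grid are assumptions for the sketch).

```python
def is_monotone(allocate, speeds, grid):
    """Brute-force check of the single-parameter monotonicity condition:
    raising one machine's reported speed must never decrease the amount of
    work allocate() assigns to it, with the other reports held fixed.

    allocate(speeds) -> list of work amounts, one per machine.
    grid: alternative (higher) speed values to try for each machine.
    """
    for i in range(len(speeds)):
        base = allocate(speeds)[i]
        for s in grid:
            if s <= speeds[i]:
                continue
            bumped = list(speeds)
            bumped[i] = s            # machine i reports a higher speed
            if allocate(bumped)[i] < base:
                return False         # more speed, less work: not monotone
    return True

# Splitting a fixed workload proportionally to speed is monotone;
# dumping all work on the unique slowest machine is not.
proportional = lambda sp: [10 * s / sum(sp) for s in sp]
slowest_only = lambda sp: [10.0 if s == min(sp) else 0.0 for s in sp]
```

For example, `is_monotone(proportional, [1, 2], [2, 3, 4])` holds, while `is_monotone(slowest_only, [1, 2], [2, 3, 4])` fails: once machine 0 reports speed 3 it stops being the slowest and loses all its work.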
Description: 12 mth embargo
2015-06-01T15:24:13Z
The Price of Anarchy for Selfish Ring Routing is Two
Chen, Xujin
Doerr, Benjamin
Doerr, Carola
Hu, Xiaodong
Ma, Weidong
van Stee, Rob
http://hdl.handle.net/2381/32326
2015-06-01T14:54:05Z
2015-06-01T14:53:47Z
Title: The Price of Anarchy for Selfish Ring Routing is Two
Authors: Chen, Xujin; Doerr, Benjamin; Doerr, Carola; Hu, Xiaodong; Ma, Weidong; van Stee, Rob
Abstract: We analyze the network congestion game with atomic players, asymmetric strategies, and the maximum latency among all players as social cost. This important social cost function is much less understood than the average latency. We show that the price of anarchy is at most two, when the network is a ring and the link latencies are linear. Our bound is tight. This is the first sharp bound for the maximum latency objective.
Description: © ACM, 2014. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Economics and Computation, Volume 2, Issue 2, June 2014. http://doi.acm.org/10.1145/2548545
2015-06-01T14:53:47Z
SIGACT news online algorithms column 24: 2014 so far
van Stee, Rob
http://hdl.handle.net/2381/32323
2015-06-01T10:56:07Z
2015-06-01T10:55:27Z
Title: SIGACT news online algorithms column 24: 2014 so far
Authors: van Stee, Rob
Abstract: In this column, I will discuss some recent papers in online algorithms. It is pleasing to see there were a number of papers about online algorithms in the top conferences this year. If I have missed your paper and you want to write about it or about any other topic in online algorithms, don't hesitate to contact me!
2015-06-01T10:55:27Z
Better Algorithms for Online Bin Stretching
Böhm, Martin
Sgall, Jiri
Stee, Rob van
Veselý, Pavel
http://hdl.handle.net/2381/32322
2015-12-04T12:17:10Z
2015-06-01T10:31:30Z
Title: Better Algorithms for Online Bin Stretching
Authors: Böhm, Martin; Sgall, Jiri; Stee, Rob van; Veselý, Pavel
Abstract: Online Bin Stretching is a semi-online variant of bin packing in which the algorithm has to use the same number of bins as the optimal packing, but is allowed to slightly overpack the bins. The goal is to minimize the amount of overpacking, i.e., the maximum size packed into any bin.
We give an algorithm for Online Bin Stretching with a stretching factor of 1.5 for any number of bins. We also show a specialized algorithm for three bins with a stretching factor of 11/8 = 1.375.
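The problem setup can be made concrete with a small sketch (our own illustration: it packs online into a fixed number of bins whose capacity is stretched beyond the optimal load of 1, using First Fit as a simple stand-in; First Fit alone does not achieve the paper's 1.5 guarantee in general):

```python
def pack_first_fit_stretched(items, k, stretch=1.5):
    """Pack items online into exactly k bins, allowing each bin to be
    overpacked up to `stretch`, with the optimal bin load normalised to 1.
    Items are placed by First Fit (a stand-in heuristic, not the paper's
    algorithm). Returns the bins, or None if some item fits nowhere.
    """
    bins = [[] for _ in range(k)]
    for x in items:
        for b in bins:
            if sum(b) + x <= stretch:  # fits within the stretched capacity
                b.append(x)
                break
        else:
            return None                # no bin can take the item
    return bins
```

The stretching factor actually achieved on an instance is then just the maximum bin load, `max(sum(b) for b in bins)`; e.g. four items of size 0.6 in two bins pack as two pairs with load 1.2 per bin.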
Description: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-18263-6_3
2015-06-01T10:31:30Z
The optimal absolute ratio for online bin packing
Balogh, Janos
Békési, Jozsef
Dósa, Gyorgy
Sgall, Jiri
Stee, Rob van
http://hdl.handle.net/2381/32320
2015-06-01T09:53:41Z
2015-06-01T09:53:03Z
Title: The optimal absolute ratio for online bin packing
Authors: Balogh, Janos; Békési, Jozsef; Dósa, Gyorgy; Sgall, Jiri; Stee, Rob van
Abstract: We present an online bin packing algorithm with absolute competitive ratio 5/3, which is optimal.
2015-06-01T09:53:03Z
On Automation of CTL* Verification for Infinite-State Systems
Cook, B.
Khlaaf, H.
Piterman, Nir
http://hdl.handle.net/2381/32307
2015-11-20T15:15:47Z
2015-05-26T11:33:09Z
Title: On Automation of CTL* Verification for Infinite-State Systems
Authors: Cook, B.; Khlaaf, H.; Piterman, Nir
Abstract: In this paper we introduce the first known fully automated tool for symbolically proving CTL* properties of (infinite-state) integer programs. The method uses an internal encoding which facilitates reasoning about the subtle interplay between the nesting of path and state temporal operators that occurs within CTL* proofs. A precondition synthesis strategy is then used over a program transformation which trades nondeterminism in the transition relation for nondeterminism explicit in variables predicting future outcomes when necessary. We show the viability of our approach in practice using examples drawn from device drivers and various industrial examples.
2015-05-26T11:33:09Z
Synthesising Executable Gene Regulatory Networks from Single-Cell Gene Expression Data
Fisher, J.
Köksal, A. S.
Piterman, Nir
Woodhouse, S.
http://hdl.handle.net/2381/32306
2015-08-20T11:57:21Z
2015-05-26T11:29:52Z
Title: Synthesising Executable Gene Regulatory Networks from Single-Cell Gene Expression Data
Authors: Fisher, J.; Köksal, A. S.; Piterman, Nir; Woodhouse, S.
Abstract: Recent experimental advances in biology allow researchers to obtain gene expression profiles at single-cell resolution over hundreds, or even thousands of cells at once. These single-cell measurements provide snapshots of the states of the cells that make up a tissue, instead of the population-level averages provided by conventional high-throughput experiments. This new data therefore provides an exciting opportunity for computational modelling. In this paper we introduce the idea of viewing single-cell gene expression profiles as states of an asynchronous Boolean network, and frame model inference as the problem of reconstructing a Boolean network from its state space. We then give a scalable algorithm to solve this synthesis problem. We apply our technique to both simulated and real data. We first apply our technique to data simulated from a well-established model of common myeloid progenitor differentiation. We show that our technique is able to recover the original Boolean network rules. We then apply our technique to a large dataset taken during embryonic development containing thousands of cell measurements. Our technique synthesises matching Boolean networks, and analysis of these models yields new predictions about blood development which our experimental collaborators were able to verify.
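The core modelling idea, expression profiles as states of an asynchronous Boolean network, can be sketched as a toy consistency test (our own simplification of the synthesis problem, not the paper's scalable algorithm; all names are ours): a candidate network explains the data only if every asynchronous successor of an observed state is itself an observed state.

```python
def consistent_async(states, update_fns):
    """Toy consistency check for asynchronous Boolean network synthesis.

    states: set of observed expression profiles, each a tuple of 0/1 values.
    update_fns: dict mapping gene index -> Boolean update function of the
    full state tuple. In the asynchronous semantics, one gene at a time is
    updated; the candidate network is consistent with the data iff every
    such one-gene successor of an observed state was also observed.
    """
    S = set(states)
    for s in S:
        for i, f in update_fns.items():
            succ = list(s)
            succ[i] = f(s)           # flip gene i to its updated value
            if tuple(succ) not in S:
                return False         # network predicts an unobserved state
    return True

# Candidate 2-gene network: gene 0 holds its value, gene 1 copies gene 0.
fns = {0: lambda s: s[0], 1: lambda s: s[0]}
```

With `fns` above, the state set {(0,0), (1,0), (1,1)} is consistent, but dropping (1,1) breaks consistency: from (1,0) the network predicts the unobserved state (1,1).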
2015-05-26T11:29:52Z
The Description Logic SHIQ with a Flexible Meta-modelling Hierarchy
Severi, Paula G.
Motz, Regina
Rohrer, Edelweis
http://hdl.handle.net/2381/32284
2015-08-20T14:09:52Z
2015-05-19T10:58:05Z
Title: The Description Logic SHIQ with a Flexible Meta-modelling Hierarchy
Authors: Severi, Paula G.; Motz, Regina; Rohrer, Edelweis
Abstract: This work is motivated by a real-world case study where it is necessary to integrate and relate existing ontologies through meta-modelling. For this, we introduce the Description Logic SHIQM, which is obtained from SHIQ by adding statements that equate individuals to concepts in a knowledge base. In this new extension, concepts can be individuals of another concept (called a meta-concept), which itself can be an individual of yet another concept (called a meta-meta-concept), and so on. We define an algorithm that checks consistency of SHIQM by modifying the Tableau algorithm for SHIQ. From the practical point of view, this has the advantage that we can reuse the code of existing OWL reasoners. From the theoretical point of view, it has a similar advantage of reuse: we make use of the existing results and proofs that lead to correctness of the algorithm for SHIQ in order to prove correctness of the algorithm for SHIQM.
2015-05-19T10:58:05Z