DSpace Collection: http://hdl.handle.net/2381/385 (retrieved 2016-10-28)

Title: Online and Verification Problems under Uncertainty
Handle: http://hdl.handle.net/2381/38096 (2016-09-26)
Authors: Charalambous, George
Abstract: In the under uncertainty setting we study problems with imprecise input data for which precise data can be obtained. There is an underlying problem with a feasible solution, but the solution is computable only if the input is precise enough. We are interested in measuring how much of the imprecise input data has to be updated in order to be precise enough. We look at the problem for both the online and the offline (verification) cases. In the verification under uncertainty setting an algorithm is given imprecise input data and also an assumed set of precise input data. The aim of the algorithm is to update the smallest number of input data such that if the updated input data is the same as the corresponding assumed input data (i.e. verified), a solution for the underlying problem can be calculated. In the online adaptive under uncertainty setting the task is similar except that the assumed set of precise data is not given to the algorithm, and the performance of the algorithm is measured by comparing the number of input data that have been updated against the result obtained in the verification setting of the same problem.
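To make the verification setting concrete, here is a minimal, hypothetical sketch (not one of the thesis's problems): the underlying problem is finding the minimum of a set of numbers known only as intervals, and we look for the fewest intervals to update to their assumed precise values so that the minimum becomes certain.

```python
from itertools import combinations

def minimum_certain(intervals):
    # The minimum element is known as soon as some interval's upper
    # bound is at most every other interval's lower bound.
    return any(
        hi <= min((lo for j, (lo, _) in enumerate(intervals) if j != i),
                  default=float("inf"))
        for i, (_, hi) in enumerate(intervals)
    )

def verify_minimum(intervals, assumed):
    # Verification setting: find a smallest set of indices to update
    # (replace by the assumed precise value) so that the minimum of the
    # inputs becomes computable. Brute force over subsets, so only
    # suitable for tiny instances.
    n = len(intervals)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            updated = [(assumed[i], assumed[i]) if i in subset else iv
                       for i, iv in enumerate(intervals)]
            if minimum_certain(updated):
                return list(subset)
```

For example, with intervals `[(0, 10), (5, 6), (7, 9)]` and assumed values `[1, 5, 8]`, updating only the first interval already certifies the minimum.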
We have studied these settings for a few geometric and graph problems and found interesting results. Geometric problems include several variations of the maximal points problem where, in its classical form, given a set of points in the plane we want to compute the set of all points that are maximal. The uncertain element here is the actual location of each point. Graph problems include a few variations of the graph diameter problem where, in its classical form, given a graph we want to calculate a farthest pair of vertices. The uncertain element is the weight of each edge.

Title: Collaborative annotation, search and categorisation
Handle: http://hdl.handle.net/2381/38032 (2016-09-09)
Authors: Hong, Yi
Abstract: The purpose of this research is to develop a collaborative framework for annotation,
search and categorisation. The basis of this research is to define an ontology-based data
model that allows users to create semantic tags, establish relationships among tags
and provide contextual information by hierarchical concepts and properties structure
derived from a lexical knowledge base. A computational model is introduced to record
uncertainty, establish user credibility and compute the truthfulness or reliability of the
statements, which can then be used for ranking search results. A method to transform
a relational database to the ontology-based repository is developed to populate the
proposed data model. The second stage of the research is to develop an expressive yet
intuitive querying technique for searching semantically annotated data. There are many
questions that arise when querying complex datasets. For example, how to help average
non-technical users write queries without excessive reliance on external technical support?
How to utilise a rich knowledge base and how to enable members of a collaborative
team to construct queries collectively, considering their opinions on the importance of
searching criteria? Traditional keyword-based or form-based approaches fail to address
these issues due to lack of expressive power or flexibility. A visual querying technique is
presented for the collaborative team, based on graph pattern matching. This method
allows members of a collaborative team to collectively construct complex queries in
a more convenient manner. Then the possibility of applying various categorisation
techniques to help sort annotated objects is investigated. A new workflow model is
proposed that helps a collaborative team build a universally-accepted categorisation
system and develop a systematic way for team members to create a training data
set, taking into account various criteria and degrees of uncertainty in human decision-making.
Eventually, a modified Naive Bayes classifier was built for sorting a large
number of objects. In the end, in collaboration with members of an archaeological
research team, a series of experiments was conducted to evaluate our methodologies.

Title: Activation Network Problems
Handle: http://hdl.handle.net/2381/37966 (2016-08-15)
Authors: Alqahtani, Hasna Mohsen H.
Abstract: Network design problems are traditionally modelled by a graph where each edge (or node) has a fixed cost. We investigate optimization problems in a realistic model for wireless network design called activation network. The activation network setting can be defined as follows. We are given a directed or undirected graph G = (V, E) together with a family {fᵤᵥ : (u, v) ∈ E} of monotone non-decreasing activation functions from D² to {0, 1}, where D is a constant-size subset of the non-negative real numbers, such that the activation of an edge depends on the chosen values from the domain D at its endpoints. An edge (u, v) ∈ E is activated for chosen values xᵤ and xᵥ if fᵤᵥ(xᵤ, xᵥ) = 1, and the activation function fᵤᵥ is called monotone non-decreasing if fᵤᵥ(xᵤ, xᵥ) = 1 implies fᵤᵥ(yᵤ, yᵥ) = 1 for any yᵤ ≥ xᵤ, yᵥ ≥ xᵥ. The objective of activation network problems is to find activation values xᵥ ∈ D for all v ∈ V such that the total activation cost ∑_{v ∈ V} xᵥ is minimized and the activated set of edges satisfies some connectivity requirements. We give a 1.5-approximation algorithm for the minimum activation cost of k node-disjoint st-paths (st-MANDP) when k = 2. We also show that a p-approximation algorithm for the st-MANDP problem implies a p-approximation algorithm for solving the minimum activation cost of k edge-disjoint st-paths (st-MAEDP) problem when k = 2. We propose polynomial time algorithms that optimally solve the st-MANDP, st-MAEDP, minimum activation Steiner tree and the problem of finding minimum activation cost node-disjoint paths between k disjoint terminal pairs for graphs with treewidth bounded by a constant.
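The definitions above can be illustrated with a toy brute-force solver (a hypothetical sketch, not one of the thesis's algorithms). The example uses the standard power-threshold activation functions, fᵤᵥ(xᵤ, xᵥ) = 1 iff xᵤ + xᵥ ≥ θᵤᵥ, which are monotone non-decreasing.

```python
from itertools import product

def min_activation_st_path(nodes, edges, D, s, t):
    """Brute-force minimum total activation cost such that the
    activated edges contain an s-t path. Exponential in |V|, so for
    tiny illustrative instances only."""
    best = None
    for values in product(D, repeat=len(nodes)):
        x = dict(zip(nodes, values))
        active = {(u, v) for (u, v, f) in edges if f(x[u], x[v])}
        if connected(active, s, t, nodes):
            cost = sum(x.values())
            if best is None or cost < best:
                best = cost
    return best

def connected(active, s, t, nodes):
    # Undirected reachability over the activated edges only.
    adj = {n: set() for n in nodes}
    for u, v in active:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return False
```

On the path s-a-t with threshold 2 on both edges and D = {0, 1, 2}, the optimum assigns 2 to the middle node and 0 elsewhere, for total cost 2.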
We also study the st-MANDP, st-MAEDP, minimum spanning activation tree and minimum activation arborescence problems for the special case where |D| = 2 and all edges have the same activation function.

Title: Generalised distributivity and the logic of metric spaces
Handle: http://hdl.handle.net/2381/37701 (2016-06-08)
Authors: Babus, Octavian Vladut
Abstract: The aim of the thesis is to work towards a many-valued logic over a commutative unital quantale and, at the same time, towards a generalisation of coalgebraic logic enriched over a commutative unital quantale Ω. This is done by noticing that the contravariant powerset adjunction can be generalised to categories enriched over a commutative unital quantale. From here we define categorical algebras for the monad generated by this adjunction. We finish by showing that these categorical algebras are algebras over Set with operations and equations, and show that in some cases we can restrict the arity of those operations to be finite.

Title: Descriptions of Groups using Formal Language Theory
Handle: http://hdl.handle.net/2381/37684 (2016-06-02)
Authors: Rino Nesin, Gabriela Asli
Abstract: This work treats word problems of finitely generated groups and variations thereof, such as word problems of pairs of groups and irreducible word problems of groups. These problems can be seen as formal languages on the generators of the group and as such they can be members of certain well-known language classes, such as the class of regular, one-counter, context-free, recursively enumerable or recursive languages, or less well known ones such as the class of terminal Petri net languages. We investigate what effect the class of these various problems has on the algebraic structure of the relevant group or groups.
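As a standard illustration of a word problem viewed as a formal language (a textbook example, not one taken from the thesis): for the infinite cyclic group ℤ with generator a and its inverse written A, a word represents the identity exactly when the occurrences of a and A cancel, which a single counter can check — so this word problem is a one-counter language.

```python
def in_word_problem_of_Z(word):
    """Membership in the word problem of the infinite cyclic group Z,
    over the alphabet {a, A} where A denotes the inverse of a.
    A word equals the identity iff the exponents sum to zero."""
    counter = 0
    for ch in word:
        counter += 1 if ch == 'a' else -1
    return counter == 0
```

For instance, `aAAa` reduces to the identity, while `aaA` reduces to a single generator and does not.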
We first generalize some results on pairs of groups, which were previously proven for context-free pairs of groups only. We then proceed to look at irreducible word problems, where our main contribution is the fact that a group for which all irreducible word problems are recursively enumerable must necessarily have a solvable word problem. We then investigate groups for which membership of the irreducible word problem in the class of recursively enumerable languages is not independent of generating set. Lastly, we prove that groups whose word problem is a terminal Petri net language are exactly the virtually abelian groups.

Title: Digital Educational Games: Methodologies for Evaluating the Impact of Game Type
Handle: http://hdl.handle.net/2381/37613 (2016-05-19)
Authors: Heintz, Stephanie Alexandra
Abstract: The main research question addressed in this thesis is how the choice of game type influences the success of digital educational games (DEG), where success is defined as significant knowledge gain in combination with positive player experience. Games differ in type if they differ in at least one game feature.
As a first step, we identified a comprehensive set of unique game features, summarised in the Game Elements-Attributes Model (GEAM), where elements are the defining components that all games share (e.g. Challenges) and attributes are their possible implementations (e.g. time pressure).
To deepen the understanding of relationships amongst game features, we conducted a survey based on the GEAM and received 321 responses. Using hierarchical clustering, we grouped 67 games, selected by the survey respondents, in terms of similarity and mapped the identified clusters on a 2D space to visualise their difference in distance from each other. On the resulting Game Genre Map, five main areas were detected, which proved to conform mostly to a selection of existing game genres. By specifying their GEAM attributes, we redefined these genres: Mini-game, Action, Adventure, Resource, and Role-play.
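The clustering step can be pictured with a minimal single-linkage sketch over hypothetical binary attribute vectors (the actual study used richer survey data and a 2D mapping; the data here is invented for illustration).

```python
def single_linkage(points, dist, k):
    """Naive agglomerative (single-linkage) clustering: repeatedly
    merge the two closest clusters until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest members.
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

def hamming(a, b):
    # Games as binary attribute vectors; distance = differing attributes.
    return sum(x != y for x, y in zip(a, b))
```

Running this on four toy "games" described by four GEAM-style attributes groups the two action-like and the two puzzle-like vectors together.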
Based on the aforementioned groundwork, two empirical studies were conducted. Study 1 compared three DEGs of the Mini-game genre, differing in a single GEAM attribute - time pressure vs. puzzle solving and abstract vs. realistic graphics. Study 2 compared DEGs of different genres which vary in the implementation of several GEAM attributes. For both studies, statistically significant differences were found in learning outcome, and for Study 2 also in the player experience dimensions: Flow, Tension, Challenge, and Negative Affect. However, the influences of the covariates - learning and play preconditions, learning style, and personality traits - were not confirmed. Further research based on the methodological frameworks developed is needed.

Title: Data Collection in Wireless Sensor Networks
Handle: http://hdl.handle.net/2381/37606 (2016-05-19)
Authors: Rasul, Aram Mohammed
Abstract: This thesis is principally concerned with efficient energy consumption in wireless
sensor networks, addressing two distinct aspects from a theoretical point of view.
The thesis addresses the issue of reducing idle listening states in a restricted tree
topology to minimise energy consumption by proposing an optimisation technique:
the extra-bit technique. This thesis also focuses on showing lower bounds on the
optimal schedule length, which are derived for some special cases of the tree, such as
a single chain, balanced chains, imbalanced chains, three and four level k-ary trees
and Rhizome trees. Then, we propose an algorithm which can exactly match the
lower bound for a single chain, balanced chains and Rhizome trees individually and
which is a few steps away from the optimal solution for imbalanced chains. Finally,
we propose the use of two frequencies to further save energy and minimise latency.
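One way to picture the extra-bit idea (a speculative, idealised sketch; the thesis's actual technique and schedule model are more involved): a child flags its final packet with an end-of-data bit, so the parent can stop listening instead of idling out a fixed per-child slot budget.

```python
def listening_slots(packets_per_child, budget):
    """Listening cost at a parent collecting from several children.
    Without the extra bit, the parent must listen for the full
    per-child slot budget; with a 'more data follows' bit, listening
    ends at the packet flagged as the last one (idealised: a silent
    child costs nothing)."""
    without_bit = len(packets_per_child) * budget
    with_bit = sum(packets_per_child)
    return without_bit, with_bit
```

With three children holding 2, 0, and 3 packets and a budget of 5 slots each, the parent listens for 15 slots without the bit but only 5 with it.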
Recent research has shown that significant energy improvements can be achieved
in WSNs by exploiting a mobile sink for data collection via single-hop communications. A mobile sink moves within the transmission range of sensors to receive their
data and deposits it at the base station. The thesis, as a second problem,
focuses on the design issues of an energy efficient restricted tour construction for
sink mobility. We propose two different techniques. The first one is heuristic and
uses a criterion based on maximum coverage and minimum energy consumption
called the "max-ratio". Although its time complexity is polynomial, this heuristic
algorithm cannot always produce a good solution. As a result, we propose the
second algorithm. Although the time complexity of the second algorithm is
pseudo-polynomial, it finds the optimal solution if one exists. For each algorithm
mentioned, two scenarios are taken into account with regard to the transmission. In the
first scenario, one assumes that there is no upper bound on the transmission range
while in the second setting the nodes can adjust their transmission range between 0
and the maximum range. The algorithms have been implemented and simulated in
MATLAB.

Title: Development of Design Heuristics for Digital Educational Games for School Children of 7 to 11 Years Old
Handle: http://hdl.handle.net/2381/37526 (2016-05-12)
Authors: Khanana, Kornchulee
Abstract: To design a digital educational game (DEG) for children aged 7-11, it is necessary to know which game features are powerful for motivating them to play and learn. In the Pilot Study of my research project, playability heuristics of the GameFlow model were employed as an analytic tool. The heuristics, which were translated into a set of understandable statements for children, were useful for identifying preferable as well as less preferable game features. Based on the reviews of relevant theoretical frameworks from psychology, pedagogy and design, gaps of the GameFlow model were analysed. This led to the development of a set of eight design heuristics named DEG-7-11 v1.
The heuristics were then applied to guide the creation of two new DEGs: FoodGroups-A, following all eight heuristics, and FoodGroups-B, following only two of them. To verify the hypotheses that FoodGroups-A was more educationally effective and enjoyable than FoodGroups-B, the Main Study involving two methods was conducted. For the first method, 182 participating children were randomly assigned to play FoodGroups-A or FoodGroups-B on an individual basis. By comparing the results of pre-tests and post-tests, the educational effect of FoodGroups-A was found to be higher than that of FoodGroups-B. Similarly, based on the results of the validated questionnaire KidsGEQ and the child-friendly statements derived from the GameFlow model, the experiential value of FoodGroups-A was perceived to be higher than that of FoodGroups-B. For the second method, the participating children were asked to rate their agreement with a set of child-friendly statements converted from the heuristics of DEG-7-11 v1, and the children agreed with most of them. The method of producing a child-friendly version of design heuristics originally meant for professional users was shown to be an alternative useful evaluation approach.
Furthermore, Heuristic Evaluation was also employed to evaluate fifteen existing DEGs. The results implied that if game designers considered DEG-7-11 v1 in designing DEGs, the games could have a higher level of user acceptance. Finally, the wording of some DEG-7-11 v1 heuristics was modified to improve their understandability, resulting in DEG-7-11 v2.

Title: Efficient Embedded Software Migration towards Clusterized Distributed-Memory Architectures
Handle: http://hdl.handle.net/2381/37451 (2016-04-26)
Authors: Garibotti, Rafael; Butko, Anastasiia; Ost, Luciano; Gamatie, Abdoulaye; Sassatelli, Gilles; Adeniyi-Jones, Chris
Abstract: A large portion of existing multithreaded embedded software has been programmed according to symmetric shared-memory platforms where a monolithic memory block is shared by all cores. Such platforms accommodate popular parallel programming models such as POSIX threads and OpenMP. However, with the growing number of cores in modern manycore embedded architectures, they present a bottleneck related to their centralized memory accesses. This paper proposes a solution tailored for an efficient execution of applications defined with shared-memory programming models onto on-chip distributed-memory multicore architectures. It shows how performance, area and energy consumption are significantly improved thanks to the scalability of these architectures. This is illustrated in an open-source realistic design framework, including tools from ASIC to microkernel.

Title: Finitary Logics for Coalgebras with Branching
Handle: http://hdl.handle.net/2381/37190 (2016-04-07)
Authors: Kissig, Christian
Abstract: The purpose of this dissertation is to further previous work on coalgebras as infinite state-based
transition systems and their logical characterisation, with particular focus on infinite
regular behaviour and branching.
Finite trace semantics is well understood [DR95] for nondeterministic labelled transition
systems, and has recently [Jac04, HJS06] been generalised to a coalgebraic level
where monads act as branching types, for instance nondeterministic choice. Finite
trace semantics then arises through an inductive construction in the Kleisli-category of
the monad. We provide a more comprehensive definition of finite trace semantics, allowing
for finitary branching types in Chapter 5. In Chapter 6 we carry over the ideas behind
our definition of finite trace semantics to define infinite trace semantics.
Coalgebraic logics [Mos99] provide one approach to characterising states in coalgebras
up to bisimilarity. Coalgebraic logics are Boolean logics with the modality ∇. We
define the Boolean dual of ∇ in the negation-free fragment of finitary coalgebraic logics
in Chapter 7, showing that finitary coalgebraic logics are essentially negation-free. Our
proof is largely based on the previously established completeness of finitary coalgebraic
logics [KKV08].
Finite trace semantics induces the notion of finite trace equivalence. In Chapter 8 we
define coalgebraic logics for many relevant branching and transition types characterising
states of coalgebras with branching up to finite trace equivalence. Under further assumptions
we show that these logics are expressive.
Coalgebra automata allow us to state finitary properties over infinite structures essentially
by a fix-point style construction. We use the dualisation of ∇ from Chapter 7 to prove that coalgebra automata are closed under complementation in Chapter 10. This result
completes a Rabin style [Rab69] correspondence between finitary coalgebraic logics
and coalgebra automata for finitary transition types, begun in [Ven04, KV05].
The semantics of coalgebra automata is given in terms of parity graph games [GTW02].
In Chapter 9 we show how to structure parity graph games into rounds using the notion
of a player's power [vB02] and how to normalise the interaction pattern between the players
per round. From the latter we obtain the coinductive principle of game bisimulation.
Languages accepted by coalgebra automata are called regular. Regularity is commonly
[Sip96, HMU03] disproved using the pumping lemma for regular languages. We
define regular languages of coalgebras and prove a pumping lemma for these languages
in Chapter 11.

Title: Integrating user knowledge into design pattern detection
Handle: http://hdl.handle.net/2381/36232 (2016-01-08)
Authors: Alshira'H, Mohammad H.
Abstract: Design pattern detection is useful for a range of software comprehension and maintenance
tasks. Tools that rely on static or dynamic analysis alone can produce
inaccurate results, especially for patterns that rely on run-time information.
Some tools provide facilities for the developer to refine the results by adding their
own knowledge. Currently, however, the ability of tools to accommodate this knowledge
is very limited; it can only pertain to the detected patterns and cannot provide
additional knowledge about the source code, or about its behaviour. In this thesis,
we propose an approach to combine existing pattern detection techniques with
a structured feedback mechanism. This enables the developer to refine the detection
results by feeding in additional knowledge about pattern implementations and
software behaviour. The motivation is that a limited amount of user input can complement
the automated detection process, to produce results that are more accurate.
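The feedback mechanism might be pictured as follows (a hypothetical sketch of the general idea; the record fields and confidence scheme are assumptions, not the thesis's design): static analysis proposes candidate pattern instances with confidence scores, and the developer's knowledge confirms or rules out individual candidates.

```python
def refine(candidates, feedback):
    """Combine automatically detected pattern candidates with user
    knowledge. `feedback` maps (class, pattern) pairs to True
    (confirmed) or False (ruled out); unmentioned candidates are
    kept with their original confidence."""
    refined = []
    for cand in candidates:
        key = (cand["class"], cand["pattern"])
        verdict = feedback.get(key)
        if verdict is False:
            continue                          # ruled out by the developer
        if verdict is True:
            cand = {**cand, "confidence": 1.0}  # confirmed by the developer
        refined.append(cand)
    return refined
```

A small amount of feedback can thus both remove false positives and promote true positives that the analysis alone scored weakly.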
To evaluate the approach we applied it to a selection of openly available software
systems. The evaluation was carried out in two parts. First, an evaluation case study
was carried out to detect pattern instances in the selected systems with the help
of the user knowledge. Second, a user study of a broader range of expert users of
design patterns was conducted in order to investigate the impact of their knowledge
on the detection process, and to see whether it is realistic that the user can identify
useful knowledge for the detection process. The evaluation results indicate that
the proposed approach can yield a significant improvement in accuracy whilst
requiring a relatively small degree of user input from the developer. Moreover, the
results show that expert users can supplement the design pattern detection process
with useful feedback that can enhance the detection of design pattern instances in
the source code.

Title: Optimal control of two distributed parameter systems.
Handle: http://hdl.handle.net/2381/34933 (2015-11-19)
Authors: Williams, V (Vaughan)
Abstract: Part I. A heat conduction problem is outlined, where the state of the system under consideration is described by a linear parabolic partial differential equation in one space dimension, with bounded controlling boundary conditions. A cost functional is defined in terms of an integral involving some specified "target function". Integral expressions describing the evolution of the system under the influence of various initial and boundary conditions are given; Butkovskii's Maximum Principle is formally applied to modified forms of these expressions, to give equations determining the optimal control function. Pontryagin's Maximum Principle is applied to a discrete approximation to the distributed system, and a connection is demonstrated between the optimal control thus obtained and that for the distributed system. A further connection is demonstrated with the solution obtained after applying calculus of variations techniques to a notionally unconstrained version of the problem. Difficulties of numerical solution are discussed, and the need for further constraints is pointed out. Solutions to some numerical examples are given. Part II. Equations governing one-dimensional non-turbulent compressible fluid flow in a pipe are given, and the waterhammer problem is described. The method known as "valve stroking" is outlined for a linearized version of this problem. A necessary condition on the optimal solution to the linearized problem is demonstrated, and it is shown that valve stroking does not satisfy this condition, and produces neither a time-optimal nor a mass-flux-optimal solution. A calculus of variations method applied to the non-linear model is shown to produce a non-linear hyperbolic split boundary value problem. A numerical approach, based on iterative gradient methods, is described and some numerical results are presented.
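The Part I setting has, schematically, the following standard form (an illustrative formulation under assumed symbols, not transcribed from the thesis): a one-dimensional heat equation driven by a bounded boundary control, with a terminal-time cost measuring the distance to a target profile.

```latex
\begin{align*}
& u_t(x,t) = a^2\, u_{xx}(x,t), && 0 < x < 1,\; 0 < t \le T, \\
& u(x,0) = u_0(x), \qquad u(1,t) = v(t), && |v(t)| \le V, \\
& J[v] = \int_0^1 \bigl(u(x,T) - z(x)\bigr)^2 \, dx \;\to\; \min_v,
\end{align*}
```

where $z(x)$ is the specified target function and $v(t)$ the bounded controlling boundary condition.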
Computer programmes for the numerical methods described in Parts I and II are included as Appendices.

Title: Markov processes in adaptive control.
Handle: http://hdl.handle.net/2381/34932 (2015-11-19)
Authors: Longley, D.
Abstract: The self-adaptive control system continually seeks to improve the performance of the process by making certain parameter adjustments. This improvement is achieved by "hill-climbing" techniques which test the effect of parameter adjustments on the performance and then make computed adjustments towards optimum conditions. The application of these techniques is complicated by the effects of the process dynamics and "noise" in performance index measurements. This thesis is concerned with the study of "hill-climbing" strategies that will minimise the effect of performance index noise. Markov process techniques were applied to a sampled-data, unidimensional search over a parabolic curve and the effects of process dynamics were ignored. The result of incorporating a dead-zone filter in an optimising loop was analysed for a stepped-rectangular noise probability density function with a stationary and drifting optimum. A strategy which estimated the gradient by measuring the performance index difference over two successive values of the parameter (instead of perturbation techniques) was analysed but no conclusive results were obtained. Finally a supervisory loop, to adjust the gain of the optimising loop, was postulated and analysed. Digital and analogue computer studies were performed to compare the above strategies and to obtain an insight into the practical problems associated with the implementation of the optimising loop.

Title: The structure of binary molten salt mixtures: A neutron diffraction study.
Handle: http://hdl.handle.net/2381/34931 (2015-11-19)
Authors: Badyal, Yaspal Singh.
Abstract: Structural modification in a series of polyvalent metal chloride - alkali chloride binary molten salt mixtures has been investigated using the pulsed neutron diffraction technique. Structure factors have been measured for NiCl2- KCl, NiCl2-LiCl and ZnCl2-LiCl samples spanning the entire composition range. The key finding was that the degree of structural modification is dependent on the relative size and polarising power of the two cation species. The mixtures of NiCl2, and ZnCl2, with LiCl largely appear to be admixtures of the two pure salt structures, whereas adding KCl to NiCl2 results in a better ordered, more regularly tetrahedral local structure around the metal cation and enhancement of the first sharp diffraction peak (FSDP). A simple model involving charge ordering of discrete tetrahedral units by alkali counter-ions is proposed as an explanation for the enhanced intermediate range order. In order to identify some of the partial structure factor contributions to the enhanced FSDP, the scattering was measured for three isotopically-enriched NiCl2+2KCl samples. A complementary isotopic substitution experiment was performed on three ZnCl2+2KCl samples. The results generally confirm the findings of the composition study, with a strong similarity between the two molten salt systems also being evident. In addition, RMC modelling supports the proposed model for intermediate range order in the mixtures. Structure factors were also measured for AlCl3-LiCl and AlCl3-NaCl samples covering the entire composition range. Several features consistent with strong charge ordering of discrete tetrahedral units by alkali counter-ions were identified. 
In addition, RMC modelling of the data for pure AlCl3 strongly challenges the 'established' view of the structure and an alternative 'sparse network liquid' model is proposed which emphasises the similarity to ZnCl2.

Title: Genetic variation in the response of mice to xenobiotics, in vitro.
Handle: http://hdl.handle.net/2381/34930 (2015-11-19)
Authors: Arranz, Maria J.
Abstract: Adverse reactions to drugs and environmental chemicals are a serious problem with up to 30% of hospital patients experiencing such problems (Venning, 1983; Ludwig and Axelsen, 1983). There is evidence that many adverse reactions arise as a result of genetically controlled sensitivity (Festing, 1987). Large genetically determined differences in response to chemicals have also been recorded in laboratory animals. However, most toxicological screening involves a single strain and fails to detect genetically determined sensitivity. Should some animals show an adverse reaction, this is usually attributed to "biological variation". As the pedigree of such animals is not normally known at the time of use there is no way of showing whether these adverse reactions were inherited (Festing, 1975, 1979). The initial aim of this project was to develop a technique for studying genetic variation in sensitivity to treatment with drugs using in vitro screening methods. The techniques should not require hazardous or expensive chemicals and equipment, should require a small number of animals and should be reliable and easy to perform. Several end-points were studied, and a protocol for detecting genetic differences which included four end-points and two cell types was developed. In the second part of the project, the aim was to study genetic variation in sensitivity to Aspirin, Ethanol and Coumarin as model compounds, using the previously developed techniques in conjunction with suitable genetically-defined strains of mice. Two cell types (macrophages and hepatocytes) were studied and several end-points were used including neutral red uptake, total protein concentration, rate of phagocytosis and LDH activity in cells and supernatant. The study involved nine strains of mice. Although statistically significant differences among inbred strains were detected, in no experiment did strain distribution pattern suggest single-locus Mendelian control. 
There was no evidence that response to coumarin depended on the coumarin hydroxylase (Cyp2b) locus nor that response to alcohol depended on the alcohol dehydrogenase locus. It is concluded that further development would be necessary to develop these methods as a way of identifying genes associated with this type of genetic variation.

Title: Fast Data Processing in Hyper-Scale Systems
Handle: http://hdl.handle.net/2381/33565 (2015-11-16)
Authors: Tilly, Marcel
Abstract: The deluge of intelligent objects providing continuous access
to data and services on the one hand, and the demand of developers and
consumers to handle these data on the other, require us to think
about new communication paradigms and middleware.
Based on requirements collected from scenarios in connected cars,
social networks, and the factory of the future, this thesis develops
new concepts for fast data processing in hyper-scale systems. In
hyper-scale systems, such as the Internet of Things, one emerging
requirement is to process, procure, and provide information with
almost zero latency. This thesis introduces new concepts for a
middleware that enables fast communication by limiting information
flow with filtering concepts using event policy obligations, combined
with data processing techniques adopted from complex event processing.
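The near-source filtering idea can be illustrated with a minimal sketch (our own example in Python; the event shape and the `temperature_above` policy are hypothetical, not part of the thesis' middleware):

```python
def temperature_above(threshold):
    """Build a hypothetical filter policy that passes only hot readings."""
    def policy(event):
        return event["kind"] == "temperature" and event["value"] > threshold
    return policy

def filter_at_source(events, policy):
    """Apply the policy where the data is born, so only policy-relevant
    events travel further into the system."""
    for event in events:
        if policy(event):
            yield event

readings = [
    {"kind": "temperature", "value": 18.5},
    {"kind": "humidity", "value": 40.0},
    {"kind": "temperature", "value": 31.0},
]
forwarded = list(filter_at_source(readings, temperature_above(25.0)))
# Only the 31.0 temperature reading is forwarded upstream.
```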
Fast data processing has to deal with continuous streams of events,
providing a set of operators to manipulate, aggregate, and correlate
data. This processing logic needs to be distributed. Distribution
helps us to scale in the number of data sources (e.g. phones, cars,
sensors) on the one hand, and to parallelise processing in terms of
grouping and partitioning (e.g. by region) on the other.
In our solution, event policies are injected as close as possible to the
place where the data is born to optimise traffic. Filters, aggregations
and rules help to process the data accordingly. Finally, communication
paradigms or interaction patterns support mediation between
classical service-based request-response interaction and event-based
data exchange.
Taken together, this builds a middleware enabling fast data processing
for hyper-scale systems.2015-11-16T15:53:25ZHybridLF: a system for reasoning in higher-order abstract syntaxFurniss, Amy Elizabethhttp://hdl.handle.net/2381/333622015-10-23T02:01:09Z2015-10-22T15:25:23ZTitle: HybridLF: a system for reasoning in higher-order abstract syntax
Authors: Furniss, Amy Elizabeth
Abstract: In this thesis we describe two new systems for reasoning about deductive
systems: HybridLF and Canonical HybridLF.
HybridLF brings together the Hybrid approach (due to Ambler, Crole
and Momigliano [15]) to higher-order abstract syntax (HOAS) in Isabelle/HOL
with the logical framework LF, a dependently-typed system for proving theorems
about logical systems. Hybrid provides a version of HOAS in the form of
the lambda calculus, in which Isabelle functions are automatically converted
to a nameless de Bruijn representation. Hybrid allows untyped expressions to
be entered as human-readable functions, which are then translated into the
machine-friendly de Bruijn form. HybridLF uses and updates these techniques
for variable representation in the context of the dependent type theory
LF, providing a version of HOAS in the form of LF.
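The named-to-nameless translation that Hybrid automates can be sketched in a few lines (an illustrative Python sketch of ours, not the Isabelle/HOL implementation): each variable occurrence is replaced by the number of binders between it and its own lambda.

```python
def to_de_bruijn(term, env=()):
    """Convert ('var', x) | ('lam', x, body) | ('app', f, a) to nameless form."""
    if term[0] == "var":
        # The index counts the binders between the use and the binding site.
        return ("var", env.index(term[1]))
    if term[0] == "lam":
        return ("lam", to_de_bruijn(term[2], (term[1],) + env))
    return ("app", to_de_bruijn(term[1], env), to_de_bruijn(term[2], env))

# The human-readable \x. \y. x y becomes the machine-friendly \ \ 1 0.
term = ("lam", "x", ("lam", "y", ("app", ("var", "x"), ("var", "y"))))
nameless = to_de_bruijn(term)
```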
Canonical HybridLF unites the variable representation techniques of
Hybrid with Canonical LF, in which all terms are in canonical form and definitional
equality is reduced to syntactic equality. We extend the Hybrid approach
to HOAS to functions with multiple variables by introducing a family
of abstraction functions, and use the Isabelle option type to denote errors instead
of including an ERR element in the Canonical HybridLF expression
type.
In both systems we employ the meta-logic M2 to prove theorems about
deductive systems. M2 [28] is a first-order logic in which quantifiers range
over the objects and types generated by an LF signature (that encodes a
deductive system). As part of the implementation of M2 we explore higher-order
unification in LF, adapting existing approaches to work in our setting.2015-10-22T15:25:23ZA Calculus of Mobility and Communication for Ubiquitous ComputingGul, Nosheenhttp://hdl.handle.net/2381/328982015-08-06T02:01:47Z2015-08-05T14:17:23ZTitle: A Calculus of Mobility and Communication for Ubiquitous Computing
Authors: Gul, Nosheen
Abstract: Ubiquitous computing makes various computing devices available throughout the
physical setting. Ubiquitous computing devices are distributed and could be mobile,
and interactions among them are concurrent and often depend on the location of
the devices. Process calculi are formal models of concurrent and mobile systems.
The work in this thesis is inspired by the calculus of Mobile Ambients and other
process calculi such as Calculus of Communicating Systems which have proved to
be successful in the modelling of mobility, communication and structure of systems.
We start by developing operational semantics for the calculus of Mobile Ambients
and Push and Pull Ambient Calculus, and prove that the semantics are sound and
complete with respect to the corresponding underlying reduction semantics. This
thesis proposes a Calculus of Communication and Mobility, denoted by CMCPCA,
for the modelling of mobility, communication and context awareness in the setting
of ubiquitous computing. CMCPCA is an ambient calculus with the in and out
capabilities of Cardelli and Gordon, as well as the push and pull capabilities of Phillips
and Vigliotti. CMCPCA has a new form of global communication similar to that in
Milner’s CCS. We define a new notion of behavioural equivalence for our calculus
in terms of an observation predicate and action transitions. Thus, we define barbed
bisimulation and congruence, and capability barbed bisimulation and congruence.
We then prove that capability barbed congruence coincides with barbed congruence.
We also include in the calculus a new form of context awareness mechanism that
allows ambients to query their current location and surroundings. We then propose
reduction semantics and operational semantics for the context awareness primitives,
and show that the semantics coincide. Several case studies and a variety of small
examples show the expressiveness and usefulness of our calculus.2015-08-05T14:17:23ZGroups, formal language theory and decidabilityJones, Sam Anthony Markhttp://hdl.handle.net/2381/325202015-07-09T02:01:09Z2015-07-08T14:01:15ZTitle: Groups, formal language theory and decidability
Authors: Jones, Sam Anthony Mark
Abstract: The first four chapters provide an introduction, background information and a
summary of results from some of the relevant literature. In these chapters a proof
is provided if the author was unable to find either a proof or the result itself stated
in the literature.
Chapter 5 focuses on syntactic monoids of languages; it introduces some background
material from the literature and then proves some characterisations of
monoids based on properties that the full preimage of certain subsets satisfy when
considered as a formal language over the generating set.
In Chapter 6 we examine some natural properties of formal languages which are
necessary conditions for a formal language to be a word problem of a group.
We look at which subsets of these conditions are sufficient for a formal language
satisfying them to be a word problem.
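For a concrete instance of such a language, consider the free group on {a, b} (an illustrative sketch of ours, not taken from the thesis): writing A and B for the inverses of a and b, a word belongs to the word problem exactly when free cancellation reduces it to the empty word.

```python
def reduce_free(word):
    """Cancel adjacent inverse pairs such as aA, Aa, bB, Bb."""
    stack = []
    for letter in word:
        if stack and stack[-1] == letter.swapcase():
            stack.pop()              # the adjacent pair cancels to the identity
        else:
            stack.append(letter)
    return "".join(stack)

def in_word_problem(word):
    """A word represents the group identity iff it freely reduces to empty."""
    return reduce_free(word) == ""
```

For example, abBA reduces to the empty word, while the commutator abAB does not, so only the former lies in the word problem.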
Chapter 7 focuses on decision problems. We generalise a theorem of Hartmanis
and Hopcroft and use it to settle the decidability for various language classes of
the conditions from Chapter 6.
Chapter 8 contains a brief exposition of some related areas. We first characterise
the co-word problem for groups and then examine a way of constructing groups
by intersecting their word problems. We conclude this chapter by proving some
simple results about the context-free subset membership problem for groups.
Finally, Chapter 9 contains a brief discussion of possible directions in which one
could extend the work in this thesis. The results in Chapters 5, 6 and 7 are to be considered original unless stated
otherwise. Many of the results in Chapter 7 have been published in [24]. Many of
the results of Chapter 6 have been submitted for publication.2015-07-08T14:01:15ZNominal Lambda CalculusNebel, Frankhttp://hdl.handle.net/2381/313962015-01-09T02:01:53Z2015-01-08T15:07:38ZTitle: Nominal Lambda Calculus
Authors: Nebel, Frank
Abstract: Since their introduction, nominal techniques have been widely applied in computer
science to reason about syntax of formal systems involving name-binding operators.
The work in this thesis is in the area of “nominal” type theory, or more precisely the
study of “nominal” simple types.
We take Nominal Equational Logic (NEL), which augments equational logic with
freshness judgements, as our starting point to introduce the Nominal Lambda Calculus
(NLC), a typed lambda calculus that provides a simple form of name-dependent
function types. This is a key feature of NLC, which allows us to encode freshness in
a novel way.
We establish meta-theoretic properties of NLC and introduce a sound model theoretic
semantics. Further, we introduce NLC[A], an extension of NLC that captures
name abstraction and concretion, and provide pure NLC[A] with a strongly
normalising and confluent βη-reduction system.
A property that has not yet been studied for “nominal” typed lambda calculi is
completeness of βη-conversion for a nominal analogue of full set-theoretic hierarchies.
Aiming towards such a result, we analyse known proof techniques and identify various
issues. As an interesting precursor, we introduce full nominal hierarchies and
demonstrate that completeness holds for βη-conversion of the ordinary typed lambda
calculus.
The notion of FM-categories was developed by Ranald Clouston to demonstrate
that FM-categories correspond precisely to NEL-theories. We augment FM-categories
with equivariant exponentials and show that they soundly model NLC-theories. We
then outline why NLC is not complete for such categories, and discuss in detail an
approach towards extending NLC which yields a promising framework from which we
aim to develop a future (sound and complete) categorical semantics and a categorical
type theory correspondence.
Moreover, in pursuit of a categorical conservative extension result, we study (enriched/
internal) Yoneda isomorphisms for “nominal” categories and some form of
“nominal” gluing.2015-01-08T15:07:38ZCMMI-CM compliance checking of formal BPMN models using MaudeEl-Saber, Nissreen A. S.http://hdl.handle.net/2381/313902015-01-09T02:02:03Z2015-01-08T12:29:00ZTitle: CMMI-CM compliance checking of formal BPMN models using Maude
Authors: El-Saber, Nissreen A. S.
Abstract: From the perspective of business process improvement models, a business process
which is compliant with best practices and standards (e.g. CMMI) is necessary for defining
almost all types of contracts and government collaborations. In this thesis, we propose
a formal pre-appraisal approach for Capability Maturity Model Integration (CMMI)
compliance checking based on a Maude-based formalization of business processes in
Business Process Model and Notation (BPMN). The approach can be used to assess
the designed business process compliance with CMMI requirements as a step leading
to a full appraisal application. In particular, the BPMN model is mapped into Maude,
and the CMMI compliance requirements are mapped into Linear Temporal Logic (LTL);
then the Maude representation of the model is model checked against the LTL properties
using Maude’s LTL model checker.
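The flavour of such a compliance property can be shown on a single finite trace (a toy Python sketch with hypothetical event names; it is not Maude's LTL model checker, which explores all executions of the rewrite theory rather than one trace):

```python
def eventually_followed(trace, trigger, response):
    """Finite-trace reading of the LTL pattern G(trigger -> F response)."""
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1:]:
            return False        # a trigger with no later response: violation
    return True

# Hypothetical configuration-management traces of a process model.
compliant = ["start", "change_requested", "review", "change_approved", "end"]
violating = ["start", "change_requested", "review", "end"]
```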
On the process model side, BPMN models may include structural issues that hinder
their design. In this thesis, we propose a formal characterization and semantics specification
of well-formed BPMN processes using the formalization of rewriting logic (Maude)
with a focus on data-based decision gateways and data objects semantics. Our formal
specification adheres to the BPMN standards and enables model checking using Maude’s
LTL model checker. The proposed semantics is formally proved to be sound based on the
classical workflow model soundness definition. On the compliance requirements side,
CMMI configuration management process is used as a source of compliance requirements
which then are mapped through compliance patterns into LTL properties. Model
checking results of the Maude-based implementation are explained based on a compliance
grading scheme. Examples of CMMI configuration management processes are used to
illustrate the approach.2015-01-08T12:29:00ZManagement concerns in service-driven applicationsAlghamdi, Ahmed Musferhttp://hdl.handle.net/2381/301072014-12-16T02:17:29Z2014-12-15T10:36:12ZTitle: Management concerns in service-driven applications
Authors: Alghamdi, Ahmed Musfer
Abstract: With the abundance of functionally similar Web Services, the offered or agreed-on qualities are becoming decisive factors in attracting private as well as corporate customers to a given service, among all others. Nevertheless, the state of the art in handling qualities, in this emerging service paradigm, remains largely bound to the aspects of technology and their standards (e.g. response time, availability, throughput). However, current approaches still ignore capital domain-based business qualities and management concerns (e.g. customer profiles, business deadlines). The main objective of this thesis is to leverage the handling of quality and management issues in service-driven business applications toward the intuitive business level, supported by a precise and flexible conceptualisation. Thus, instead of addressing qualities using just rigid IT-SLA (service-level agreements) as followed by Web Services technology and standards, we propose to cope with more abstract, domain-dependent and adaptive qualities in an intuitive, yet conceptual, manner. The approach is centred on evolving business rules and policies for management, with a clean separation of functionalities as specific rules. At the conceptual level, we propose specialised architectural connectors called management laws that we also separate from coordination laws for functionality issues. We further propose a smooth and compliant mapping of the conceptualisation toward service technology, using existing rule-based standards.2014-12-15T10:36:12ZFlexibo : language and its application to static analysisZhou, Jianguohttp://hdl.handle.net/2381/301062014-12-16T02:17:28Z2014-12-15T10:36:12ZTitle: Flexibo : language and its application to static analysis
Authors: Zhou, Jianguo
Abstract: This thesis introduces a new object-based language FlexibO to support the prototype development paradigm and, more importantly, program static analysis. FlexibO offers extreme flexibility and hence enables developers to write programs that contain rich information for further analysis and optimization. The FlexibO interpreter's seamless integration with Java (including direct access to Java classes and methods and direct inheritance of Java classes) makes it a suitable tool for fast prototype software development. FlexibO's extreme flexibility allows developers to redefine the behavior of program evaluation by overriding its default evaluation method. This mechanism can be used to translate FlexibO to other efficient languages. In this thesis we design a translator in FlexibO to translate Bulk-Synchronous Parallel specifications (expressed in FlexibO) to executable C programs linked with BSPLib. Before translation, the tool first checks syntax and type, then statically analyzes potential communication conflicts, and finally generates C code. The translation process can accurately analyze primitive commands but requires approximation (using abstract interpretation) for more advanced commands such as loops. The appropriateness of the translator and the associated static analysis can be formally analyzed using the technique of normal form.2014-12-15T10:36:12ZAutomatic presentations of groups and semigroupsOliver, Grahamhttp://hdl.handle.net/2381/301052014-12-16T02:17:27Z2014-12-15T10:36:10ZTitle: Automatic presentations of groups and semigroups
Authors: Oliver, Graham
Abstract: Effectively deciding the satisfiability of logical sentences over structures is an area well-studied in the case of finite structures. There has been growing work towards considering this question for infinite structures. In particular the theory of automatic structures, considered here, investigates structures representable by finite automata. The closure properties of finite automata lead naturally to algorithms for deciding satisfiability for some logics. The use of finite automata to investigate infinite structures has been inspired by the interplay between the theory of finite automata and the theory of semigroups. This inspiration has come in particular from the theory of automatic groups and semigroups, which considers (semi)groups with regular sets of normal forms over their generators such that generator-composition is also regular. The work presented here is a contribution to the foundational problem for automatic structures: given a class of structures, classify those members that have an automatic presentation. The classes considered here are various interesting subclasses of the class of finitely generated semigroups, as well as the class of Cayley Graphs of groups. Although similar, the theories of automatic (semi)groups and automatic presentation differ in their construction. A classification for finitely generated groups allows a direct comparison of the theory of automatic presentations with the theory of automatic groups.2014-12-15T10:36:10ZOptimization problems in communication networksMihalak, Matushttp://hdl.handle.net/2381/301042014-12-16T02:17:26Z2014-12-15T10:36:09ZTitle: Optimization problems in communication networks
Authors: Mihalak, Matus
Abstract: We study four problems arising in the area of communication networks. The minimum-weight dominating set problem in unit disk graphs asks, for a given set D of weighted unit disks, to find a minimum-weight subset D' ⊆ D such that the disks in D' intersect all disks in D. The problem is NP-hard and we present the first constant-factor approximation algorithm. Applying our techniques to other geometric graph problems, we can obtain better (or new) approximation algorithms. The network discovery problem asks for a minimum number of queries that discover all edges and non-edges of an unknown network (graph). A query at node v discovers a certain portion of the network. We study two different query models and show various results concerning the complexity, approximability and lower bounds on competitive ratios of online algorithms. The OVSF-code assignment problem deals with assigning communication codes (nodes) from a complete binary tree to users. Users ask for codes of a certain depth and the codes have to be assigned such that (i) no assigned code is an ancestor of another assigned code and (ii) the number of (previously) assigned codes that have to be reassigned (in order to satisfy (i)) is minimized. We present hardness results and several algorithms (optimal, approximation, online and fixed-parameter tractable). The joint base station scheduling problem asks for an assignment of users to base stations (points in the plane) and for an optimal colouring of the resulting conflict graph: user u with its assigned base station b is in conflict with user v, if a disk with center at b, and u on its perimeter, contains v. We study the complexity, and present and analyse optimal, approximation and greedy algorithms for general and various special cases.2014-12-15T10:36:09ZCategories of containersAbbott, Michael Gordonhttp://hdl.handle.net/2381/301022014-12-16T02:17:23Z2014-12-15T10:36:09ZTitle: Categories of containers
Authors: Abbott, Michael Gordon
Abstract: This thesis develops a new approach to the theory of datatypes based on separating data and storage resulting in a class of datatype called a container. The extension of a container is a functor which can be regarded as a generalised polynomial functor in type variables. A representation theorem allows every natural transformation between container functors to be represented as a unique pair of morphisms in a category. Under suitable assumptions on the ambient category container functors are closed under composition, limits, coproducts and the construction of initial algebras and final coalgebras. These closure properties allow containers to provide a functorial semantics for an important class of polymorphic types including the strictly positive types. Since polymorphic functions between functorial polymorphic types correspond to natural transformations, every polymorphic function can be represented as a container morphism; this simplifies reasoning about such functions and provides a framework for developing concepts of generic programming. Intuitionistic well-founded trees or W-types are the initial algebras of container functors in one parameter; the construction of the initial algebra of a container in more than one parameter leads to the solution of a problem left incomplete by earlier work of Dybjer. We also find that containers provide a suitable framework to define the derivative of a functor as a kind of linear exponential. We show that the use of containers is essential for this approach. The theory is developed in the context of a fairly general category to allow for a wide choice of applications. We use the language of dependent type theory to develop the theory of containers in an arbitrary extensive locally cartesian closed category; much of the development in this thesis can also be generalised to display map categories.
We develop the appropriate internal language and its interpretation in a category with families.2014-12-15T10:36:09ZFormal languages and the irreducible word problem in groupsFonseca, Anahttp://hdl.handle.net/2381/301032014-12-16T02:17:23Z2014-12-15T10:36:09ZTitle: Formal languages and the irreducible word problem in groups
Authors: Fonseca, Ana
Abstract: There exist structural classifications of groups with a regular, one-counter or context-free word problem. Following on from this, the main object of the work presented here has been the irreducible word problem of a group, a notion introduced by Haring-Smith, who defined it as the subset of the word problem consisting of the non-empty words which have no non-empty proper subword equal to the identity. He proved that the groups with a finite irreducible word problem with respect to some group generating set are exactly the plain groups. We know that the class of groups with a context-free irreducible word problem is a proper subclass of the virtually free groups. We look at direct products of finitely generated free groups by finite groups and also at the plain groups and consider their irreducible word problems with respect to minimal group generating sets. We prove that, of all the direct products of the infinite cyclic group by a non-trivial finite group, only Z x Z2 and Z x Z3 have context-free irreducible word problem (and only with respect to a few group generating sets). We also exhibit a plain group that has context-free irreducible word problem with respect to every minimal group generating set. Looking at the direct products of finitely generated free groups by non-trivial finite groups, we have found that the only irreducible word problem that is one-counter is that of Z x Z2 with respect to the canonical group generating set. As for irreducible word problems lying in classes of languages above context-free, on one hand, we prove that having a recursive irreducible word problem is equivalent to having a recursive word problem.
On the other hand, we prove that, while there are groups such that the fact that their irreducible word problem is recursively enumerable implies that they are recursive, that is not always the case.2014-12-15T10:36:09ZAlternative approaches to optophonic mappingsCapp, Michael.http://hdl.handle.net/2381/301012014-12-16T02:17:22Z2014-12-15T10:36:08ZTitle: Alternative approaches to optophonic mappings
Authors: Capp, Michael.
Abstract: This thesis presents a number of modifications to a blind aid, known as the video optophone, which enables a blind user to more readily interpret the local environment for enhanced mobility and navigation. Versions of this form of blind aid are generally both difficult to use and interpret, and are therefore inadequate for safe mobility. The reason for this severe problem lies in the complexity and excessive bandwidth of the optophonic output after the conversion from scene-to-sound. The work herein describes a number of modifications that can be applied to the current optophonic process to make more efficient use of the limited bandwidth provided by the auditory system when converting scene images to sound. Various image processing and stereo techniques have been employed to artificially emulate the human visual system through the use of depth maps that successfully fade out the quantity of relatively unimportant image features, whilst emphasising the more significant regions such as nearby obstacles. A series of experiments was designed to test these various modifications to the optophonic mapping by studying important factors of mobility and subject response whilst going about everyday life. The devised system, labelled DeLIA for the Detection, Location, Identification, and Avoidance (or Action) of obstacles, provided a means for gathering statistical data on users' interpretation of the optophonic output. An analysis of this data demonstrated a significant improvement when using the stereo cartooning technique, developed as part of this work, over the more conventional plain image as an input to an optophonic mapping from scene-to-sound. Lastly, conclusions were drawn from the results, which indicated that the use of a stereo depth map as an input to a video optophone would improve its usefulness as an aid to general mobility.
For the purposes of detecting and determining text or similar detail, either a plain unmodified image or some form of edge (depth) image were found to produce the best results.2014-12-15T10:36:08ZMultimodal human-computer interaction for enhancing customers’ decision-making and experience on B2C e-commerce websitesAl Sokkar, Abdullah Ahmad Musahttp://hdl.handle.net/2381/293302014-12-10T02:01:46Z2014-12-09T11:49:14ZTitle: Multimodal human-computer interaction for enhancing customers’ decision-making and experience on B2C e-commerce websites
Authors: Al Sokkar, Abdullah Ahmad Musa
Abstract: The main aim of this thesis was to identify, complement and refine the factors that contribute to users’ intention to purchase, satisfaction and attitude toward using a particular B2C online environment, as well as the causal relationships between these factors. A systematic literature review on Information Systems (IS), Market Research, and User Experience (UX), which has informed the design and development of a pilot study, has been conducted. Results have led to the conception of an online shopping decision-making (OSDM) model called ‘Episodic UX Model on Decision-Making’ (EUX-DM). It has been developed by integrating the established Technology Acceptance Model (TAM) as well as the Information System Success Model (ISSM), emerging UX models, and Expectation-Confirmation Theory (ECT). Results from analysing 305 responses to the web-based questionnaire, aimed at evaluating EUX-DM, verified its validity. In addition, after investigating the users’ preferences for the possible modifications related to the use of visual avatars in a particular B2C e-Commerce website for information presentation, another research focus has been placed on identifying the real conversational functions and their related communicational behaviour in designing male and female visual avatars’ facial expressions and body gestures. Following this, four different types of information presentations have been developed to be used in a contrived B2C online shopping environment, namely: (i) 2D static graphical and textual information, (ii) non-expressive avatars, (iii) avatars with facial expressions, and (iv) avatars with facial expressions and body gestures. Consequently, these information presentations were empirically investigated through two experimental studies. 
The outcomes of these studies indicated that the gender of the avatar and participants were found to be insignificant factors for any of the measured qualities, and the use of visual avatars with animated facial expressions and body gestures positively influenced customers’ usage attitude, intention to purchase and satisfaction.2014-12-09T11:49:14ZOn the Synthesis of ChoreographiesLange, Julienhttp://hdl.handle.net/2381/293102014-12-05T02:01:48Z2014-12-04T12:10:48ZTitle: On the Synthesis of Choreographies
Authors: Lange, Julien
Abstract: The theories based on session types stand out as effective methodologies
to specify and verify properties of distributed systems. A key result in the area shows the
suitability of choreography languages and session types as a basis for a choreography-driven
methodology for distributed software development. The methodology it advocates
is as follows: a team of programmers designs a global view of the interactions to be
implemented (i.e., a choreography), then the choreography is projected onto each role.
Finally, each program implementing one or more roles in the choreography is validated
against its corresponding projection(s).
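The projection step can be sketched with a minimal model of our own (not the formalism used in the thesis): a choreography is a sequence of interactions, and each role sees only its own sends (!) and receives (?).

```python
def project(choreography, role):
    """Project a global choreography onto the local view of one role."""
    local = []
    for sender, receiver, label in choreography:
        if role == sender:
            local.append(("!", receiver, label))   # this role sends
        elif role == receiver:
            local.append(("?", sender, label))     # this role receives
    return local

# Hypothetical example: Buyer asks Seller for a quote; Seller answers.
g = [("Buyer", "Seller", "quote_request"), ("Seller", "Buyer", "quote")]
buyer_view = project(g, "Buyer")
```

The two local views are duals of each other: every send in one appears as the matching receive in the other, which is what validating a program against its projection exploits.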
This is an ideal methodology but it may not always be possible to design one set of
choreographies that will drive the overall development of a distributed system. Indeed,
software needs maintenance, specifications may evolve (sometimes also during the development),
and issues may arise during the implementation phase. Therefore, there is a
need for an alternative approach whereby it is possible to infer a choreography from local
behavioural specifications (i.e., session types).
We tackle the problem of synthesising choreographies from local behavioural specifications
by introducing a type system which assigns – if possible – a choreography to
a set of session types. We demonstrate the importance of obtaining a choreography from
local specifications through two applications. Firstly, we give three algorithms and a
methodology to help software designers refine a choreography into a global assertion,
i.e., a choreography decorated with logical formulae specifying senders’ obligations and
receivers’ requirements. Secondly, we introduce a formal model for distributed systems
where each participant advertises its requirements and obligations as behavioural contracts
(in the form of session types), and where multiparty sessions are started when a set
of contracts allows a choreography to be synthesised.2014-12-04T12:10:48ZA model for supporting electrical engineering with e-learningAkaslan, Dursunhttp://hdl.handle.net/2381/290662014-09-09T01:01:29Z2014-09-08T14:27:39ZTitle: A model for supporting electrical engineering with e-learning
Authors: Akaslan, Dursun
Abstract: The overall goal of this research work was to develop and evaluate a model for supporting electrical engineering with e-learning. The model development was based on the survey data collected from representative teachers and students in Turkey whereas the model evaluation was conducted in the relevant HEIs in Turkey and the United Kingdom. To develop the model, the study investigated the attitudes of representative key stakeholders towards e-learning in Turkey by administering questionnaires and interviews to teachers and students. Then the responses of the teachers and students were compared. Based on the results, I proposed a model with a multi-dimensional approach to e-learning: (1) self-directed learning by studying an e-book, (2) self-assessment by solving e-exercises, (3) teacher-directed learning by attending classroom sessions as an integral part of blended learning, (4) teacher-assessment by solving e-exercises, (5) computer-directed learning by playing e-games and (6) computer-assessment by solving e-exercises.
To evaluate the applicability of the model in different conditions, a case-control study was conducted to determine whether the model had the intended effect on the participating students in HEIs in Turkey and the United Kingdom. As a result of the case-control study, the effects of e-learning, blended learning and traditional learning were verified. However, there were significant differences among the groups. The overall scores indicated that e-learning and blended learning were more effective than traditional learning. The results of our study indicated that the knowledge increase in e-learners seemed to be gradual because they tended to study daily by completing each activity on time. However, the traditional learners did not have the same pattern because they usually did not read the core text and did not solve e-exercises regularly before the classroom sessions. The results of the pre-placement, post-placement and middle tests also justified these assumptions.2014-09-08T14:27:39ZCompressed representation of XML documents with rapid navigationKharabsheh, Mohammad Kamel Ahmadhttp://hdl.handle.net/2381/290622014-09-06T01:01:27Z2014-09-05T15:30:29ZTitle: Compressed representation of XML documents with rapid navigation
Authors: Kharabsheh, Mohammad Kamel Ahmad
Abstract: XML (Extensible Markup Language) is a language used for the representation, storage, transmission and manipulation of data. Excessive memory consumption is an important challenge when representing XML documents in main memory. Document Object Model (DOM) APIs operate at a processing level that provides access to all parts of an XML document through navigation operations. Although DOM serves as a general-purpose tool that can be used in different applications, it has a high memory cost, particularly in naïve implementations. The space usage of DOM has been reduced significantly, while keeping fast processing speeds, by the use of succinct data structures in SiXDOM [1]. However, SiXDOM does not explore in depth XML data compression principles to improve in-memory space usage, even though such techniques have proven very effective in on-disk compression of XML documents. In this thesis we propose a new approach to representing XML documents in memory that uses XML data compression ideas to further reduce space usage while rapidly supporting operations of the kind supported by DOM.
Our approach is based upon a compression method [2] which represents an XML document as a directed acyclic graph (DAG) by sharing common subtrees. However, this approach does not permit the representation of attributes and textual data, and furthermore, a naive implementation of this idea gives very poor space usage relative to other space-efficient DOM implementations [1]. In order to realise the potential of this compression method as an in-memory representation, a number of optimisations are made by application of succinct data structures and variable-length encoding. Furthermore, a framework for supporting attribute and textual data nodes is introduced. Finally, we propose a novel approach to representing the textual data using Minimal Perfect Hashing (MPH).
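The core idea of sharing common subtrees can be sketched as hash-consing: identical subtrees are stored once and referred to by id. This is a minimal illustration of the principle under our own toy tree encoding, not the representation used in the thesis:

```python
def build_dag(tree, table):
    """Hash-cons a (label, children) tree into a DAG.

    Identical subtrees map to the same node id, so repeated
    fragments (common in XML documents) are stored only once.
    """
    label, children = tree
    key = (label, tuple(build_dag(c, table) for c in children))
    if key not in table:          # first occurrence: allocate a fresh id
        table[key] = len(table)
    return table[key]

# Two identical <item> subtrees under one root share a single DAG node.
item = ("item", [("name", []), ("price", [])])
root = ("catalogue", [item, item])
table = {}
build_dag(root, table)
print(len(table))  # 4 distinct DAG nodes instead of 7 tree nodes
```

The saving grows with the amount of repetition: a catalogue of thousands of structurally identical records collapses to a handful of shared nodes plus the textual data, which is why the thesis handles text separately.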
We have implemented our ideas in a software library called DAGDOM and performed an extensive experimental evaluation on a number of standard XML files. DAGDOM yields good results: we obtain significant space reductions over existing space-efficient DOM implementations (typically a 2 to 5 times space reduction), with very modest degradations in CPU time for navigational operations.
2014-09-05T15:30:29Z

http://hdl.handle.net/2381/28924
2014-06-17T01:01:58Z
2014-06-16T15:40:37Z
Title: Design-by-contract for software architectures
Authors: Poyias, Kyriakos
Abstract: We propose a design-by-contract (DbC) approach to specify and maintain architectural-level properties of software. Such properties are typically relevant in the design phase of the development cycle but may also impact the execution of systems. We give a formal framework for specifying software architectures (and their refinements) together with contracts that architectural configurations abide by. In our framework, we can specify that if an architecture guarantees a given pre-condition and a refinement rule satisfies a given contract, then the refined architecture will enjoy a given post-condition.
Methodologically, we take Architectural Design Rewriting (ADR) as our architectural description language. ADR is a rule-based formal framework for modelling (the evolution of) software architectures. We equip the reconfiguration rules of an ADR architecture with pre- and post-conditions expressed in a simple logic; a pre-condition constrains the applicability of a rule while a post-condition specifies the properties expected of the resulting graphs. We give an algorithm to compute the weakest pre-condition out of a rule and its post-condition. Furthermore, we propose a monitoring mechanism for recording the evolution of systems after certain computations, maintaining the history in a tree-like structure. The hierarchical nature of ADR allows us to take full advantage of the tree-like structure of the monitoring mechanism. We exploit this mechanism to formally define new rewriting mechanisms for ADR reconfiguration rules. Also, by monitoring the evolution we propose a way of identifying which part of a system has been affected when unexpected run-time behaviours emerge. Moreover, we propose a methodology that allows us to select which rules can be applied at the architectural level to reconfigure a system so as to regain its architectural style when it becomes compromised by unexpected run-time reconfigurations.
2014-06-16T15:40:37Z

http://hdl.handle.net/2381/28379
2013-11-09T02:02:34Z
2013-11-08T15:49:07Z
Title: Activity awareness in context-aware systems using software sensors
Authors: Pathan, Kamran Taj
Abstract: Context-aware systems, as components of ubiquitous or pervasive computing environments, sense the user's physical and virtual surroundings to adapt their behaviour accordingly. Tracking devices are common practice for capturing activity context. Service Oriented Architecture is based on collections of services that communicate with each other. The communication between users and services involves data that can be used to sense the activity context of the user. SOAP is a simple protocol that lets applications exchange information over the web. The Semantic Web provides standards to express the relationships between data, allowing machines to process data more intelligently.
This work proposes an approach for supporting context-aware activity sensing using software sensors. The main challenges in this work are specifying context information in a machine-processable form, developing a mechanism that can understand the data extracted from exchanges between services, utilising the data extracted from these services, and designing the architecture that supports sensing with software sensors. To address these issues, we provide a bridge that combines traditional web services with Semantic Web technologies, a knowledge structure that supports activity context information in context-aware environments, and mapping methods that extract data from the exchanges occurring between user and services and map it into a context model. The Direct Match, Synonym Match and Hierarchical Match methods are developed to place the extracted data into the knowledge structure.
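In spirit, the Direct Match and Synonym Match steps amount to a term-to-concept lookup. The following sketch uses invented concept and synonym names purely for illustration; it is not the thesis's actual mapping implementation:

```python
# Toy activity-context model: map terms found in a service exchange
# to concepts in a small knowledge structure (names are hypothetical).
CONCEPTS = {"flight", "hotel", "payment"}
SYNONYMS = {"airfare": "flight", "lodging": "hotel", "invoice": "payment"}

def map_term(term):
    """Try a Direct Match first, then a Synonym Match; None if unmapped."""
    term = term.lower()
    if term in CONCEPTS:          # Direct Match: the term is itself a concept
        return term
    return SYNONYMS.get(term)     # Synonym Match: the term aliases a concept

print(map_term("Hotel"))    # hotel  (direct)
print(map_term("airfare"))  # flight (synonym)
print(map_term("weather"))  # None   (no mapping)
```

A Hierarchical Match would add a third step, walking an ontology's subsumption links when neither lookup succeeds.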
This research opens the door to further development of automated and dynamic context-aware systems that exploit software sensors to sense the activity of the user in context-aware environments.
2013-11-08T15:49:07Z

http://hdl.handle.net/2381/28168
2013-09-13T01:01:53Z
2013-09-12T10:44:21Z
Title: Maintaining Transactional Integrity in Long Running Workflow Services: A Policy-Driven Framework
Authors: Ali, Manar Sayed Salamah
Abstract: Business-to-business integration is enhanced by workflow structures, which allow web services to be aggregated as interconnected business tasks to achieve a business outcome. Business processes naturally involve long running activities, and require transactional behaviour across them, addressed through general management, failure handling and compensation mechanisms. Loose coupling and the asynchronous nature of web services make a long running transaction (LRT) subject to a wider range of communication failures. Two basic requirements of transaction management models are reliability and consistency despite failures. This research presents a framework to provide autonomous handling of long running transactions, based on dependencies which are derived from the workflow. The framework presents a solution for forward recovery from errors, with compensations automatically applied to executing instances of workflows. The failure handling mechanism is based on the propagation of failures through a recursive hierarchical structure of transaction components (nodes and execution paths). The transaction management system (COMPMOD) is implemented as a reactive system controller, where system components change their states based on rules in response to the triggering of execution events. One practical feature of the model is the distinction between vital and non-vital components, allowing the process designer to express the criticality of activities in the workflow with respect to the business logic. A novel feature of this research is that the approach permits the workflow designer to specify additional compensation dependencies which will be enforced. A notable feature is the extensibility of the model, eased by its simple, declarative formalism. In our approach, the main concern is the provision of flexible and reliable underlying control-flow mechanisms supported by management policies.
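The vital/non-vital distinction can be illustrated with a small failure-propagation sketch over a component hierarchy. This is our own simplified model for intuition, not COMPMOD itself:

```python
class Node:
    """A transaction component: an activity or a composite of children."""
    def __init__(self, name, vital=True, children=()):
        self.name, self.vital, self.children = name, vital, list(children)
        self.failed = False

def propagate(node):
    """A component fails exactly when one of its *vital* children failed;
    non-vital failures are absorbed and trigger no escalation."""
    for child in node.children:
        if propagate(child) and child.vital:
            node.failed = True
    return node.failed

# A non-vital logging step fails: the workflow as a whole survives.
log = Node("log", vital=False); log.failed = True
pay = Node("pay", vital=True)
wf = Node("workflow", children=[log, pay])
print(propagate(wf))  # False
```

Marking `pay` as failed instead would propagate to the root, which is where a compensation pass over already-completed siblings would begin.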
The main idea behind incorporating policies is to manage the static structure of the workflow, as well as to handle arbitrary failure and compensation events. Thus, we introduce new techniques and architectures for enterprise integration solutions that accommodate the dynamics of business needs.
2013-09-12T10:44:21Z

http://hdl.handle.net/2381/28100
2013-08-30T01:02:02Z
2013-08-29T10:44:20Z
Title: Algorithms for Wireless Communication and Sensor Networks
Authors: Grant, Thomas
Abstract: In this thesis we address four algorithmic problems arising in communication and sensor networks.
The problem of scheduling wireless transmissions under SINR constraints has received much attention for unicast (one-to-one) transmissions. We consider the scheduling problem for multicast requests of one sender to many receivers, and present a logarithmic approximation algorithm and an online lower bound for arbitrary power assignments.
We study the problem of maximising the lifetime of a sensor network for fault-tolerant target coverage in a setting with composite events, where a composite event is the simultaneous occurrence of one or more atomic events. We are the first to study this variation of the problem from a theoretical perspective, where each event must be covered twice and there are several event types, and we present a (6 + ɛ)-approximation algorithm for the problem.
The online strongly connected dominating set problem concerns the construction of a dominating set that is strongly connected at all times, such that every vertex not in the dominating set has both an edge to and an edge from some vertex in the dominating set. We present a lower bound for deterministic online algorithms and an algorithm whose competitive ratio matches it.
The monotone barrier resilience problem is to determine how many sensors must be removed from a sensor network so that a monotone path exists between two points that does not intersect any sensor. We present a polynomial time algorithm that can determine the monotone barrier resilience for sensor networks of convex pseudo-disks of equal width.
2013-08-29T10:44:20Z

http://hdl.handle.net/2381/27971
2013-06-15T01:02:28Z
2013-06-14T09:45:27Z
Title: Ant Colony Optimization in Stationary and Dynamic Environments
Authors: Mavrovouniotis, Michalis
Abstract: The ant colony optimization (ACO) metaheuristic is inspired by the foraging behaviour of real ant colonies. As with other metaheuristics, ACO suffers from stagnation behaviour, where all ants construct the same solution from early stages. As a result, solution quality may degrade because the population may get trapped in local optima. In this thesis, we propose a novel approach, called the direct communication (DC) scheme, that helps ACO algorithms escape from a local optimum if they get trapped. The experimental results on two routing problems showed that the DC scheme is effective.
Research has usually focused on problems with static environments. In the last decade, however, there has been growing interest in applying nature-inspired metaheuristics to optimization problems with dynamic environments. Usually, dynamic optimization problems (DOPs) are addressed using evolutionary algorithms. In this thesis, we apply several novel ACO algorithms to two routing DOPs. The proposed ACO algorithms are integrated with immigrants schemes, in which immigrant ants are generated, either randomly or using knowledge from previous environment(s), and replace other ants in the current population. The experimental results showed that each proposed algorithm performs better in different dynamic cases, and that in general they outperform other peer ACO algorithms.
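The replacement step of a random-immigrants scheme can be sketched as follows. The fitness function, parameters and bit-list individuals here are our own toy choices, shown only to illustrate how immigrants displace the worst members of the population after an environmental change:

```python
import random

def inject_immigrants(population, fitness, n_immigrants, make_random):
    """Replace the worst n individuals with freshly generated immigrants,
    restoring diversity after the environment (fitness landscape) changes."""
    population.sort(key=fitness, reverse=True)        # best first
    population[-n_immigrants:] = [make_random() for _ in range(n_immigrants)]
    return population

random.seed(0)
# Toy setting: individuals are bit lists, fitness counts ones (OneMax).
make_random = lambda: [random.randint(0, 1) for _ in range(8)]
pop = [make_random() for _ in range(6)]
pop = inject_immigrants(pop, fitness=sum, n_immigrants=2, make_random=make_random)
print(len(pop))  # 6: the population size is preserved
```

Knowledge-based immigrants differ only in `make_random`: instead of uniform sampling, they mutate elite solutions remembered from previous environments.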
The existing benchmark generators for DOPs were developed for binary-encoded combinatorial problems. Since routing problems are usually permutation-encoded combinatorial problems, the dynamic environments used in the experiments are generated using a novel benchmark generator that converts a static problem instance to a dynamic one. This dynamic benchmark generator changes the fitness landscape of the problem, which causes the optimum to change on every environmental change. Furthermore, another benchmark generator is proposed in this thesis which moves the population to another location in the fitness landscape instead of modifying it. In this way, the optimum is known and one can see how close to the optimum an algorithm performs during the environmental changes.
2013-06-14T09:45:27Z

http://hdl.handle.net/2381/27808
2013-03-14T02:02:02Z
2013-03-13T11:14:56Z
Title: Broadcasting, Coverage, Energy Efficiency and Network Capacity in Wireless Networks
Authors: Henna, Shagufta
Abstract: Broadcasting, coverage, duty cycling, and capacity improvement are some of the important areas of interest in wireless networks. We address different problems related to broadcasting, duty cycling, and capacity improvement by sensing different network conditions and dynamically adapting to them. We propose two cross-layer broadcasting protocols, CASBA and CMAB, which dynamically adapt to the network conditions of congestion and mobility, and a broadcasting protocol, DASBA, which dynamically adapts to local node density. CASBA, CMAB, and DASBA improve reachability while minimizing the broadcast cost.
Duty cycling is an efficient mechanism to conserve energy in Wireless Sensor Networks (WSNs). Existing duty cycling techniques are unable to handle contention under dynamic traffic loads. Our proposed protocol, SA-RI-MAC, handles traffic contention much more efficiently than RI-MAC without sacrificing energy efficiency: it improves the delivery ratio with a significant reduction in latency and energy consumption.
Due to the limited battery life and fault tolerance issues posed by WSNs, efficient methods which ensure reliable coverage are highly desirable. One solution is to use disjoint set covers to cover the targets. We formulate a problem called MDC which addresses maximum coverage using two disjoint set covers S1 and S2. We prove that MDC is NP-complete and propose a √n-approximation algorithm for the MDC problem to cover n targets.
The use of multi-channel MAC protocols improves the capacity of wireless networks. Efficient multi-channel MAC protocols aim to utilize multiple channels effectively. Our proposed multi-channel MAC protocol, LCV-MMAC, effectively utilizes multiple channels by handling control channel saturation.
LCV-MMAC demonstrates significantly better throughput and fairness compared to DCA, MMAC, and AMCP in different network scenarios.
2013-03-13T11:14:56Z

http://hdl.handle.net/2381/27791
2013-03-13T02:02:33Z
2013-03-12T11:26:40Z
Title: Zooming out of Membrane Graph Transformation Systems
Authors: Bapodra, Mayur
Abstract: Living cells offer a rich variety of complex interactions and interesting structures to those wishing to model processes in systems biology. Of particular interest are the hierarchical nature of cell configurations, the compartmentalized reactions that can occur within individual cells, and the interaction between different levels of this hierarchy. Graph transformation systems are an intuitive and readable modelling paradigm that lends itself to representing such systems, since graphs can represent this rich structural information while graph rewriting rules concisely describe cell reactions. We formulate a generic graph transformation model that captures many functional properties of membrane (or P) systems, which take inspiration from such cell biological processes.
The main focus is then on abstraction of systems defined as instances of this metamodel, which we refer to as membrane graph transformation systems. Often, such systems are analysed by stochastic simulation, as this allows us to examine their overall, emergent behaviour, incorporating the effect that randomness may have on the results. Stochastic simulation can be resource intensive, limiting the applicability of many modelling languages to real biological systems. To improve performance and the scalability of modelling, we formalize a methodology that hides detail at the lowest level of the hierarchy but retains any important information as attributes. We then train the parameters of the abstract model using Bayesian networks so that the local, per-rule behaviour of the original, concrete model is preserved. Consequently, trends in global properties are preserved, such as the way in which they change with respect to the stochastic parameters of certain rules.
The methodology is demonstrated and evaluated against two case studies: a hypothetical immunological response and a peer-to-peer voice over IP network.
2013-03-12T11:26:40Z

http://hdl.handle.net/2381/27741
2013-03-14T15:01:19Z
2013-02-07T10:46:02Z
Title: Matching of Service Feature Diagrams based on Linear Logic
Authors: Naeem, Muhammad
Abstract: Managing variability is essential for an efficient implementation of end-user services that can be customised to individual needs. Apart from variations in selection and orchestration, third-party services may also have to be customisable. Although feature diagrams provide a high-level visual notation for variability, their use for specifying the variability of services raises the problem of matching a required feature diagram against a set of provided ones.
In particular, the established interpretation of feature diagrams in Propositional Logic is not expressive enough for matching in the context of service variability. The problem becomes more visible when a certain requirement is to be satisfied by a combination of multiple offers with overlapping features, a consequence of idempotence in Propositional Logic.
To address this problem, we propose service feature diagrams with semantics in Linear Logic. Linear Logic only allows the use of idempotence on the propositions with modalities. The permissible selection of features of a service feature diagram is called an instance diagram. We provide rules to obtain instance diagrams from the service feature diagram. The semantics of instance diagrams are also supported by Linear Logic.
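The contrast between the two interpretations can be seen with plain sets (idempotent, as in Propositional Logic) versus multisets (resource-counting, in the spirit of Linear Logic). The feature names below are invented for illustration:

```python
from collections import Counter

required = ["encryption", "encryption", "logging"]  # two distinct encryption features demanded
offered  = ["encryption", "logging"]                # but only one is on offer

# Propositional-style matching is idempotent: duplicates collapse,
# so the single offered copy appears to satisfy both demands.
print(set(required) <= set(offered))               # True

# Linear-style matching consumes one offered copy per demand,
# so the missing second copy is detected.
print(not (Counter(required) - Counter(offered)))  # False
```

This is exactly the over-counting of overlapping features that motivates moving the semantics from Propositional to Linear Logic.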
This thesis not only introduces service feature diagrams, but also formalises their matching as linear deduction. We propose two categories of rules to verify diagrammatically whether a collection of service descriptions satisfies the requirements. First, graphical matching rules are used to match the service feature diagrams of requestor and provider. Second, graphical merging rules are used to merge multiple feature diagrams contributing to satisfying the requestor's demands. We prove the correctness of these rules using the inference system of Linear Logic. We also provide an analysis of the graphical rules and show that their application is independent of the context in a service feature diagram, i.e., a graphical rule can be applied anywhere in a service feature diagram.
2013-02-07T10:46:02Z

http://hdl.handle.net/2381/27686
2013-03-14T15:02:27Z
2013-01-17T11:55:28Z
Title: Online Algorithms for Temperature Aware Job Scheduling Problems
Authors: Birks, Martin David
Abstract: Temperature is an important consideration when designing microprocessors. When exposed to high temperatures, component reliability can be reduced, while some components fail completely above certain temperatures. We consider the design and analysis of online algorithms, in particular algorithms that use knowledge of the amount of heat a job will generate. We consider algorithms with two main objectives. The first is maximising job throughput. We show upper and lower bounds for the case where jobs are unit length, both when jobs are weighted and unweighted. Many of these bounds are matching for all cooling factors in both the single and multiple machine cases.
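To make the setting concrete, here is a greedy single-machine sketch under a temperature model we adopt as an assumption for illustration (heat mixes in, then the machine cools by a multiplicative cooling factor); it is not an algorithm from the thesis:

```python
def schedule(jobs, cooling=2.0, threshold=1.0):
    """Greedy sketch: repeatedly run the hottest pending job that keeps the
    temperature within the threshold, idling for one step otherwise.

    Temperature model (an assumption of this sketch): running a job of
    heat h at temperature t gives (t + h) / cooling; idling gives
    t / cooling.  Heats must satisfy h <= cooling * threshold, or the
    job can never be scheduled.
    """
    temp, order = 0.0, []
    pending = sorted(jobs, reverse=True)         # hottest first
    while pending:
        for h in pending:
            if (temp + h) / cooling <= threshold:  # safe to run this job
                temp = (temp + h) / cooling
                pending.remove(h)
                order.append(h)
                break
        else:                                      # nothing fits: idle to cool
            temp /= cooling
    return order

print(schedule([1.8, 0.4]))  # [1.8, 0.4]
```

Running the hot job first is deliberate: at low temperature the machine has the most thermal headroom, and a cooler job still fits afterwards.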
We extend this to consider the single machine case where jobs are longer than unit length. When all jobs are of equal length we show matching bounds for the case without preemption. We also show that both models of preemption enable at most a slight reduction in the competitive ratio of algorithms. We then consider jobs of variable lengths. We analyse both the model of unweighted jobs and that of jobs with weights proportional to their length, showing bounds that match within constant factors in the non-preemptive and both preemptive models.
The second objective we consider is minimising flow time. For the objective of minimising the total flow time of a schedule, we show NP-hardness and inapproximability results for the offline case, and give an approximation algorithm for the case where all release times are equal. For the online case we give some negative results for the case where maximum job heats are bounded, as well as some results for a resource augmentation model, including a 1-competitive algorithm when the extra power for the online algorithm is high enough. Finally, we consider the objective of minimising the maximum flow time of any job in a schedule.
2013-01-17T11:55:28Z

http://hdl.handle.net/2381/27683
2013-03-14T15:02:45Z
2013-01-16T14:15:54Z
Title: Partner-Based Scheduling and Routing for Grid Workflows
Authors: Ashraf, Jawad
Abstract: The Grid has enabled the scientific community to make faster progress. Scientific experiments and data analyses once spanning several years can now be completed in a matter of hours. With the advancement of technology, the execution of scientific experiments, often represented as workflows, has become more demanding. Thus, there is a vital need for improvements in the scheduling of scientific workflows. Efficient execution of scientific workflows can be achieved by the timely allocation of the resources. Advance reservation can ensure the future availability of heterogeneous resources and help a scheduler to produce better schedules.
We propose a novel resource mapping technique for jobs of a Grid workflow in an advance reservation environment. Using a dynamic critical path based job selection method, our proposed technique considers the conditional mapping of parent and child jobs to the same resource, trying to minimise the communication duration between jobs and thus optimising the workflow completion time. The proposed method is analysed in both static and dynamic environments, and the simulation results show encouraging performance especially for workflows where the communication costs are higher than the computation costs.
We also propose a hybrid of multiple scheduling heuristics for the aforementioned problem, which chooses the best among multiple schedules computed by different algorithms. Simulation results show a significant improvement over well known scheduling heuristics in terms of workflow completion time.
Considering the advance reservation environment, a better schedule for the earliest completion of a workflow can be achieved if better paths can be found for the transfer of data files between jobs executed on different resources. We propose a K-shortest path based routing algorithm for finding good paths in the advance reservation environment. The results show that our proposed algorithm performs very well in terms of the earliest arrival time of the data.
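For intuition, K shortest simple paths can be enumerated with a generic best-first search over partial paths. This is a textbook-style sketch on an invented graph, not the thesis's advance-reservation-aware routing algorithm (which must additionally respect reserved time windows on links):

```python
import heapq

def k_shortest_paths(graph, src, dst, k):
    """Enumerate the k cheapest simple paths from src to dst.

    graph: {u: {v: weight}}.  Partial paths are expanded in order of
    cost; simple and correct, though exponential in the worst case.
    """
    heap = [(0, [src])]
    out = []
    while heap and len(out) < k:
        cost, path = heapq.heappop(heap)
        if path[-1] == dst:
            out.append((cost, path))
            continue
        for v, w in graph[path[-1]].items():
            if v not in path:                  # keep paths loop-free
                heapq.heappush(heap, (cost + w, path + [v]))
    return out

g = {"a": {"b": 1, "c": 4}, "b": {"c": 1, "d": 5}, "c": {"d": 1}, "d": {}}
for cost, path in k_shortest_paths(g, "a", "d", 2):
    print(cost, "->".join(path))  # 3 a->b->c->d, then 5 a->c->d
```

Having several candidate paths is what makes the routing useful here: if reservations block the cheapest path in the requested interval, the next-cheapest can be tried for an earlier data arrival time.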
Finally, we also study a modified partner-based scheduling heuristic for non-advance reservation environments. The results demonstrate that our proposed algorithm is a promising candidate for adoption in such Grid environments.
2013-01-16T14:15:54Z

http://hdl.handle.net/2381/27638
2013-03-14T15:01:37Z
2012-12-20T11:37:59Z
Title: Mining Sequential Patterns from Probabilistic Data
Authors: Muzammal, Muhammad
Abstract: Sequential Pattern Mining (SPM) is an important data mining problem. Although classical SPM assumes that the data to be mined is deterministic, it is now recognized that data obtained from a wide variety of sources is inherently noisy or uncertain, for example data from sensors or data collected from the web from different (potentially conflicting) sources. Probabilistic databases are a popular framework for modelling uncertainty, and several data mining and ranking problems have recently been studied in them. To the best of our knowledge, this is the first systematic study of mining sequential patterns from probabilistic databases.
In this work, we consider the kind of uncertainties that could arise in SPM. We propose four novel uncertainty models for SPM, namely tuple-level uncertainty, event-level uncertainty, source-level uncertainty and source-level uncertainty in deduplication, all of which fit into the probabilistic databases framework, and motivate them using potential real-life scenarios. We then define the interestingness predicate for two measures of interestingness, namely expected support and probabilistic frequentness. Next, we consider the computational complexity of evaluating the interestingness predicate, for various combinations of uncertainty models and interestingness measures, and show that different combinations have very different outcomes from a complexity theoretic viewpoint: whilst some cases are computationally tractable, we show other cases to be computationally intractable.
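For intuition about expected support, the probability that a pattern occurs in an uncertain event sequence can be computed by a small dynamic program. The independence model below is our own simplification for illustration (each event exists independently with a given probability); the thesis's source-level model and its DP are richer:

```python
def support_probability(pattern, events):
    """P(pattern occurs as a subsequence), where events is a list of
    (item, prob) pairs and each event exists independently with prob.

    dp[j] = probability that exactly j pattern symbols are matched so
    far under greedy leftmost matching (a subsequence exists iff the
    greedy match completes).
    """
    dp = [1.0] + [0.0] * len(pattern)
    for item, p in events:
        for j in range(len(pattern) - 1, -1, -1):  # descending: no double use
            if pattern[j] == item:
                dp[j + 1] += dp[j] * p     # event present: advance the match
                dp[j] *= 1 - p             # event absent: stay in state j
    return dp[len(pattern)]

# Pattern 'a' against two uncertain 'a' events: 1 - 0.5 * 0.5 = 0.75
print(support_probability("a", [("a", 0.5), ("a", 0.5)]))
```

Summing this probability over all sources in the database gives an expected support, by linearity of expectation; the whole computation is polynomial, in contrast to the intractable cases identified above.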
We give a dynamic programming algorithm to compute the source support probability and hence the expected support of a sequence in a source-level uncertain database, and propose optimizations to speed up the support computation task. Next, we propose probabilistic SPM algorithms based on the candidate generation and pattern growth frameworks for the source-level uncertainty model and the expected support measure. We implement these algorithms and give an empirical evaluation of the probabilistic SPM algorithms, showing their scalability under different parameter settings using both real and synthetic datasets. Finally, we demonstrate the effectiveness of the probabilistic SPM framework at extracting meaningful patterns in the presence of noise.
2012-12-20T11:37:59Z

http://hdl.handle.net/2381/27571
2013-03-14T15:02:00Z
2012-11-02T12:37:51Z
Title: Model-Based Testing Using Visual Contracts
Authors: Khan, Tamim Ahmed
Abstract: Web services expose only interface-level information, abstracting away implementation details. Testing is a time-consuming and resource-intensive activity, so it is important to minimize the set of test cases executed without compromising quality. Since white-box testing techniques and traditional structural coverage criteria require access to code, we require a model-based approach for web service testing. Testing relies on oracles to provide expected outcomes for test cases; if implemented manually, they depend on the testers' understanding of the functional requirements to decide the correct response of the system for every given test case. As a result, they are costly to create and maintain, and their quality depends on the correct interpretation of the requirements. Alternatively, if suitable specifications are available, oracles can be generated automatically at lower cost and with better quality. We propose to specify service operations as visual contracts with executable formal specifications as rules of a typed attributed graph transformation system, and associate operation signatures with these rules to provide test oracles.
We analyze dependencies and conflicts between visual contracts to develop a dependency graph. We propose model-based coverage criteria, considering this dependency graph, to assess the completeness of test suites. We also propose a mechanism to find out which of the potential dependencies and the conflicts were exercised by a given test case. While executing the tests, the model is simulated and coverage is recorded as well as measured against the criteria. The criteria are formalized and the dynamic detection of conflicts and dependencies is developed. This requires keeping track of occurrences and overlaps of pre- and post-conditions, their enabling and disabling, in successive model states, and interpreting these in terms of the static dependency graph.
Systems evolve over time and need retesting each time there is a change. To verify that the quality of the system is maintained, we use regression testing. Since regression test suites tend to be large, we isolate the affected part of the system and retest only that part, rerunning a selected subset of the total test suite. We analyse the test cases that were executed on both versions and propose a mechanism to transfer the coverage provided by these test cases. This information helps us to assess the completeness of the test suite on the new version without executing all of it.
2012-11-02T12:37:51Z

http://hdl.handle.net/2381/10999
2015-06-22T01:45:07Z
2012-09-06T13:17:42Z
Title: Search-Based and Goal-Oriented Refactoring Using Unfolding of Graph Transformation Systems
Authors: Qayum, Fawad
Abstract: To improve the automation and traceability of search-based refactoring, in this thesis we propose a formulation based on graph transformation, where graphs represent object-oriented software architectures at the class level and rules describe refactoring operations. This formalisation allows us to make use of partial order semantics and an associated analysis technique, the approximated unfolding of graph transformation systems. In the unfolding we can identify dependencies and conflicts between refactoring steps, leading to an implicit and therefore more scalable representation of the search space by sets of transformation steps equipped with relations of causality and conflict.
To implement search-based refactoring we make use of the approximated unfolding of graph transformation systems. An optimisation algorithm based on the Ant Colony paradigm is used to explore the search space, aiming to find a sequence of refactoring steps that leads to the best design at minimal cost.
Alternatively, we propose a more targeted approach, aiming at the removal of design flaws. The idea is that such sequences should be relevant to the removal of the identified flaw, i.e., contain only steps which directly or indirectly contribute to the desired goal.
2012-09-06T13:17:42Z

http://hdl.handle.net/2381/10997
2014-05-01T01:45:04Z
2012-09-06T12:41:18Z
Title: Genetic Algorithms for University Course Timetabling Problems
Authors: Naseem Jat, Sadaf
Abstract: The university course timetabling problem is a difficult optimisation problem due to its highly constrained nature. Finding an optimal, or even a high-quality, timetable is a challenging task, especially when resources (e.g., rooms and time slots) are limited. Many approaches to this problem have been studied in the literature. In this thesis, we investigate genetic algorithms because they have been used successfully for a wide range of real-world problems; for university course timetabling problems, however, traditional genetic algorithms are not usually considered efficient solvers.
Several ideas are introduced in this thesis to improve the general performance of genetic algorithms on university course timetabling problems. First, local search strategies are introduced into the traditional genetic algorithm to enhance its performance. This differs from many works in the literature in that the local search operates on the time slots of the timetable rather than directly on events.
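A slot-oriented local search of this kind can be sketched as follows. The penalty function (events in late slots are penalised) and the slot indices are toy stand-ins for the thesis's real constraints, used only to show moves operating on whole time slots.

```python
# Toy soft constraint: every event scheduled in a "late" slot (index >= 3)
# costs one penalty point.
def penalty(timetable):
    return sum(len(events) for slot, events in timetable.items() if slot >= 3)

def slot_local_search(timetable):
    """Swap the contents of whole time slots while this lowers the penalty.

    The move operates on slots, not on individual events: the entire event
    lists of two slots are exchanged, and the swap is reverted if it does
    not strictly improve the penalty.
    """
    improved = True
    while improved:
        improved = False
        for a in timetable:
            for b in timetable:
                if a < b:
                    before = penalty(timetable)
                    timetable[a], timetable[b] = timetable[b], timetable[a]
                    if penalty(timetable) >= before:   # revert if no gain
                        timetable[a], timetable[b] = timetable[b], timetable[a]
                    else:
                        improved = True
    return timetable
```

For example, events sitting in late slots migrate (as whole slot contents) into empty early slots until no swap improves the penalty.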
A guided search approach is also introduced into genetic algorithms to inject high-quality individuals into the population. In the guided search technique, the best parts of selected individuals from the current population are stored in an extra memory (or data structure) and re-used to guide the generation of new individuals for subsequent populations.
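A minimal sketch of such a guided-search memory is given below. The event and slot names, the fitness function, and the reuse probability are all hypothetical; the point is only the store-and-reuse mechanism.

```python
import random

def update_memory(population, fitness, memory, elite=2):
    """Store the event->slot assignments of the `elite` best individuals.

    Individuals are dicts mapping events to time slots; lower fitness is
    assumed to be better (a toy convention for this sketch).
    """
    best = sorted(population, key=fitness)[:elite]
    for individual in best:
        for event, slot in individual.items():
            memory.setdefault(event, []).append(slot)

def guided_individual(events, slots, memory, rng, p_reuse=0.7):
    """Generate a new individual, reusing a remembered slot with
    probability p_reuse and falling back to a random slot otherwise."""
    return {e: rng.choice(memory[e]) if e in memory and rng.random() < p_reuse
               else rng.choice(slots)
            for e in events}
```

In a full algorithm, `update_memory` would run once per generation and `guided_individual` would replace part of the offspring-generation step.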
In addition to solving university course timetabling problems as a single-objective optimisation problem, we also tackle the multi-objective university course timetabling problem. We integrate the proposed approaches into multi-objective evolutionary algorithms, yielding a framework based on local search and guided search strategies for the multi-objective university course timetabling problem. This framework is then instantiated into a set of concrete algorithms, each built on a multi-objective evolutionary algorithm typically used for general multi-objective optimisation problems.
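The multi-objective formulation rests on Pareto dominance between objective vectors; a minimal sketch of that test and of the resulting non-dominated front (all objectives to be minimised, values invented):

```python
def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Keep exactly the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```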
Computational results based on a set of well-known university course timetabling benchmark instances show the effectiveness of the proposed approaches for both single- and multi-objective university course timetabling problems.
2012-09-06T12:41:18Z

http://hdl.handle.net/2381/10994
2015-06-22T01:45:07Z
2012-09-06T11:47:41Z
Title: Structural Domain Modelling for Policy Language Specialization with Conflict Analysis
Authors: Khowaja, Zohra Ahsan
Abstract: Policies are descriptive and provide information that can be used to modify the behaviour of a system without recompilation and redeployment. They are usually written in a policy definition language that allows end users to specify their requirements, preferences and constraints. Policies are used in many software application areas: network management, telecommunications, security and access control are typical examples. Ponder, KAoS, Rein, XACML and WSPL are examples of policy definition languages. These languages are usually targeted at a specific domain, hence the plethora of languages. APPEL (the Adaptable Programmable Policy Environment Language) [69] has followed a different approach: it is a generic policy description language conceived with a clear separation between the core language and its specialization for concrete domains.
So far, there has not been any formal method for the extension and domain specialization of the APPEL policy language.
Policy conflicts can occur when a new or modified policy is deployed on a policy server, leading to unspecified behaviour. To make policy-based systems conflict-free, it is necessary to detect and resolve conflicts before they occur; otherwise the intended behaviour of a policy cannot be guaranteed.
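As an illustration, pairwise conflict detection over deployed policy actions can be sketched with a domain-supplied table of mutually exclusive actions. The action names and the table below are invented toy data, not part of APPEL or the thesis's actual domain models.

```python
# Toy domain knowledge: pairs of actions the domain declares incompatible.
CONFLICTING = {
    ("forward_call", "reject_call"),
    ("unmute", "mute"),
}

def conflicts(actions):
    """Return every pair of deployed actions the domain marks as conflicting."""
    found = []
    for i, a in enumerate(actions):
        for b in actions[i + 1:]:
            if (a, b) in CONFLICTING or (b, a) in CONFLICTING:
                found.append((a, b))
    return found
```

A real analysis, as the abstract below explains, derives such incompatibilities from the domain model rather than from a hand-written table.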
We introduce a structural modelling approach to specialize the policy language for different domains, implemented in the VIATRA2 graph transformation tool. This approach is applied to APPEL. Our method for conflict analysis builds on the same modelling methodology: as conflicts depend on domain knowledge, it is sensible to use this knowledge for conflict analysis. The identified conflicting actions are then encoded in the ALLOY model checker, which confirms the existence of actual and potential conflicts.
2012-09-06T11:47:41Z

http://hdl.handle.net/2381/10942
2013-03-14T15:03:59Z
2012-08-16T10:24:00Z
Title: VOML: Virtual Organization Modelling Language
Authors: Rajper, Noor Jehan
Abstract: Virtual organizations (VOs) and their breeding environments are an emerging approach for developing systems as a consortium of autonomous entities formed to share costs and resources, respond better to opportunities, achieve shorter time-to-market and exploit fast-changing market opportunities. VOs cater for these demands by incorporating reconfigurations, making VOs highly resilient and agile by design. Reconfiguration of systems is an active research area, and many policy and specification languages have been dedicated to this purpose. However, all these approaches treat reconfiguration of a system as somewhat isolated from its business and operational model; it is usually assumed that the latter two remain unaffected by such reconfigurations, and reconfiguration is usually limited to dynamic binding of the components the system consists of. The demands of VO reconfiguration go beyond dynamic binding, to the point where it becomes crucial to keep changing the organizational structure (process model) of the system as well, which in turn leads to changes of the operational/functional model. This continuous reconfiguration of the operational model emphasizes the need for a modelling language that allows specification and validation of such systems.
This thesis approaches the problem of formal specification of VOs through the Virtual Organization Modelling Language (VOML) framework. The core of this framework is formed by three languages, each capturing a specific aspect. The first, the Virtual Organization Structural modelling language (VO-S), focuses on structural aspects and on many of the characteristics particular to VOs, such as the relationships between members expressed in domain terminology. The second, the VO Reconfiguration language (VO-R for short), permits different reconfigurations of the structure of the VO; this language is an extension of APPEL for the domain of VOs. The third, the VO Operational modelling language (VO-O), describes the operational model of a VO in more detail. This language is an adaptation and extension of the Sensoria Reference Modelling Language for service-oriented architecture (SRML).
Our framework models VOs using the VO-S and the VO-R, which are at a high level of abstraction and independent of a specific computational model. Mapping rules provide guidelines to generate operational models, thus ensuring that the two models conform to each other.
The usability and applicability of VOML is validated through two case studies, one of which offers travel itineraries as a VO service and serves as a running example. The other case study is an adaptation of a case study on developing a chemical plant from [14].
2012-08-16T10:24:00Z

http://hdl.handle.net/2381/10837
2013-03-14T15:00:39Z
2012-06-12T15:23:31Z
Title: Approximating Node-Weighted Steiner Subgraphs for Multicast Communication in Wireless Networks
Authors: Shahnaz, Ambreen
Abstract: We are motivated by the problem of computing multicast routing structures in wireless ad-hoc networks modelled by special classes of graphs, including unit disk graphs, quasi-unit disk graphs and (λ + 1)-claw-free graphs. Multicast communication can be established by a tree known as a Steiner tree. Wireless ad-hoc networks must operate with limited resources; therefore, the suitability of nodes for inclusion in a Steiner tree can vary widely between nodes. We model this by assuming that each node of the network is assigned a weight that represents the cost of including it in the Steiner tree. Our goal is to compute a Steiner tree with minimum total node weight. However, in scenarios where nodes and links are unreliable, a tree has the drawback that it can be disconnected by the failure of even a single link or node. Therefore, we also consider various fault-tolerant routing structures: 2-edge-connected Steiner subgraphs, k-edge-connected Steiner subgraphs, 2-vertex-connected Steiner subgraphs, and 2-edge-connected group Steiner subgraphs. The problems we consider are NP-hard, so we are interested in algorithms that compute provably good approximate solutions in polynomial time. We present a generalization of Steiner subgraph problems, referred to as the node-weighted δ-Steiner subgraph problem, where δ represents the connectivity requirements. We present an algorithm with approximation ratio 0.5dρ for the node-weighted δ-Steiner subgraph problem, where d is the bounded maximum degree of the solution subgraph and ρ is the approximation ratio of the edge-weighted version of the δ-Steiner subgraph problem. We then show how to construct solution subgraphs of bounded maximum degree d in several graph classes for our problem variants.
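One natural reading of the 0.5dρ ratio is as a weight-transfer argument: if each node weight is split evenly onto its incident edges, a subgraph whose maximum degree is d counts each node weight at most d/2 times, and an edge-weighted ρ-approximation can then run on the transformed graph. A hedged sketch of just that transformation (not the thesis's actual algorithm) is:

```python
def node_to_edge_weights(edges, node_weight):
    """Give edge (u, v) the weight (w(u) + w(v)) / 2.

    For any subgraph of maximum degree d, the sum of these edge weights
    equals sum over nodes of deg(v) * w(v) / 2, i.e. it overcounts each
    node weight by a factor of at most d / 2.
    """
    return {(u, v): (node_weight[u] + node_weight[v]) / 2 for u, v in edges}
```

For example, on a path a-b-c the centre node's weight is counted once per incident edge, matching the degree-based overcounting factor.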
As a result, we obtain algorithms for the problems we consider, on graph classes that admit subgraphs of small degree, whose approximation ratios are better than the best known ratios for the same problems on general graphs.
2012-06-12T15:23:31Z

http://hdl.handle.net/2381/10831
2016-10-11T07:04:22Z
2012-06-12T12:25:35Z
Title: Modular Performance Modelling of Mobile Applications using Graph Transformation
Authors: Arijo, Niaz Hussain
Abstract: Graph transformation provides a visual and formal notation for modelling systems of a dynamic nature. We use graph transformation for modelling mobility and performance, and provide a methodology for modular system modelling to handle the scalability issues of large systems. Our methodology distinguishes three approaches to system modelling: monolithic, top-down and bottom-up. In the monolithic approach, a system is modelled as a global, whole-world system. In the top-down approach, a global system is projected onto views based on their local type graphs. In the bottom-up approach, a system is modelled as a set of subsystems with shared interfaces, and the whole system is composed from its subsystems.
We generate labelled transition systems (LTSs) from graph transformation systems/views in GROOVE and transform them into Continuous-Time Markov Chains (CTMCs). These CTMCs are further translated into the Performance Evaluation Process Algebra (PEPA) or PRISM. In PEPA and PRISM, subsystems are synchronized over shared labels to compose a global system. We demonstrate that the composed model is bisimilar to the original global model. In addition, stochastic analysis of the models is carried out in PEPA or PRISM for performance checking.
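As an illustration of the LTS-to-CTMC step, rate-labelled transitions can be assembled into a CTMC generator matrix, where off-diagonal entries are transition rates and each row sums to zero. The state names and rates below are invented; a real translation would read them from the GROOVE output.

```python
def generator_matrix(states, transitions):
    """Build the CTMC generator matrix Q from (source, target, rate) triples.

    Q[i][j] (i != j) accumulates the rate from state i to state j, and each
    diagonal entry is set so the row sums to zero, as a generator requires.
    """
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    q = [[0.0] * n for _ in range(n)]
    for src, dst, rate in transitions:
        q[index[src]][index[dst]] += rate
        q[index[src]][index[src]] -= rate
    return q
```

Such a matrix is what PRISM's CTMC engine (or PEPA's underlying Markov chain) analyses for steady-state and transient measures.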
We provide tool support for generating views from a global graph transformation system in GROOVE, for transforming the LTSs generated from graph transformation systems into CTMCs, and for translating CTMCs into PEPA or PRISM models.
2012-06-12T12:25:35Z