Search Results
-
- Resource Type:
- Conference Proceeding
- Creator:
- Barbeau, Michel, Kranakis, Evangelos, and Garcia-Alfaro, Joaquin
- Abstract:
- The design and implementation of security threat mitigation mechanisms in RFID systems, especially in low-cost RFID tags, are gaining great attention in both industry and academia. One main focus of research interest is authentication and privacy techniques to prevent attacks targeting the insecure wireless channel of these systems. Cryptography is a key tool to address these threats. Nevertheless, strong hardware constraints, such as production costs, power consumption, response time, and regulatory compliance, make the use of traditional cryptography in these systems a very challenging problem. The use of low-overhead procedures becomes the main approach to solving these problems where traditional cryptography does not fit. Recent results and trends, with an emphasis on lightweight techniques for addressing critical threats against low-cost RFID systems, are surveyed.
- Date Created:
- 2010-05-03
-
- Resource Type:
- Conference Proceeding
- Creator:
- Czyzowicz, Jurek, Opatrny, Jaroslav, Kranakis, Evangelos, Narayanan, Lata, Krizanc, Danny, Stacho, Ladislav, Urrutia, Jorge, Yazdani, Mohammadreza, and Lambadaris, Ioannis
- Abstract:
- A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of a sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I) and the sensing ranges of the sensors, we are interested in finding final positions of sensors which establish a barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP-complete, even to approximate to within a constant factor, when the sensors may have different sensing ranges. When the sensors have an identical sensing range, we give several efficient algorithms to calculate the final destinations so that the sensors either establish a barrier coverage or maximize the coverage of the segment if complete coverage is not feasible, while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.
- Date Created:
- 2010-12-13
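The identical-sensing-range case described in the entry above can be illustrated with a short sketch. The code below is only a feasibility illustration under stated assumptions (sensors of common range r, segment [0, L], movement along the line): it produces an order-preserving placement that covers the segment whenever coverage is possible, but it does not attempt the movement minimization that the paper's algorithms achieve, and the function name is hypothetical.

```python
def cover_segment(positions, r, L):
    """Place n sensors of identical sensing range r so that [0, L] is covered.

    Returns final positions, or None when 2*r*n < L (coverage is infeasible).
    The i-th sensor in sorted order is sent to the centre of the i-th slot of
    length 2r (clamped inside the segment).  This guarantees coverage but does
    NOT minimise the total distance travelled, which is the optimisation
    problem studied in the paper.
    """
    n = len(positions)
    if 2 * r * n < L:
        return None
    order = sorted(range(n), key=lambda i: positions[i])
    final = [0.0] * n
    for slot, i in enumerate(order):
        final[i] = min((2 * slot + 1) * r, L - r)  # clamp the last slots inside [0, L]
    return final

if __name__ == "__main__":
    print(cover_segment([-1.0, 3.5, 9.0, 4.2], r=2.0, L=10.0))
```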
-
- Resource Type:
- Conference Proceeding
- Creator:
- Cervera, Gimer, Barbeau, Michel, Garcia-Alfaro, Joaquin, and Kranakis, Evangelos
- Abstract:
- The Hierarchical Optimized Link State Routing (HOLSR) protocol enhances the scalability and heterogeneity of traditional OLSR-based Mobile Ad-Hoc Networks (MANETs). It organizes the network in logical levels and nodes in clusters. In every cluster, it implements the mechanisms and algorithms of the original OLSR to generate and to distribute control traffic information. However, the HOLSR protocol was designed with no security in mind. Indeed, it both inherits security threats from OLSR and adds new ones. For instance, the existence of misbehaving nodes can highly affect important HOLSR operations, such as cluster formation. Cluster IDentification (CID) messages are implemented to organize a HOLSR network in clusters. In every message, the hop count field indicates to the receiver the distance in hops to the originator. An attacker may maliciously alter the hop count field. As a consequence, a receiver node may join a cluster head farther away than it appears, and the scalability properties of a HOLSR network are affected by an unbalanced distribution of nodes per cluster. We present a solution based on the use of hash chains to protect mutable fields in CID messages. As a consequence, when a misbehaving node alters the hop count field in a CID message, the receiver nodes are able to detect and discard the invalid message.
- Date Created:
- 2012-01-27
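The hop-count protection sketched in the abstract above follows the general hash-chain pattern, illustrated below. This is a generic construction, not the paper's exact CID message format: the originator commits to an anchor H^M(seed), every forwarder applies the hash once as it increments the hop count, and a receiver checks that hashing the carried value the remaining number of times reproduces the anchor. A hop count can therefore be increased, but decreasing it would require inverting the hash.

```python
import hashlib, os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def originate(max_hops: int):
    """Originator: pick a random seed and publish the anchor H^M(seed)."""
    seed = os.urandom(32)
    anchor = seed
    for _ in range(max_hops):
        anchor = H(anchor)
    # The message carries (hop_count, chain value); the anchor is distributed authentically.
    return {"hop_count": 0, "value": seed, "max_hops": max_hops}, anchor

def forward(msg):
    """Each relay hashes the chain value exactly once as it increments the hop count."""
    return {"hop_count": msg["hop_count"] + 1,
            "value": H(msg["value"]),
            "max_hops": msg["max_hops"]}

def verify(msg, anchor) -> bool:
    """Receiver: hashing the value (max_hops - hop_count) more times must hit the anchor.
    Decreasing hop_count would require a hash preimage, so it is detected."""
    v = msg["value"]
    for _ in range(msg["max_hops"] - msg["hop_count"]):
        v = H(v)
    return v == anchor

if __name__ == "__main__":
    msg, anchor = originate(max_hops=8)
    msg = forward(forward(msg))          # two honest hops
    print(verify(msg, anchor))           # True
    forged = dict(msg, hop_count=1)      # attacker claims to be one hop closer
    print(verify(forged, anchor))        # False
```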
-
- Resource Type:
- Conference Proceeding
- Creator:
- Van Walderveen, Freek, Davoodi, Pooya, and Smid, Michiel
- Abstract:
- Given a set of n points in the plane, range diameter queries ask for the furthest pair of points in a given axis-parallel rectangular range. We provide evidence for the hardness of designing space-efficient data structures that support range diameter queries by giving a reduction from the set intersection problem. The difficulty of the latter problem is widely acknowledged, and it is conjectured to require nearly quadratic space in order to obtain constant query time, which is matched by known data structures for both problems, up to polylogarithmic factors. We strengthen the evidence by giving a lower bound for an important subproblem arising in solutions to the range diameter problem: computing the diameter of two convex polygons that are separated by a vertical line and are preprocessed independently requires almost linear time in the number of vertices of the smaller polygon, no matter how much space is used. We also show that range diameter queries can be answered much more efficiently for the case of points in convex position by describing a data structure of size O(n log n) that supports queries in O(log n) time.
- Date Created:
- 2012-05-15
-
- Resource Type:
- Conference Proceeding
- Creator:
- Mannan, Mohammad, Barrera, David, Van Oorschot, Paul C., Lie, David, and Brown, Carson D.
- Abstract:
- Instead of allowing the recovery of original passwords, forgotten passwords are often reset using online mechanisms such as password verification questions (PVQ methods) and password reset links in email. These mechanisms are generally weak, exploitable, and force users to choose new passwords. Emailing the original password exposes the password to third parties. To address these issues, and to allow forgotten passwords to be securely restored, we present a scheme called Mercury. Its primary mode employs user-level public keys and a personal mobile device (PMD) such as a smart-phone, netbook, or tablet. A user generates a key pair on her PMD; the private key remains on the PMD and the public key is shared with different sites (e.g., during account setup). For password recovery, the site sends the (public key)-encrypted password to the user's pre-registered email address, or displays the encrypted password on a webpage, e.g., as a barcode. The encrypted password is then decrypted using the PMD and revealed to the user. A prototype implementation of Mercury is available as an Android application.
- Date Created:
- 2012-02-21
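The recovery flow of Mercury's primary mode, as described above, can be sketched with off-the-shelf public-key encryption. The snippet below uses RSA-OAEP from the Python cryptography package purely as an illustration; the key size, padding choices, and the fact that the site encrypts the stored password at recovery time (rather than storing a ciphertext prepared at setup) are assumptions, not details of the paper's prototype.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# 1. Account setup: a key pair is generated on the personal mobile device (PMD);
#    the private key never leaves the PMD, the public key is shared with the site.
pmd_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
site_stored_public_key = pmd_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 2. Password recovery: the site encrypts the original password under the user's
#    public key and sends/displays the ciphertext (e.g., by email or as a barcode).
ciphertext = site_stored_public_key.encrypt(b"original-password", oaep)

# 3. The PMD decrypts the ciphertext and reveals the password to the user.
recovered = pmd_private_key.decrypt(ciphertext, oaep)
print(recovered.decode())  # original-password
```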
-
- Resource Type:
- Conference Proceeding
- Creator:
- Seidel, Raimund, Dehne, Frank, and Klein, Rolf
- Abstract:
- Given a set S of s points in the plane, where do we place a new point, p, in order to maximize the area of its region in the Voronoi diagram of S and p? We study the case where the Voronoi neighbors of p are in convex position, and prove that there is at most one local maximum.
- Date Created:
- 2002-12-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Bose, Prosenjit and Van Renssen, André
- Abstract:
- We present tight upper and lower bounds on the spanning ratio of a large family of constrained θ-graphs. We show that constrained θ-graphs with 4k + 2 cones (k ≥ 1 and integer) have a tight spanning ratio of 1 + 2 sin(θ/2), where θ is 2π/(4k + 2). We also present improved upper bounds on the spanning ratio of the other families of constrained θ-graphs.
- Date Created:
- 2014-01-01
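As a quick check of the bound quoted above, instantiating the smallest member of the family (k = 1) already yields a familiar constant:

```latex
% Worked example of the tight bound 1 + 2\sin(\theta/2) for 4k + 2 cones (k = 1):
\[
  \theta = \frac{2\pi}{4k+2} = \frac{2\pi}{6} = \frac{\pi}{3},
  \qquad
  1 + 2\sin\!\Bigl(\frac{\theta}{2}\Bigr) = 1 + 2\sin\!\Bigl(\frac{\pi}{6}\Bigr) = 2.
\]
% The constrained \theta_6-graph therefore has a tight spanning ratio of 2,
% matching the known spanning ratio of the unconstrained \theta_6-graph.
```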
-
- Resource Type:
- Conference Proceeding
- Creator:
- Peleg, David, Krizanc, Danny, Kirousis, Lefteris M., Kranakis, Evangelos, Kaklamanis, Christos, and Bose, Prosenjit
- Abstract:
- In wireless communication, the signal of a typical broadcast station is transmitted from a broadcast center p and reaches objects at a distance, say, R from it. In addition there is a radius r, r < R, such that the signal originating from the center of the station is so strong that human habitation within distance r from the center p should be avoided. Thus every station determines a region which is an "annulus of permissible habitation". We consider the following station layout (SL) problem: cover a given (say, rectangular) planar region which includes a collection of orthogonal buildings with a minimum number of stations so that every point in the region is within the reach of a station, while at the same time no building is within the dangerous range of a station. We give algorithms for computing such station layouts in both the one- and two-dimensional cases.
- Date Created:
- 1999-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Krizanc, Danny, Kranakis, Evangelos, and Kirousis, Lefteris M.
- Abstract:
- Let φ be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number κ such that if the ratio of the number of clauses over the number of variables of φ strictly exceeds κ, then φ is almost certainly unsatisfiable. By a well known and more or less straightforward argument, it can be shown that κ ≤ 5.19.
- Date Created:
- 1996-01-01
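The "well known and more or less straightforward argument" mentioned above is presumably the standard first-moment calculation; a sketch of that calculation, under the usual random 3-SAT model where a fixed assignment satisfies a random clause with probability 7/8:

```latex
% First-moment bound for random 3-SAT with n variables and rn clauses:
\[
  \mathrm{E}\bigl[\#\text{satisfying assignments}\bigr]
    = 2^{n}\Bigl(\tfrac{7}{8}\Bigr)^{rn}
    = \Bigl(2\bigl(\tfrac{7}{8}\bigr)^{r}\Bigr)^{n} \longrightarrow 0
  \quad\text{whenever}\quad
  r > \frac{\ln 2}{\ln(8/7)} \approx 5.19.
\]
% By Markov's inequality, a random formula whose clause-to-variable ratio
% exceeds this value is almost certainly unsatisfiable, giving \kappa \le 5.19.
```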
-
- Resource Type:
- Conference Proceeding
- Creator:
- Maheshwari, Anil, Sack, Jörg-Rüdiger, Lanthier, Mark, and Aleksandrov, Lyudmil
- Date Created:
- 1998-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Morin, Pat and Bose, Prosenjit
- Abstract:
- We consider online routing strategies for routing between the vertices of embedded planar straight line graphs. Our results include (1) two deterministic memoryless routing strategies, one that works for all Delaunay triangulations and the other that works for all regular triangulations, (2) a randomized memoryless strategy that works for all triangulations, (3) an O(1) memory strategy that works for all convex subdivisions, (4) an O(1) memory strategy that approximates the shortest path in Delaunay triangulations, and (5) theoretical and experimental results on the competitiveness of these strategies.
- Date Created:
- 1999-01-01
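One memoryless strategy of the kind surveyed above can be stated in a few lines. The sketch below is generic compass-style routing (forward to the neighbour whose edge makes the smallest angle with the segment towards the destination); it illustrates what "deterministic and memoryless" means here, without claiming to reproduce any specific one of the paper's five results, and it is only guaranteed to reach the destination on graph classes such as Delaunay triangulations.

```python
import math

def compass_step(graph, cur, dest, points):
    """Memoryless step: among cur's neighbours, pick the one whose edge direction
    makes the smallest angle with the segment cur -> dest.  `graph` maps a vertex
    id to its neighbour list, `points` maps ids to (x, y) coordinates."""
    cx, cy = points[cur]
    dx, dy = points[dest]
    target = math.atan2(dy - cy, dx - cx)
    def angle_to(nbr):
        nx, ny = points[nbr]
        a = abs(math.atan2(ny - cy, nx - cx) - target)
        return min(a, 2 * math.pi - a)
    return min(graph[cur], key=angle_to)

def compass_route(graph, src, dest, points, max_steps=10_000):
    path = [src]
    while path[-1] != dest and len(path) <= max_steps:
        path.append(compass_step(graph, path[-1], dest, points))
    return path

if __name__ == "__main__":
    # A tiny triangulation: a unit square with one diagonal.
    points = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
    graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
    print(compass_route(graph, 0, 2, points))  # [0, 2]
```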
-
- Resource Type:
- Conference Proceeding
- Creator:
- Maheshwari, Anil and Zeh, Norbert
- Abstract:
- We present external memory algorithms for outerplanarity testing, embedding outerplanar graphs, breadth-first search (BFS) and depth-first search (DFS) in outerplanar graphs, and finding a 2/3-separator of size 2 for a given outerplanar graph. Our algorithms take O(sort(N)) I/Os and can easily be improved to take O(perm(N)) I/Os, as all these problems have linear-time solutions in internal memory. For BFS, DFS, and outerplanar embedding we show matching lower bounds.
- Date Created:
- 1999-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Prencipe, Giuseppe, Cáceres, Edson, Chan, Albert, and Dehne, Frank
- Abstract:
- In this paper, we present parallel algorithms for the coarse grained multicomputer (CGM) and the bulk synchronous parallel computer (BSP) for solving two well known graph problems: (1) determining whether a graph G is bipartite, and (2) determining whether a bipartite graph G is convex. Our algorithms require O(log p) and O(log² p) communication rounds, respectively, and linear sequential work per round on a CGM with p processors and N/p local memory per processor, N = |G|. The algorithms assume that N/p ≥ p^ε for some fixed ε > 0, which is true for all commercially available multiprocessors. Our results imply BSP algorithms with O(log p) and O(log² p) supersteps, respectively, O(g log(p) N/p) communication time, and O(log(p) N/p) local computation time. Our algorithm for determining whether a bipartite graph is convex includes a novel, coarse grained parallel, version of the PQ tree data structure introduced by Booth and Lueker. Hence, our algorithm also solves, with the same time complexity as indicated above, the problem of testing the consecutive-ones property for (0, 1) matrices as well as the chordal graph recognition problem. These, in turn, have numerous applications in graph theory, DNA sequence assembly, database theory, and other areas.
- Date Created:
- 2000-01-01
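For readers unfamiliar with the first of the two problems above, the standard sequential test is 2-colouring by BFS, shown below; this is only the linear-work building block, not the CGM/BSP algorithm, whose contribution is distributing this work over p processors within O(log p) communication rounds.

```python
from collections import deque

def is_bipartite(adj):
    """Standard sequential test: 2-colour each connected component by BFS and
    fail if some edge joins two vertices of the same colour."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False
    return True

if __name__ == "__main__":
    even_cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    odd_cycle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    print(is_bipartite(even_cycle), is_bipartite(odd_cycle))  # True False
```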
-
- Resource Type:
- Conference Proceeding
- Creator:
- White, Anthony and Salehi-Abari, Amirali
- Abstract:
- Autonomous agents require trust and reputation concepts in order to identify communities of agents with which to interact reliably in ways analogous to humans. Agent societies are invariably heterogeneous, with multiple decision making policies and actions governing their behaviour. Through the introduction of naive agents, this paper shows empirically that while learning agents can identify malicious agents through direct interaction, naive agents compromise utility through their inability to discern malicious agents. Moreover, the impact of the proportion of naive agents on the society is analyzed. The paper demonstrates that there is a need for witness interaction trust to detect naive agents in addition to the need for direct interaction trust to detect malicious agents. By proposing a set of policies, the paper demonstrates how learning agents can isolate themselves from naive and malicious agents.
- Date Created:
- 2010-07-20
-
- Resource Type:
- Conference Proceeding
- Creator:
- Lanthier, Mark, Velazquez, Elio, and Santoro, Nicola
- Abstract:
- This paper proposes a pro-active solution to the Frugal Feeding Problem (FFP) in Wireless Sensor Networks. The FFP attempts to find energy-efficient routes for a mobile service entity to rendezvous with each member of a team of mobile robots. Although the complexity of the FFP is similar to the Traveling Salesman Problem (TSP), we propose an efficient solution, completely distributed and localized for the case of a fixed rendezvous location (i.e., service facility with limited number of docking ports) and mobile capable entities (sensors). Our pro-active solution reduces the FFP to finding energy-efficient routes in a dynamic Compass Directed unit Graph (CDG). The proposed CDG incorporates ideas from forward progress routing and the directionality of compass routing in an energy-aware unit sub-graph. Navigating the CDG guarantees that each sensor will reach the rendezvous location in a finite number of steps. The ultimate goal of our solution is to achieve energy equilibrium (i.e., no further sensor losses due to energy starvation) by optimizing the use of the shared resource (recharge station). We also examine the impact of critical parameters such as transmission range, cost of mobility and sensor knowledge in the overall performance.
- Date Created:
- 2011-11-14
-
- Resource Type:
- Conference Proceeding
- Creator:
- Guo, Yuhong and Li, Xin
- Abstract:
- Multi-label classification is a central problem in many application domains. In this paper, we present a novel supervised bi-directional model that learns a low-dimensional mid-level representation for multi-label classification. Unlike traditional multi-label learning methods which identify intermediate representations from either the input space or the output space but not both, the mid-level representation in our model has two complementary parts that capture intrinsic information of the input data and the output labels respectively under the autoencoder principle while augmenting each other for the target output label prediction. The resulting optimization problem can be solved efficiently using an iterative procedure with alternating steps, while closed-form solutions exist for one major step. Our experiments conducted on a variety of multi-label data sets demonstrate the efficacy of the proposed bi-directional representation learning model for multi-label classification.
- Date Created:
- 2014-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Dujmović, Vida, De Carufel, Jean-Lou, Bose, Prosenjit, and Paradis, Frédérik
- Abstract:
- The well-separated pair decomposition (WSPD) of the complete Euclidean graph defined on points in ℝ² (Callahan and Kosaraju [JACM, 42 (1): 67-90, 1995]) is a technique for partitioning the edges of the complete graph based on length into a linear number of sets. Among the many different applications of WSPDs, Callahan and Kosaraju proved that the sparse subgraph that results by selecting an arbitrary edge from each set (called WSPD-spanner) is a 1 + 8/(s − 4)-spanner, where s > 4 is the separation ratio used for partitioning the edges. Although competitive local-routing strategies exist for various spanners such as Yao-graphs, Θ-graphs, and variants of Delaunay graphs, few local-routing strategies are known for any WSPD-spanner. Our main contribution is a local-routing algorithm with a near-optimal competitive routing ratio of 1 + O(1/s) on a WSPD-spanner. Specifically, we present a 2-local and a 1-local routing algorithm on a WSPD-spanner with competitive routing ratios of 1 + 6/(s−2) + 4/s and 1 + 6/(s−2) + 6/s + 4/(s² − 2s) + 8/s², respectively.
- Date Created:
- 2017-01-01
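To make the competitive ratios quoted above concrete, here is a numeric instantiation; the separation ratio s = 8 is an arbitrary illustrative choice:

```latex
% Instantiating the abstract's ratios at separation ratio s = 8:
\[
  \underbrace{1 + \frac{8}{s-4}}_{\text{spanner stretch}} = 3,
  \qquad
  \underbrace{1 + \frac{6}{s-2} + \frac{4}{s}}_{\text{2-local routing}} = 2.5,
  \qquad
  \underbrace{1 + \frac{6}{s-2} + \frac{6}{s} + \frac{4}{s^{2}-2s} + \frac{8}{s^{2}}}_{\text{1-local routing}} \approx 2.96.
\]
% All three are of the form 1 + O(1/s) and approach 1 as s grows.
```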
-
- Resource Type:
- Conference Proceeding
- Creator:
- Bertossi, Leopoldo
- Abstract:
- A correspondence between database tuples as causes for query answers in databases and tuple-based repairs of inconsistent databases with respect to denial constraints has already been established. In this work, answer-set programs that specify repairs of databases are used as a basis for solving computational and reasoning problems about causes. Here, causes are also introduced at the attribute level by appealing to a both null-based and attribute-based repair semantics. The corresponding repair programs are presented, and they are used as a basis for computation and reasoning about attribute-level causes.
- Date Created:
- 2018-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Kim, Sang-Woon and Oommen, B. John
- Abstract:
- The Maximum Likelihood (ML) and Bayesian estimation paradigms work within the model that the data, from which the parameters are to be estimated, is treated as a set rather than as a sequence. The pioneering paper that dealt with the field of sequence-based estimation [2] involved utilizing both the information in the observations and in their sequence of appearance. The results of [2] introduced the concepts of Sequence Based Estimation (SBE) for the Binomial distribution, where the authors derived the corresponding MLE results when the samples are taken two-at-a-time, and then extended these for the cases when they are processed three-at-a-time, four-at-a-time etc. These results were generalized for the multinomial “two-at-a-time” scenario in [3]. This paper (This paper is dedicated to the memory of Dr. Mohamed Kamel, who was a close friend of the first author.) now further generalizes the results found in [3] for the multinomial case and for subsequences of length 3. The strategy used in [3] (and also here) involves a novel phenomenon called “Occlusion” that has not been reported in the field of estimation. The phenomenon can be described as follows: By occluding (hiding or concealing) certain observations, we map the estimation problem onto a lower-dimensional space, i.e., onto a binomial space. Once these occluded SBEs have been computed, the overall Multinomial SBE (MSBE) can be obtained by combining these lower-dimensional estimates. In each case, we formally prove and experimentally demonstrate the convergence of the corresponding estimates.
- Date Created:
- 2016-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Maheshwari, Anil, Nandy, Ayan, Smid, Michiel, and Das, Sandip
- Abstract:
- Consider a line segment R consisting of n facilities. Each facility is a point on R and it needs to be assigned exactly one of the colors from a given palette of c colors. At an instant of time only the facilities of one particular color are 'active' and all other facilities are 'dormant'. For the set of facilities of a particular color, we compute the one-dimensional Voronoi diagram, and find the cell, i.e., a segment of maximum length. The users are assumed to be uniformly distributed over R and they travel to the nearest among the facilities of that particular color that is active. Our objective is to assign colors to the facilities in such a way that the length of the longest cell is minimized. We solve this optimization problem for various values of n and c. We propose an optimal coloring scheme for the number of facilities n being a multiple of c as well as for the general case where n is not a multiple of c. When n is a multiple of c, we compute an optimal scheme in Θ(n) time. For the general case, we propose a coloring scheme that returns the optimum in O(n² log n) time.
- Date Created:
- 2014-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Oommen, B. John and Kim, Sang-Woon
- Abstract:
- This paper deals with the relatively new field of sequence-based estimation which involves utilizing both the information in the observations and in their sequence of appearance. Our intention is to obtain Maximum Likelihood estimates by “extracting” the information contained in the observations when perceived as a sequence rather than as a set. The results of [15] introduced the concepts of Sequence Based Estimation (SBE) for the Binomial distribution. This current paper generalizes these results for the multinomial “two-at-a-time” scenario. We invoke a novel phenomenon called “Occlusion” that can be described as follows: by “concealing” certain observations, we map the estimation problem onto a lower-dimensional binomial space. Once these occluded SBEs have been computed, we demonstrate how the overall Multinomial SBE (MSBE) can be obtained by mapping several lower-dimensional estimates onto the original higher-dimensional space. We formally prove and experimentally demonstrate the convergence of the corresponding estimates.
- Date Created:
- 2016-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Labiche, Yvan and Barros, Márcio
- Date Created:
- 2015-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Polk, Spencer and Oommen, B. John
- Abstract:
- This paper pioneers the avenue of enhancing a well-known paradigm in game playing, namely the use of History-based heuristics, with a totally unrelated area of computer science, the field of Adaptive Data Structures (ADSs). It is a well-known fact that highly-regarded game playing strategies, such as alpha-beta search, benefit strongly from proper move ordering, and from this perspective, the History heuristic is, probably, one of the most acclaimed techniques used to achieve AI-based game playing. Recently, the authors of this present paper have shown that techniques derived from the field of ADSs, which are concerned with query optimization in a data structure, can be applied to move ordering in multi-player games. This was accomplished by ranking opponent threat levels. The work presented in this paper seeks to extend the utility of ADS-based techniques to two-player and multi-player games, through the development of a new move ordering strategy that incorporates the historical advantages of the moves. The resultant technique, the History-ADS heuristic, has been found to produce substantial (i.e., even up to 70%) savings in a variety of two-player and multi-player games, at varying ply depths, and at both initial and midgame board states. As far as we know, results of this nature have not been reported in the literature before.
- Date Created:
- 2015-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Oommen, B. John and Astudillo, César A.
- Abstract:
- We present a method that employs a tree-based Neural Network (NN) for performing classification. The novel mechanism, apart from incorporating the information provided by unlabeled and labeled instances, re-arranges the nodes of the tree as per the laws of Adaptive Data Structures (ADSs). Particularly, we investigate the Pattern Recognition (PR) capabilities of the Tree-Based Topology-Oriented SOM (TTOSOM) when Conditional Rotations (CONROT) [8] are incorporated into the learning scheme. The learning methodology inherits all the properties of the TTOSOM-based classifier designed in [4]. However, we now augment it with the property that frequently accessed nodes are moved closer to the root of the tree. Our experimental results show that on average, the classification capabilities of our proposed strategy are reasonably comparable to those obtained by some of the state-of-the-art classification schemes that only use labeled instances during the training phase. The experiments also show that improved levels of accuracy can be obtained by imposing trees with a larger number of nodes.
- Date Created:
- 2015-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Tavasoli, Hanane, Oommen, B. John, and Yazidi, Anis
- Abstract:
- In this paper, we propose a novel online classifier for complex data streams which are generated from non-stationary stochastic properties. Instead of using a single training model and counters to keep important data statistics, the introduced online classifier scheme provides a real-time self-adjusting learning model. The learning model utilizes the multiplication-based update algorithm of the Stochastic Learning Weak Estimator (SLWE) at each time instant as a new labeled instance arrives. In this way, the data statistics are updated every time a new element is inserted, without requiring that we rebuild the model when changes occur in the data distributions. Finally, and most importantly, the model operates with the understanding that the correct classes of previously-classified patterns become available at a later juncture subsequent to some time instances, thus requiring us to update the training set and the training model. The results obtained from a rigorous empirical analysis on multinomial distributions are remarkable. Indeed, they demonstrate the applicability of our method on synthetic datasets, and prove the advantages of the introduced scheme.
- Date Created:
- 2016-01-01
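The multiplication-based update mentioned above can be sketched for the simplest (binomial) case. The rule below follows the usual formulation of the Stochastic Learning Weak Estimator, where the estimate is shrunk by a factor λ at every step and incremented only when the symbol of interest is observed; the exact form, the parameter value, and the function name are illustrative assumptions rather than a restatement of the paper's multinomial scheme.

```python
def slwe_binomial(observations, lam=0.95, p0=0.5):
    """Weak estimator of P(x = 1) on a possibly non-stationary binary stream.

    At each step the estimate is shrunk multiplicatively and pushed towards 1
    only when a '1' is observed, so it keeps tracking the current distribution
    instead of converging like a running average."""
    p = p0
    trace = []
    for x in observations:
        p = lam * p + (1 - lam) * (1 if x == 1 else 0)
        trace.append(p)
    return trace

if __name__ == "__main__":
    import random
    random.seed(0)
    # The source switches from P(1) = 0.8 to P(1) = 0.2 halfway through.
    stream = [1 if random.random() < 0.8 else 0 for _ in range(500)] + \
             [1 if random.random() < 0.2 else 0 for _ in range(500)]
    est = slwe_binomial(stream)
    print(round(est[499], 2), round(est[-1], 2))  # near 0.8, then near 0.2
```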
-
- Resource Type:
- Conference Proceeding
- Creator:
- Yazidi, Anis, Oommen, B. John, and Hammer, Hugo Lewi
- Abstract:
- The problem of clustering, or unsupervised classification, has been solved by a myriad of techniques, all of which depend, either directly or implicitly, on the Bayesian principle of optimal classification. To be more specific, within a Bayesian paradigm, if one is to compare the testing sample with only a single point in the feature space from each class, the optimal Bayesian strategy would be to achieve this based on the distance from the corresponding means or central points in the respective distributions. When this principle is applied in clustering, one would assign an unassigned sample into the cluster whose mean is the closest, and this can be done in either a bottom-up or a top-down manner. This paper pioneers a clustering achieved in an “Anti-Bayesian” manner, and is based on the breakthrough classification paradigm pioneered by Oommen et al. The latter relies on a radically different approach for classifying data points based on the non-central quantiles of the distributions. Surprisingly and counter-intuitively, this turns out to work equally or close-to-equally well to an optimal supervised Bayesian scheme, which thus begs the natural extension to the unexplored arena of clustering. Our algorithm can be seen as the Anti-Bayesian counterpart of the well-known k-means algorithm (the fundamental Anti-Bayesian paradigm need not be applied only to the k-means principle; rather, we hypothesize that it can be adapted to any of the scores of techniques that are indirectly based on the Bayesian paradigm), where we assign points to clusters using quantiles rather than the clusters’ centroids. Extensive experimentation (this paper contains the prima facie results of experiments done on one- and two-dimensional data; the extensions to multi-dimensional data are not included in the interest of space, and would use the corresponding multi-dimensional Anti-Naïve-Bayes classification rules given in [1]) demonstrates that our Anti-Bayesian clustering converges fast and with precision results competitive to a k-means clustering.
- Date Created:
- 2015-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Oommen, B. John and Polk, Spencer
- Abstract:
- The field of game playing is a particularly well-studied area within the context of AI, leading to the development of powerful techniques, such as the alpha-beta search, capable of achieving competitive game play against an intelligent opponent. It is well known that tree pruning strategies, such as alpha-beta, benefit strongly from proper move ordering, that is, searching the best element first. Inspired by the formerly unrelated field of Adaptive Data Structures (ADSs), we have previously introduced the History-ADS technique, which employs an adaptive list to achieve effective and dynamic move ordering, in a domain independent fashion, and found that it performs well in a wide range of cases. However, previous work did not compare the performance of the History-ADS heuristic to any established move ordering strategy. In an attempt to address this problem, we present here a comparison to two well-known, acclaimed strategies, which operate on a similar philosophy to the History-ADS, the History Heuristic, and the Killer Moves technique. We find that, in a wide range of two-player and multi-player games, at various points in the game’s progression, the History-ADS performs at least as well as these strategies, and, in fact, outperforms them in the majority of cases.
- Date Created:
- 2016-01-01
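The adaptive-list idea behind the History-ADS heuristic, as described in the two entries above, can be sketched independently of any particular game engine. In the sketch below, moves that caused a pruning cutoff are moved to the front of an adaptive list and tried first at later nodes; the game interface (moves, apply, evaluate, is_terminal) is hypothetical, and the code illustrates the general mechanism rather than the authors' implementation.

```python
class MoveToFrontList:
    """Adaptive list: a move that causes a cutoff is moved to the front, so
    historically strong moves are tried first at subsequent nodes."""
    def __init__(self):
        self._items = []

    def touch(self, move):
        if move in self._items:
            self._items.remove(move)
        self._items.insert(0, move)

    def order(self, moves):
        rank = {m: i for i, m in enumerate(self._items)}
        return sorted(moves, key=lambda m: rank.get(m, len(self._items)))

def alphabeta(state, depth, alpha, beta, maximizing, game, adaptive):
    """Plain alpha-beta with ADS-based move ordering.  `game` is a hypothetical
    interface providing moves(), apply(), evaluate() and is_terminal()."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    best = float("-inf") if maximizing else float("inf")
    for move in adaptive.order(game.moves(state)):
        value = alphabeta(game.apply(state, move), depth - 1,
                          alpha, beta, not maximizing, game, adaptive)
        if maximizing:
            best, alpha = max(best, value), max(alpha, value)
        else:
            best, beta = min(best, value), min(beta, value)
        if beta <= alpha:            # cutoff: reward this move in the adaptive list
            adaptive.touch(move)
            break
    return best
```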
-
- Resource Type:
- Conference Proceeding
- Creator:
- Guo, Yuhong and Li, Xin
- Abstract:
- Semantic scene classification is a challenging problem in computer vision. In this paper, we present a novel multi-level active learning approach to reduce the human annotation effort for training robust scene classification models. Different from most existing active learning methods that can only query labels for selected instances at the target categorization level, i.e., the scene class level, our approach establishes a semantic framework that predicts scene labels based on a latent object-based semantic representation of images, and is capable of querying labels at two different levels, the target scene class level (abstractive high level) and the latent object class level (semantic middle level). Specifically, we develop an adaptive active learning strategy to perform multi-level label query, which maintains the default label query at the target scene class level, but switches to the latent object class level whenever an "unexpected" target class label is returned by the labeler. We conduct experiments on two standard scene classification datasets to investigate the efficacy of the proposed approach. Our empirical results show the proposed adaptive multi-level active learning approach can outperform both baseline active learning methods and a state-of-the-art multi-level active learning method.
- Date Created:
- 2014-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Peng, Mengfei, Shi, Wei, Croft, William Lee, and Corriveau, Jean-Pierre
- Abstract:
- New threats to networks are constantly arising. This justifies protecting network assets and mitigating the risk associated with attacks. In a distributed environment, researchers aim, in particular, at eliminating faulty network entities. More specifically, much research has been conducted on locating a single static black hole, which is defined as a network site whose existence is known a priori and that disposes of any incoming data without leaving any trace of this occurrence. However, the prevalence of faulty nodes requires an algorithm able to (a) identify faulty nodes that can be repaired without human intervention and (b) locate black holes, which are taken to be faulty nodes whose repair does require human intervention. In this paper, we consider a specific attack model that involves multiple faulty nodes that can be repaired by mobile software agents, as well as a virus v that can infect a previously repaired faulty node and turn it into a black hole. We refer to the task of repairing multiple faulty nodes and pointing out the location of the black hole as the Faulty Node Repair and Dynamically Spawned Black Hole Search. We first analyze the attack model we put forth. We then explain (a) how to identify whether a node is either (1) a normal node or (2) a repairable faulty node or (3) the black hole that has been infected by virus v during the search/repair process and, (b) how to perform the correct relevant actions. These two steps constitute a complex task, which, we explain, significantly differs from the traditional Black Hole Search. We continue by proposing an algorithm to solve this problem in an asynchronous ring network with only one whiteboard (which resides in a node called the homebase). We prove the correctness of our solution and analyze its complexity by both theoretical analysis and experimental evaluation. We conclude that, using our proposed algorithm, b + 4 agents can repair all faulty nodes and locate the black hole infected by a virus v within finite time. Our algorithm works even when the number of faulty nodes b is unknown a priori.
- Date Created:
- 2017-01-01
-
- Resource Type:
- Article
- Creator:
- Sack, Jörg-Rüdiger, Maheshwari, Anil, and Lingas, A.
- Abstract:
- We provide optimal parallel solutions to several link-distance problems set in trapezoided rectilinear polygons. All our main parallel algorithms are deterministic and designed to run on the exclusive read exclusive write parallel random access machine (EREW PRAM). Let P be a trapezoided rectilinear simple polygon with n vertices. In O(log n) time using O(n/log n) processors we can optimally compute: 1. Minimum rectilinear link paths, or shortest paths in the L1 metric from any point in P to all vertices of P. 2. Minimum rectilinear link paths from any segment inside P to all vertices of P. 3. The rectilinear window (histogram) partition of P. 4. Both covering radii and vertex intervals for any diagonal of P. 5. A data structure to support rectilinear link-distance queries between any two points in P (queries can be answered optimally in O(log n) time by a uniprocessor). Our solution to 5 is based on a new linear-time sequential algorithm for this problem which is also provided here. This improves on the previously best-known sequential algorithm for this problem, which used O(n log n) time and space [5]. We develop techniques for solving link-distance problems in parallel which are expected to find applications in the design of other parallel computational geometry algorithms. We employ these parallel techniques, for example, to compute (on a CREW PRAM) optimally the link diameter, the link center, and the central diagonal of a rectilinear polygon.
- Date Created:
- 1995-09-01
-
- Resource Type:
- Article
- Creator:
- Bose, Prosenjit, Overmars, M., Wilfong, G., Toussaint, G., Garcia-Lopez, J., Zhu, B., Asberg, B., and Blanco, G.
- Abstract:
- We study the feasibility of design for a layer-deposition manufacturing process called stereolithography which works by controlling a vertical laser beam which when targeted on a photocurable liquid causes the liquid to harden. In order to understand the power as well as the limitations of this manufacturing process better, we define a mathematical model of stereolithography (referred to as vertical stereolithography) and analyze the class of objects that can be constructed under the assumptions of the model. Given an object (modeled as a polygon or a polyhedron), we give algorithms that decide in O(n) time (where n is the number of vertices in the polygon or polyhedron) whether or not the object can be constructed by vertical stereolithography. If the answer is in the affirmative, the algorithm reports a description of all the orientations in which the object can be made. We also show that the objects built with vertical stereolithography are precisely those that can be made with a 3-axis NC machine. We then define a more flexible model that more accurately reflects the actual capabilities of stereolithography (referred to as variable-angle stereolithography) and again study the class of feasible objects for this model. We give an O(n)-time algorithm for polygons and O(n log n)- as well as O(n)-time algorithms for polyhedra. We show that objects formed with variable-angle stereolithography can also be constructed using another manufacturing process known as gravity casting. Furthermore, we show that the polyhedral objects formed by vertical stereolithography are closely related to polyhedral terrains which are important structures in geographic information systems (GIS) and computational geometry. In fact, an object built with variable-angle stereolithography resembles a terrain with overhangs, thus initiating the study of more realistic terrains than the standard ones considered in geographic information systems. Finally, we relate our results to the area of grasping in robotics by showing that the polygonal and polyhedral objects that can be built by vertical stereolithography can be clamped by parallel jaw grippers with any positive-sized gripper.
- Date Created:
- 1997-01-01
-
- Resource Type:
- Article
- Creator:
- Yan, Donghang, Wang, Zhiyuan, Yu, Hongan, Wu, Xianguo, and Zhang, Jidong
- Abstract:
- A near infrared (NIR) electrochromic attenuator based on a dinuclear ruthenium complex and polycrystalline tungsten oxide was fabricated and characterized. The results show that the use of the NIR-absorbing ruthenium complex as a counter electrode material can improve the device performance. By replacing the visible electrochromic ferrocene with the NIR-absorbing ruthenium complex, the optical attenuation at 1550 nm was enhanced from 19.1 to 30.0 dB and color efficiency also increased from 29.2 to 121.2 cm2/C.
- Date Created:
- 2005-12-01
-
- Resource Type:
- Article
- Creator:
- Wiener, Michael J., Van Oorschot, Paul C., and Diffie, Whitfield
- Abstract:
- We discuss two-party mutual authentication protocols providing authenticated key exchange, focusing on those using asymmetric techniques. A simple, efficient protocol referred to as the station-to-station (STS) protocol is introduced, examined in detail, and considered in relation to existing protocols. The definition of a secure protocol is considered, and desirable characteristics of secure protocols are discussed.
- Date Created:
- 1992-06-01
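The message flow of the station-to-station protocol discussed in the article above can be sketched as an ephemeral Diffie-Hellman exchange in which each party signs the pair of exchanged exponentials and, in the real protocol, encrypts that signature under the freshly derived key. The toy group, the HMAC used as a stand-in for real signatures, and the omission of the encryption step are simplifying assumptions made purely to show the flow; this is not a secure or faithful implementation.

```python
import hashlib, hmac, secrets

# Toy public Diffie-Hellman parameters (far too small for real use).
P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, illustrative only
G = 5

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real digital signature under the party's long-term key.
    return hmac.new(key, data, hashlib.sha256).digest()

def kdf(shared: int) -> bytes:
    return hashlib.sha256(str(shared).encode()).digest()

def sts_demo():
    alice_lt, bob_lt = b"alice-long-term", b"bob-long-term"   # long-term "keys"
    x, y = secrets.randbelow(P - 2) + 1, secrets.randbelow(P - 2) + 1
    gx, gy = pow(G, x, P), pow(G, y, P)

    # Message 1 (A -> B): g^x
    # Message 2 (B -> A): g^y, sig_B(g^y, g^x)   (encrypted under K in the real protocol)
    k_bob = kdf(pow(gx, y, P))
    sig_b = sign(bob_lt, f"{gy},{gx}".encode())

    # Alice derives the same key and checks Bob's signature before replying.
    k_alice = kdf(pow(gy, x, P))
    assert k_alice == k_bob
    assert hmac.compare_digest(sig_b, sign(bob_lt, f"{gy},{gx}".encode()))

    # Message 3 (A -> B): sig_A(g^x, g^y), giving mutual authentication.
    sig_a = sign(alice_lt, f"{gx},{gy}".encode())
    assert hmac.compare_digest(sig_a, sign(alice_lt, f"{gx},{gy}".encode()))
    return k_alice

if __name__ == "__main__":
    print(sts_demo().hex())
```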
-
- Resource Type:
- Article
- Creator:
- Wiener, Michael J. and Van Oorschot, Paul C.
- Abstract:
- A simple new technique of parallelizing methods for solving search problems which seek collisions in pseudorandom walks is presented. This technique can be adapted to a wide range of cryptanalytic problems which can be reduced to finding collisions. General constructions are given showing how to adapt the technique to finding discrete logarithms in cyclic groups, finding meaningful collisions in hash functions, and performing meet-in-the-middle attacks such as a known-plaintext attack on double encryption. The new technique greatly extends the reach of practical attacks, providing the most cost-effective means known to date for defeating: the small subgroup used in certain schemes based on discrete logarithms such as Schnorr, DSA, and elliptic curve cryptosystems; hash functions such as MD5, RIPEMD, SHA-1, MDC-2, and MDC-4; and double encryption and three-key triple encryption. The practical significance of the technique is illustrated by giving the design for three $10 million custom machines which could be built with current technology: one finds elliptic curve logarithms in GF(2^155), thereby defeating a proposed elliptic curve cryptosystem in expected time 32 days, the second finds MD5 collisions in expected time 21 days, and the last recovers a double-DES key from two known plaintexts in expected time 4 years, which is four orders of magnitude faster than the conventional meet-in-the-middle attack on double-DES. Based on this attack, double-DES offers only 17 more bits of security than single-DES.
- Date Created:
- 1999-01-01
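The parallelization technique summarized above has each processor run an independent pseudorandom walk and report only "distinguished points" to a central table; two walks arriving at the same distinguished point reveal a collision along their trails. The sketch below shows that idea sequentially against a deliberately truncated hash so a collision appears quickly; the truncation length, the distinguishing rule, and the helper names are illustrative assumptions, and the final retracing step that pinpoints the exact colliding inputs is left as a comment.

```python
import hashlib, os

BITS = 24                      # tiny output size so a collision appears quickly

def f(x: bytes) -> bytes:
    """Truncated SHA-256 acting as the random-walk step function."""
    return hashlib.sha256(x).digest()[: BITS // 8]

def is_distinguished(x: bytes) -> bool:
    return x[0] == 0           # "distinguished" = leading zero byte (prob. 1/256)

def walk(seed: bytes, max_steps: int = 1 << 20):
    """One processor's pseudorandom walk: iterate f until a distinguished point."""
    x, steps = seed, 0
    while not is_distinguished(x) and steps < max_steps:
        x, steps = f(x), steps + 1
    return x, seed, steps

def find_collision():
    seen = {}                  # distinguished point -> (seed, steps): the central table
    while True:
        dp, seed, steps = walk(os.urandom(BITS // 8))
        if dp in seen and seen[dp][0] != seed:
            # Two walks merged; retracing both from their seeds would expose
            # the exact pair of colliding inputs.
            return (seed, steps), seen[dp]
        seen[dp] = (seed, steps)

if __name__ == "__main__":
    (s1, n1), (s2, n2) = find_collision()
    print("walks from", s1.hex(), "and", s2.hex(), "reach the same distinguished point")
```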
-
- Resource Type:
- Article
- Creator:
- Morin, Pat, Hurtado, Ferran, Bose, Prosenjit, and Carmi, Paz
- Abstract:
- We prove that, for every simple polygon P having k ≥ 1 reflex vertices, there exists a point q ∈ P such that every half-polygon that contains q contains nearly 1/(2(k + 1)) times the area of P. We also give a family of examples showing that this result is the best possible.
- Date Created:
- 2011-04-01
-
- Resource Type:
- Article
- Creator:
- Hayes, M. John, Langlois, Robert, and Weiss, Abraham
- Abstract:
- Conventional training simulators commonly use a hexapod configuration to provide motion cues. While widely used, studies have shown that hexapods are incapable of producing the range of motion required to achieve high fidelity simulation required in many applications. A novel alternative is the Atlas motion platform. This paper presents a new generalized kinematic model of the platform which can be applied to any spherical platform actuated by three omnidirectional wheels. In addition, conditions for slip-free and singularity-free motions are identified. Two illustrative examples are given for different omnidirectional wheel configurations.
- Date Created:
- 2011-02-01
-
- Resource Type:
- Article
- Creator:
- Adler, Andy, Loyka, Sergey, and Youmaran, Richard
- Date Created:
- 2009-01-01
-
- Resource Type:
- Article
- Creator:
- Lever, Rosemary, Ouellette, Gene, Pagan, Stephanie, and Sénéchal, Monique
- Abstract:
- The goal of the present intervention research was to test whether guided invented spelling would facilitate entry into reading for at-risk kindergarten children. The 56 participating children had poor phoneme awareness, and as such, were at risk of having difficulty acquiring reading skills. Children were randomly assigned to one of three training conditions: invented spelling, phoneme segmentation, or storybook reading. All children participated in 16 small group sessions over eight weeks. In addition, children in the three training conditions received letter-knowledge training and worked on the same 40 stimulus words that were created from an array of 14 letters. The findings were clear: on pretest, there were no differences between the three conditions on measures of early literacy and vocabulary, but, after training, invented spelling children learned to read more words than did the other children. As expected, the phoneme-segmentation and invented-spelling children were better on phoneme awareness than were the storybook-reading children. Most interesting, however, both the invented spelling and the phoneme-segmentation children performed similarly on phoneme awareness suggesting that the differential effect on learning to read was not due to phoneme awareness per se. As such, the findings support the view that invented spelling is an exploratory process that involves the integration of phoneme and orthographic representations. With guidance and developmentally appropriate feedback, invented spelling provides a milieu for children to explore the relation between oral language and written symbols that can facilitate their entry in reading.
- Date Created:
- 2012-04-01
-
- Resource Type:
- Article
- Creator:
- Shao, Li-Yang, Albert, Jacques, Coyle, Jason P., and Barry, Seán T.
- Abstract:
- The conformal coating of a 50 nm-thick layer of copper nanoparticles deposited with pulse chemical vapor deposition of a copper (I) guanidinate precursor on the cladding of a single mode optical fiber was monitored by using a tilted fiber Bragg grating (TFBG) photo-inscribed in the fiber core. The pulse-per-pulse growth of the copper nanoparticles is readily obtained from the position and amplitudes of resonances in the reflection spectrum of the grating. In particular, we confirm that the real part of the effective complex permittivity of the deposited nano-structured copper layer is an order of magnitude larger than that of a bulk copper film at an optical wavelength of 1550 nm. We further observe a transition in the growth behavior from granular to continuous film (as determined from the complex material permittivity) after approximately 20 pulses (corresponding to an effective thickness of 25 nm). Finally, despite the remaining granularity of the film, the final copper-coated optical fiber is shown to support plasmon waves suitable for sensing, even after the growth of a thin oxide layer on the copper surface.
- Date Created:
- 2011-06-01
-
- Resource Type:
- Article
- Creator:
- Albert, Jacques, Dakka, Milad A., Shevchenko, Yanina, and Chen, Chengkun
- Abstract:
- We show that the tilted-grating-assisted excitation of surface plasmon polaritons on gold coated single-mode optical fibers depends strongly on the state of polarization of the core-guided light, even in fibers with cylindrical symmetry. Rotating the linear polarization of the guided light by 90° relative to the grating tilt plane is sufficient to turn the plasmon resonances on and off with more than 17 dB of extinction ratio. By monitoring the amplitude changes of selected individual cladding mode resonances we identify what we believe to be a new refractive index measurement method that is shown to be accurate to better than 5 × 10⁻⁵.
- Date Created:
- 2010-03-01
-
- Resource Type:
- Article
- Creator:
- LeFevre, Jo-Anne and Sénéchal, Monique
- Abstract:
- One hundred and ten English-speaking children schooled in French were followed from kindergarten to Grade 2 (Mage: T1 = 5;6, T2 = 6;4, T3 = 6;11, T4 = 7;11). The findings provided strong support for the Home Literacy Model (Sénéchal & LeFevre, 2002) because in this sample the home language was independent of the language of instruction. The informal literacy environment at home predicted growth in English receptive vocabulary from kindergarten to Grade 1, whereas parent reports of the formal literacy environment in kindergarten predicted growth in children's English early literacy between kindergarten and Grade 1 and growth in English word reading during Grade 1. Furthermore, 76% of parents adjusted their formal literacy practices according to the reading performance of their child, in support of the presence of a responsive home literacy curriculum among middle-class parents.
- Date Created:
- 2014-01-01
-
- Resource Type:
- Article
- Creator:
- Driedger, Michael and Wolfart, Johannes
- Abstract:
- In this special issue of Nova Religio four historians of medieval and early modern Christianities offer perspectives on basic conceptual frameworks widely employed in new religions studies, including modernization and secularization, radicalism/violent radicalization, and diversity/diversification. Together with a response essay by J. Gordon Melton, these articles suggest strong possibilities for renewed and ongoing conversation between scholars of "old" and "new" religions. Unlike some early discussions, ours is not aimed simply at questioning the distinction between old and new religions itself. Rather, we think such conversation between scholarly fields holds the prospect of productive scholarly surprise and perspectival shifts, especially via the disciplinary practice of historiographical criticism.
- Date Created:
- 2018-05-01
-
- Resource Type:
- Article
- Creator:
- Fast, Stewart, Saner, Marc, and Brklacich, Michael
- Abstract:
- The new renewable fuels standard (RFS 2) aims to distinguish corn-ethanol that achieves a 20% reduction in greenhouse gas (GHG) emissions compared with gasoline. Field data from Kim et al. (2009) and from our own study suggest that geographic variability in the GHG emissions arising from corn production casts considerable doubt on the approach used in the RFS 2 to measure compliance with the 20% target. If regulators wish to require compliance of fuels with specific GHG emission reduction thresholds, then data from growing biomass should be disaggregated to a level that captures the level of variability in grain corn production and the application of life cycle assessment to biofuels should be modified to capture this variability.
- Date Created:
- 2012-05-01
-
- Resource Type:
- Article
- Creator:
- Quastel, Noah and Mendez, Pablo
- Abstract:
- This article draws on Margaret Radin's theorization of 'contested commodities' to explore the process whereby informal housing becomes formalized while also being shaped by legal regulation. In seeking to move once-informal housing into the domain of official legality, cities can seldom rely on a simple legal framework of private-law principles of property and contract. Instead, they face complex trade-offs between providing basic needs and affordability and meeting public-law norms around living standards, traditional neighbourhood feel and the environment. This article highlights these issues through an examination of the uneven process of legal formalization of basement apartments in Vancouver, Canada. We chose a lengthy period-from 1928 to 2009-to explore how basement apartments became a vital source of housing often at odds with city planning that has long favoured a low-density residential built form. We suggest that Radin's theoretical account makes it possible to link legalization and official market construction with two questions: whether to permit commodification and how to permit commodification. Real-world commodification processes-including legal sanction-reflect hybridization, pragmatic decision making and regulatory compromise. The resolution of questions concerning how to legalize commodification are also intertwined with processes of market expansion.
- Date Created:
- 2015-11-01
-
- Resource Type:
- Article
- Creator:
- Khalaf, Lynda A., Kichian, Maral, Bernard, Jean-Thomas, and Dufour, Jean-Marie
- Abstract:
- We test for the presence of time-varying parameters (TVP) in the long-run dynamics of energy prices for oil, natural gas and coal, within a standard class of mean-reverting models. We also propose residual-based diagnostic tests and examine out-of-sample forecasts. In-sample LR tests support the TVP model for coal and gas but not for oil, though companion diagnostics suggest that the model is too restrictive to conclusively fit the data. Out-of-sample analysis suggests a random-walk specification for oil price, and TVP models for both real-time forecasting in the case of gas and long-run forecasting in the case of coal.
- Date Created:
- 2012-06-01
-
- Resource Type:
- Article
- Creator:
- Mendez, Pablo
- Abstract:
- This paper asks whether age at arrival matters when it comes to home-ownership attainment among immigrants, paying particular attention to householders' self-identification as a visible minority. Combining methods that were developed separately in the immigrant housing and the immigrant offspring literatures, this study shows the importance of recognising generational groups based on age at arrival, while also accounting for the interacting effects of current age (or birth cohorts) and arrival cohorts. The paper advocates a (quasi-)longitudinal approach to studying home-ownership attainment among immigrants and their foreign-born offspring. Analysis of data from the Canadian Census reveals that foreign-born householders who immigrated as adults in the 1970s and the 1980s are more likely to be home-owners than their counterparts who immigrated at a younger age when they self-identify as South Asian or White, but not always so when they self-identify as Chinese or as ‘other visible minority’. The same bifurcated pattern recurs between householders who immigrated at secondary-school age and those who were younger upon arrival. Age at arrival therefore emerges as a variable of significance to help explain differences in immigrant housing outcomes, and should be taken into account in future studies of immigrant home-ownership attainment. Copyright © 2009 John Wiley & Sons, Ltd.
- Date Created:
- 2009-01-01
-
- Resource Type:
- Article
- Creator:
- Urrutia, J., Opatrny, J., Chávez, E., Dobrev, S., Stacho, L., and Kranakis, Evangelos
- Abstract:
- We address the problem of discovering routes in strongly connected planar geometric networks with directed links. Motivated by the necessity for establishing communication in wireless ad hoc networks in which the only information available to a vertex is its immediate neighborhood, we are considering routing algorithms that use the neighborhood information of a vertex for routing with constant memory only. We solve the problem for three types of directed planar geometric networks: Eulerian (in which every vertex has the same number of incoming and outgoing edges), Outerplanar (in which a single face contains all vertices of the network), and Strongly Face Connected, a new class of geometric networks that we define in the article, consisting of several faces, each face being a strongly connected outerplanar graph.
- Date Created:
- 2006-08-01
-
- Resource Type:
- Report
- Creator:
- Labiche, Yvan and Shafique, Muhammad
- Abstract:
- Model-based testing (MBT) is about testing a software system by using a model of its behaviour. To benefit fully from MBT, automation support is required. This paper presents a systematic review of prominent MBT tool support where we focus on tools that rely on state-based models. The systematic review protocol precisely describes the scope of the search and the steps involved in tool selection. Precisely defined criteria are used to compare selected tools and comprise support for test coverage criteria, level of automation for various testing activities, and support for the construction of test scaffolding. The results of this review should be of interest to a wide range of stakeholders: software companies interested in selecting the most appropriate MBT tool for their needs; organizations willing to invest into creating MBT tool support; researchers interested in setting research directions.
- Date Created:
- 2010-05-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Whitehead, Anthony D.
- Abstract:
- There have been a number of steganography embedding techniques proposed over the past few years. In turn, there has been great interest in steganalysis techniques as the embedding techniques improve. Specifically, universal steganalysis techniques have become more attractive since they work independently of the embedding technique. In this work, we examine the effectiveness of a basic universal technique that relies on some knowledge about the cover media, but not the embedding technique. We consider images as a cover media, and examine how a single technique that we call steganographic sanitization performs on 26 different steganography programs that are publicly available on the Internet. Our experiments are completed using a number of secret messages and a variety of different levels of sanitization. However, since our intent is to remove covert communication, and not authentication information, we examine how well the sanitization process preserves authentication information such as watermarks and digital fingerprints.
- Date Created:
- 2005-12-01
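The abstract above does not spell out the sanitization operations, so the snippet below should be read only as one plausible example of the general idea, chosen by us for illustration: re-encode the cover image lossily so that fragile LSB- or DCT-based payloads are unlikely to survive while perceptible content, including visible watermarks, is essentially preserved. The use of JPEG recompression with Pillow and the quality setting are assumptions, not the paper's method.

```python
from PIL import Image

def sanitize_image(in_path: str, out_path: str, quality: int = 85) -> None:
    """Re-encode an image lossily.  Payloads hidden in least-significant bits or
    fine DCT structure are unlikely to survive the recompression, whereas
    perceptible content (including visible watermarks) is essentially preserved."""
    img = Image.open(in_path).convert("RGB")
    img.save(out_path, format="JPEG", quality=quality)

if __name__ == "__main__":
    sanitize_image("suspect_cover.png", "sanitized.jpg")
```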
-
- Resource Type:
- Article
- Creator:
- Kovalio, Jacob
- Date Created:
- 2001-07-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Cross, Emma and Merriam, Helena
- Date Created:
- 2017-05-12
-
- Resource Type:
- Article
- Creator:
- Kovalio, Jacob
- Date Created:
- 2010-10-04
-
- Resource Type:
- Report
- Creator:
- Labiche, Yvan, Torre, Damiano, and Genero, Marcela
- Abstract:
- Context: The Unified Modeling Language (UML), with its 14 different diagram types, is the de-facto standard modeling language for object-oriented modeling and documentation. Since the various UML diagrams describe different aspects of one, and only one, software under development, they are not independent but strongly depend on each other in many ways. In other words, diagrams must remain consistent. Dependencies between diagrams can become so intricate that it is sometimes even possible to synthesize one diagram on the basis of others. Support for synthesizing one UML diagram from other diagrams can provide the designer with significant help, thus speeding up the design process, decreasing the risk of errors, and guaranteeing consistency among the diagrams. Objective: The aim of this article is to provide a comprehensive summary of UML synthesis techniques as they have been described in literature to date in order to obtain an extensive and detailed overview of the current research in this area. Method: We have performed a Systematic Mapping Study by following well-known guidelines. We selected ten primary studies by means of a search with seven search engines performed on October 2, 2013. Results: Various results are worth mentioning. First it appears that researchers have not frequently published papers concerning UML synthesis techniques since 2004 (with the exception of two papers published in 2010). Only half of the UML diagram types are involved in the synthesis techniques we discovered. The UML diagram type most frequently used as the source for synthesizing another diagram is the sequence diagram (66.7%), and the most synthesized diagrams are the state machine diagram (58.3%) and the class diagram (25%). Conclusion: The fact that we did not obtain a large number of primary studies over a 14-year period (only ten papers) indicates that synthesizing a UML diagram from other UML diagrams is not a particularly active line of research. Research on UML diagram synthesis is nevertheless relevant since synthesis techniques rely on or enforce diagram consistency, and studying UML diagram consistency is an active line of research. Another result is that research is needed to investigate synthesis techniques for other types of UML diagrams than those involved in our primary studies.
- Date Created:
- 2015-08-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Opp, James and Whitehead, Anthony D.
- Abstract:
- In this work we discuss our efforts to use the ubiquity of smart phone systems and the mobility they provide to stream historical information about the user's current location to that user. We propose the concept of timescapes to portray the historical significance of where users are standing and to allow a brief travel through time. By combining GPS location with a rich media interpretation of existing historical documents, historical facts become an on-demand resource available to travellers, school children, historians and any interested third party. To our knowledge this is the first introduction of the term timescape used in the context of historical information pull.
- Date Created:
- 2013-09-05
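A rough sketch of the location-based retrieval idea described above: given the user's GPS fix, nearby historical records can be selected with a great-circle distance filter. The record structure, coordinates and radius below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: pull historical records within a radius of the user's GPS fix.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical records: (title, year, latitude, longitude)
records = [
    ("Rideau Canal construction", 1832, 45.4247, -75.6950),
    ("Parliament Hill fire", 1916, 45.4236, -75.7009),
]

def timescape(lat, lon, radius_km=1.0):
    """Return the records close enough to the user's position, oldest first."""
    nearby = [r for r in records if haversine_km(lat, lon, r[2], r[3]) <= radius_km]
    return sorted(nearby, key=lambda r: r[1])

print(timescape(45.4231, -75.6983))
```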
-
- Resource Type:
- Report
- Creator:
- Labiche, Yvan and Alkhalid, A.
- Abstract:
- The practitioner interested in reducing software verification effort may find herself lost in the many alternative definitions of Graphical User Interface (GUI) testing that exist and their relation to the notion of system testing. One result of these many definitions is that one may end up testing the same parts of the Software Under Test (SUT) twice, specifically the application logic code. To clarify two important testing activities and help avoid duplicate testing effort, this paper studies possible differences between GUI testing and system testing experimentally. Specifically, we selected a SUT equipped with system tests that directly exercise the application code; we used GUITAR, a well-known GUI testing tool, to GUI-test this SUT. Experimental results show important differences between system testing and GUI testing in terms of structural coverage and test cost.
- Date Created:
- 2016-09-08
-
- Resource Type:
- Report
- Creator:
- Labiche, Yvan, Genero, Marcela, and Torre, Damiano
- Abstract:
- Context: The Unified Modeling Language (UML), with its 14 different diagram types, is the de-facto standard tool for object-oriented modeling and documentation. Since the various UML diagrams describe different aspects of one, and only one, software under development, they are not independent but strongly depend on each other in many ways. In other words, the UML diagrams describing a software system must be consistent. Inconsistencies between these diagrams may considerably increase the number of faults in software systems. It is therefore paramount that these inconsistencies be detected, analyzed and, hopefully, fixed. Objective: The aim of this article is to deliver a comprehensive summary of UML consistency rules as they are described in the literature to date to obtain an extensive and detailed overview of the current research in this area. Method: We performed a Systematic Mapping Study by following well-known guidelines. We selected 94 primary studies from a search with seven search engines performed in December 2012. Results: Different results are worth mentioning. First it appears that researchers tend to discuss very similar consistency rules, over and over again. Most rules are horizontal (98.07%) and syntactic (88.03%). The most used diagrams are the class diagram (71.28%), the state machine diagram (42.55%) and the sequence diagram (47.87%). Conclusion: The fact that many rules are duplicated in primary studies confirms the need for a well-accepted list of consistency rules. This paper is a first step in this direction. Results indicate that much more work is needed to develop consistency rules for all 14 UML diagrams, in all dimensions of consistency (e.g., semantic and syntactic on the one hand; horizontal, vertical and evolution on the other hand).
- Date Created:
- 2014-01-01
-
- Resource Type:
- Report
- Creator:
- Mehrfard, Hossein and Labiche, Yvan
- Abstract:
- Reverse-engineering object interactions from source code can be done through static, dynamic, or hybrid (static plus dynamic) analyses. In the latter two, monitoring a program and collecting runtime information translates into some overhead during program execution. Depending on the type of application, the imposed overhead can reduce the precision and accuracy of the reverse-engineered object interactions (the larger the overhead, the less precise or accurate the reverse-engineered interactions), to such an extent that the reverse-engineered interactions may not be correct, especially when reverse-engineering a multithreaded software system. One therefore seeks an instrumentation strategy that is as unintrusive as possible. In our past work, we showed that a hybrid approach is one step towards such a solution, compared to a purely dynamic approach, and that there is room for improvement. In this paper, we uncover, in a systematic way, other aspects of the dynamic analysis that can be improved to further reduce runtime overhead, and study alternative solutions. Our experiments show effective overhead reduction thanks to a modified procedure to collect runtime information.
- Date Created:
- 2015-11-01
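One way to picture the overhead problem discussed above: formatting and I/O performed inside the monitored program at every event are costly, so recording raw events into an in-memory buffer and deferring all processing until after execution keeps the probe cheap. The sketch below is a Python analogue of that general idea, not the Java collection procedure studied in the report.

```python
# Illustrative sketch: buffer raw call events in memory and defer formatting/I/O,
# one way to keep runtime monitoring overhead low.
import functools, time

TRACE = []  # (timestamp_ns, event, qualified name) tuples; processed after the run

def traced(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        TRACE.append((time.perf_counter_ns(), "enter", func.__qualname__))
        try:
            return func(*args, **kwargs)
        finally:
            TRACE.append((time.perf_counter_ns(), "exit", func.__qualname__))
    return wrapper

@traced
def fetch(order_id):
    return {"id": order_id}

@traced
def process(order_id):
    return fetch(order_id)

process(42)
# Formatting and output happen offline, after the monitored execution has finished.
for ts, event, name in TRACE:
    print(ts, event, name)
```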
-
- Resource Type:
- Report
- Creator:
- Torre, Damiano, Elaasar, Maged, Genero, Marcela, and Labiche, Yvan
- Abstract:
- UML diagrams describe different views of one piece of software. These diagrams strongly depend on each other and must therefore be consistent with one another, since inconsistencies between diagrams may be a source of faults during software development activities that rely on these diagrams. It is therefore paramount that consistency rules be defined and that inconsistencies be detected, analyzed and fixed. The relevant literature shows that authors typically define their own UML consistency rules, sometimes defining the same rules and sometimes defining rules that are already in the UML standard. The reason might be that no consolidated set of rules that are deemed relevant by authors can be found to date. The aim of our research is to provide a consolidated set of UML consistency rules and obtain a detailed overview of the current research in this area. We therefore followed a systematic procedure in order to collect and analyze UML consistency rules. We then consolidated a set of 116 UML consistency rules (avoiding redundant definitions or definitions already in the UML standard) that can be used as an important reference for UML-based software development activities, for teaching UML-based software development, and for further research.
- Date Created:
- 2016-07-01
-
- Resource Type:
- Report
- Creator:
- Labiche, Yvan and Asoudeh, Nesa
- Abstract:
- In this paper we propose a method and a tool to generate test suites from extended finite state machines, accounting for multiple (potentially conflicting) objectives. We aim at maximizing coverage and feasibility of a test suite while minimizing similarity between its test cases and minimizing overall cost. Therefore, we define a multi-objective genetic algorithm that searches for optimal test suites based on four objective functions. In doing so, we create an entire test suite at once as opposed to test cases one at a time. Our approach is evaluated on two different case studies, showing interesting initial results.
- Date Created:
- 2013-10-01
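The whole-test-suite, multi-objective evaluation described above can be sketched as objective functions over a candidate suite plus the Pareto-dominance test a multi-objective genetic algorithm uses to compare candidates. The definitions below are simplified stand-ins (feasibility, which requires executing the extended finite state machine, is omitted), not the authors' exact formulations.

```python
# Illustrative sketch of whole-test-suite, multi-objective evaluation.
# A test case is modelled as a tuple of EFSM transition ids; a suite is a list of them.
from itertools import combinations

def coverage(suite, all_transitions):          # to be maximized
    return len(set(t for tc in suite for t in tc)) / len(all_transitions)

def similarity(suite):                         # to be minimized: mean pairwise Jaccard overlap
    pairs = list(combinations(suite, 2))
    if not pairs:
        return 0.0
    jac = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
    return sum(jac(a, b) for a, b in pairs) / len(pairs)

def cost(suite):                               # to be minimized: total number of steps
    return sum(len(tc) for tc in suite)

def objectives(suite, all_transitions):
    # Express everything as "smaller is better" so the dominance test is uniform.
    return (1.0 - coverage(suite, all_transitions), similarity(suite), cost(suite))

def dominates(a, b):
    """a Pareto-dominates b if it is no worse on every objective and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

all_transitions = {"t1", "t2", "t3", "t4"}
s1 = [("t1", "t2"), ("t3", "t4")]
s2 = [("t1", "t2"), ("t1", "t2"), ("t3",)]
print(dominates(objectives(s1, all_transitions), objectives(s2, all_transitions)))  # True
```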
-
- Resource Type:
- Report
- Creator:
- Labiche, Yvan and Khalsa, Sunint Kaur
- Abstract:
- For functional testing based on the input domain of a functionality, parameters and their values are identified and a test suite is generated using a criterion exercising combinations of those parameters and values. Since software systems are large, resulting in large numbers of parameters and values, a technique based on combinatorics called Combinatorial Testing (CT) is used to automate the process of creating those combinations. CT is typically performed with the help of combinatorial objects called Covering Arrays. The goal of the present work is to determine available algorithms/tools for generating a combinatorial test suite. We tried to be as complete as possible by using a precise protocol for selecting papers describing those algorithms/tools. The 75 algorithms/tools we identified are then categorized on the basis of different comparison criteria, including: the test suite generation technique, the support for selection (combination) criteria, mixed covering array, the strength of coverage, and the support for constraints between parameters. Results can be of interest to researchers or software companies who are looking for a CT algorithm/tool suitable for their needs.
- Date Created:
- 2014-01-01
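The covering-array idea underlying combinatorial testing is easy to illustrate at strength 2 (pairwise coverage): every pair of values from any two parameters must appear together in at least one test. The greedy construction below is a naive sketch with a made-up input model, not one of the 75 surveyed algorithms/tools.

```python
# Naive greedy construction of a strength-2 (pairwise) covering array.
from itertools import combinations, product

parameters = {                       # hypothetical input model
    "os": ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome"],
    "protocol": ["http", "https"],
}
names = list(parameters)

# All parameter-value pairs that must be covered at least once.
required = {
    ((p1, v1), (p2, v2))
    for p1, p2 in combinations(names, 2)
    for v1 in parameters[p1]
    for v2 in parameters[p2]
}

def pairs_of(test):
    return {((p1, test[p1]), (p2, test[p2])) for p1, p2 in combinations(names, 2)}

suite = []
while required:
    # Pick the full-factorial row that covers the most still-uncovered pairs.
    best = max(
        (dict(zip(names, values)) for values in product(*parameters.values())),
        key=lambda t: len(pairs_of(t) & required),
    )
    suite.append(best)
    required -= pairs_of(best)

print(len(suite), "tests")   # fewer than the 12 exhaustive combinations
for t in suite:
    print(t)
```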
-
- Resource Type:
- Report
- Creator:
- Araujo, Wladimir, Briand, Lionel Claude, and Labiche, Yvan
- Abstract:
- Design by Contract (DbC) is a software development methodology that focuses on clearly defining the interfaces between components to produce better quality object-oriented software. The idea behind DbC is that a method defines a contract stating the requirements a client needs to fulfill to use it, the precondition, and the properties it ensures after its execution, the postcondition. Though there exists ample support for DbC for sequential programs, applying DbC to concurrent programs presents several challenges. Using Java as the target programming language, this paper tackles such challenges by augmenting the Java Modelling Language (JML) and modifying the JML compiler to generate Runtime Assertion Checking (RAC) code to support DbC in concurrent programs. We applied our solution in a carefully designed case study on a highly concurrent industrial software system from the telecommunications domain to assess the effectiveness of contracts as test oracles in detecting and diagnosing functional faults in concurrent software. Based on these results, clear and objective requirements are defined for contracts to be effective test oracles for concurrent programs whilst balancing the effort to design them. Main results include that contracts of a realistic level of completeness and complexity can detect around 76% of faults and reduce the diagnosis effort for such faults by at least ten times. We, therefore, show that DbC can not only be applied to concurrent software but can also be a valuable tool to improve the economics of software engineering.
- Date Created:
- 2013-09-01
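Design by Contract itself can be pictured with a small sketch: explicit preconditions, postconditions and an invariant checked at runtime inside the same critical section that protects the shared state. The Python assertions below are only a stand-in for JML-style runtime assertion checking; the paper's augmented JML and its Java tooling are not reproduced here.

```python
# Illustrative Design-by-Contract sketch: pre/postconditions and an invariant
# checked at runtime on a thread-safe bounded counter.
import threading

class BoundedCounter:
    def __init__(self, limit):
        assert limit > 0, "precondition: limit must be positive"
        self._limit = limit
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:   # the contract is checked inside the same critical section
            assert self._value < self._limit, "precondition: counter not at its limit"
            old = self._value
            self._value += 1
            assert self._value == old + 1, "postcondition: value increased by exactly one"
            assert 0 <= self._value <= self._limit, "invariant: value stays within bounds"
            return self._value

counter = BoundedCounter(limit=1000)
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(100)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter._value)   # 400; a violated contract would have raised AssertionError
```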
-
- Resource Type:
- Report
- Creator:
- Kolbah, Bojana and Labiche, Yvan
- Abstract:
- This paper discusses reverse engineering source code to produce UML sequence diagrams, with the aim of aiding program comprehension and other software life cycle activities (e.g., verification). As a first step we produce scenario diagrams using the UML sequence diagram notation. We build on previous work, now combining static and dynamic analyses of Java software, our objective being to obtain lightweight instrumentation and therefore disturb the software behaviour as little as possible. We extract the control flow graph from the software source code and obtain an execution trace by instrumenting and running the software. Control flow and trace information is represented as models, and UML scenario diagram generation becomes a model transformation problem. Our validation shows that we indeed reduce the execution overhead inherent to dynamic analysis, without losing in terms of the quality of the reverse-engineered information, and therefore in terms of the usefulness of the approach (e.g., for program comprehension).
- Date Created:
- 2011-09-01
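The last step of such an approach, turning a collected call trace into sequence-diagram notation, can be pictured with a small sketch that emits PlantUML text from enter/exit events. The trace format and the PlantUML target are illustrative assumptions; the paper works on Java models with a model-transformation pipeline.

```python
# Illustrative sketch: turn a flat enter/exit call trace into PlantUML sequence-diagram text.
def trace_to_plantuml(events):
    """events: ("enter"|"exit", "Class.method") tuples in execution order."""
    lines, stack = ["@startuml"], ["Main"]
    for kind, qualified in events:
        cls, method = qualified.split(".")
        if kind == "enter":
            lines.append(f"{stack[-1]} -> {cls} : {method}()")
            stack.append(cls)
        else:
            callee = stack.pop()
            lines.append(f"{callee} --> {stack[-1]}")
    lines.append("@enduml")
    return "\n".join(lines)

# Hypothetical trace from one execution scenario.
trace = [
    ("enter", "Controller.handle"),
    ("enter", "Repository.load"),
    ("exit",  "Repository.load"),
    ("exit",  "Controller.handle"),
]
print(trace_to_plantuml(trace))
```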
-
- Resource Type:
- Article
- Creator:
- Tudin, Susan
- Abstract:
- Every year, for over three decades, Carleton University in Ottawa, Ontario has participated with other local educational institutions in providing a week-long instruction program that introduces young students to higher education. Over 3,000 highly motivated participants in grades 8 to 11 attend from several school boards in both eastern Ontario and western Quebec. The Enriched Mini Course Program has become an important recruitment tool for each institution, and at Carleton University over 50 enriched mini courses are offered, including one recent addition by the MacOdrum Library staff. In this article, the author recounts how leading an enriched mini course for millennials in the university library's new Discovery Centre is an innovative initiative that demonstrates the significance of the academic library in the local community, and how staff collaboration helps to develop team building and positive vibes with the millennials.
- Date Created:
- 2016-07-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Neely, Colleen, Davis, Kate, Jin, Lei, and Rykse, Harriet
- Abstract:
- Presented at Electronic Resources & Libraries Conference, Austin, TX, February 28, 2011.
- Date Created:
- 2011-02-28
-
- Resource Type:
- Conference Proceeding
- Creator:
- Hilton, Robert, Stoney, Christopher, and Shepherd, Robert
- Date Created:
- 2010-06-01
-
- Resource Type:
- Article
- Creator:
- Russell, Joshua, Coady, Joseph, Schott, Stephan, Duquette, Jean, Lafreniere, Keelia, and Chabot, Jean-Pierre
- Date Created:
- 2019-11-01
-
- Resource Type:
- Book
- Creator:
- Kropf, Joel and Cavell, Janice
- Abstract:
- “The first volume of Documents on Canadian External Relations was published in 1967, as Canada celebrated its first century of nationhood. Since then, volumes in this series have dealt with various periods in the history of Canadian foreign policy, from the Laurier era up to the Pearson years in government. The series currently includes 29 regular volumes and has reprinted over 20,000 documents, totalling almost 40,000 pages of text, making it the largest historical documentary project in Canada. The subject of this special volume, the Arctic, has an ever-growing importance for Canada as we approach our federation's 150th anniversary. This volume illuminates how and why Canada asserted its sovereignty over the Far North between 1874 and 1949, and it demonstrates how much Canadians today owe to the nation builders of the past”--Preface, p. [vi].
- Date Created:
- 2016-01-01
-
- Resource Type:
- Book
- Creator:
- Griffiths, Naomi E.S.
- Abstract:
- First edition published on the occasion of an exhibition of R.L. Griffith's paintings at Wallack Galleries in Ottawa: The Estate Collection: Pleasure and Solace, works by Robert Lewis Griffiths.
- Date Created:
- 2015-11-18
-
- Resource Type:
- Other
- Creator:
- Draayer, Ingrid
- Abstract:
- This guide combines the knowledge gathered during my long career coordinating the Carleton University Library exhibits program and my recent sabbatical research on exhibits and events in academic libraries. Between 1983, when I was hired as Exhibits Librarian at Carleton University Library, and 2002, when the Library had little space available for exhibits and I became Head of Access Services, I was responsible for running the Library’s exhibits program. After the latest renovation to MacOdrum Library was completed in the Fall of 2013 and included dedicated space for exhibits, I was once again asked to coordinate and produce exhibits for the Library. During my 2014/2015 sabbatical I investigated the current state of exhibits and events in academic libraries through literature and Web searches and site visits to a number of universities. The end result is this guide, which I hope is both practical and inspirational.
- Date Created:
- 2015-09-10
-
- Resource Type:
- Report
- Creator:
- Humeny, Courtney
- Abstract:
- The Iowa Gambling Task (IGT) is widely used to assess the role of emotion in decision making. However, there is only indirect evidence to support that the task measures emotion, and there are inconsistencies in performance within healthy populations who display risk-taking traits. Two hundred and fifty participants were assessed for psychopathy, sensation seeking, and impulsiveness. The IGT was compared with modified versions that directly manipulated emotion within the task by indexing reward and punishment cards with images varying in emotional content. Participants continued to learn to avoid risky decks in all versions of the IGT. The manipulation of emotional content within the task did affect performance: fearful images contributed to greater risky deck selections. Across the tasks, psychopathy showed the strongest relationship to risky deck selections, and lower levels of psychopathy were associated with decreased risky deck selections. However, psychopathy did not affect learning in the modified versions. Exploratory analysis of image valence found that negative images (compared to neutral) benefited learning for individuals with higher levels of psychopathy. Discussion will center on the benefits of manipulating emotion directly within the task as a means to assess the validity of the IGT.
- Date Created:
- 2016-02-02
-
- Resource Type:
- Report
- Creator:
- Humeny, Courtney
- Abstract:
- The debate surrounding how emotion and cognition are organized in the brain often leads to Damasio’s Somatic Marker Hypothesis. This theory endorses a highly interactive process between emotion and cognition, but has been criticized for being too broad to capture the specific links between the two. It also implies that emotion operates from a neural architecture that is dissociable from cognition. Although empirical findings from the Iowa Gambling Task lend support for the theory, this can promote a false dichotomy between emotion and cognition. Issues will be raised regarding the view that the theory and the task are ill-formulated to account for the phases of decision making. Further theoretical work may be required to align the task with Damasio’s view of emotion as integrated with cognition.
- Date Created:
- 2016-01-05
-
- Resource Type:
- Report
- Creator:
- Yisa, Felix
- Abstract:
- My study attempted to find out whether the old part of our brain (the limbic system) plays a significant role in how we detect the valence of blurry words without conscious awareness of what the words are. Ten participants were shown blurry words that could not be read and were asked to guess their valence, without a time limit. The hypotheses for this study were that participants would be accurate in detecting the valence of blurred words and that they would rate negative words most accurately. I also predicted that participants would attempt to read the words before rating valence, and that they would do so only in the beginning. The stimuli were shown to the participants on printed paper, with 10 blurred words per page, a 5-point Likert scale beside each blurred word, and a reference scale at the top of every page. The data showed a statistically significant difference between participants’ ability to detect the valence of blurred words and their ability with unblurred words (which is 100% accuracy): participants were significantly worse at detecting the valence of blurred words. There was no significant difference between participants’ ability to detect the valence of blurry neutral words and that of blurry nonsensical words; participants were equally accurate for both word types. Participants were, however, statistically better at detecting the valence of negative blurry words than positive blurry words, that is, better at detecting negative valence than other valences.
- Date Created:
- 2015-01-06
-
- Resource Type:
- Conference Proceeding
- Creator:
- Neely, Colleen
- Abstract:
- Webinar presented to members of the Ontario Council of University Libraries, June 29, 2011.
- Date Created:
- 2011-06-29
-
- Resource Type:
- Conference Proceeding
- Creator:
- Cross, Emma
- Abstract:
- Resource Description and Access is the new content standard coming Spring 2013, with national libraries using RDA effective March 30, 2013. Libraries need to address training for staff in all departments on how to interpret, catalogue and use RDA records.
- Date Created:
- 2013-02-13
-
- Resource Type:
- Conference Proceeding
- Creator:
- Duimovich, George
- Abstract:
- Presentation to Data Science Seminar at Carleton University, Institute for Data Science, May 11, 2016.
- Date Created:
- 2016-05-11
-
- Resource Type:
- Conference Proceeding
- Creator:
- Tudin, Susan
- Abstract:
- Poster presented at the Teaching & Learning Symposium, Carleton University, May 11, 2016
- Date Created:
- 2016-05-11
-
- Resource Type:
- Article
- Creator:
- Miller, James D. and Johnston-Miller, Mary Margaret
- Date Created:
- 2016-04-12
-
- Resource Type:
- Article
- Creator:
- Miller, James D. and Johnston-Miller, Mary
- Date Created:
- 2013-10-11
-
- Resource Type:
- Poster
- Creator:
- Hayward, Angela, Cross, Emma, and McGreal, Louise
- Abstract:
- Carleton University Library has an innovative staff development program to expand the skill set of e-book cataloguers to provide a comprehensive service to manage and expand access to e-books. In 2009 Carleton University Library hired its first e-book cataloguer in response to the rapid growth of digital resources in the Library collection; a second position was added in 2011. These positions have successfully evolved to incorporate a wide variety of duties related to e-books in response to a rapidly changing digital environment. Conference poster presented at the CLA annual conference, June 3 to 5, 2015, in Ottawa, Ontario.
- Date Created:
- 2015-06-03
-
- Resource Type:
- Report
- Creator:
- Duxbury, Linda E. and Bennell, Craig
- Abstract:
- Police in schools: in an era where the costs of policing are constantly under scrutiny from governing municipalities, the time has come for police agencies to re-evaluate the services they provide. To do this, they need to answer questions relating to the value that different activities they perform create in the communities they serve. In other words, they need to change the focus of the conversation from “what does this service cost” to “what value does this service provide.” This document summarizes key findings from a longitudinal (2014-2017), multi-method (quantitative, qualitative, and ethnographic analysis, along with a Social Return on Investment [SROI] analysis) case study undertaken to identify the value of School Resource Officers (SROs) who are employed by Peel Regional Police and work in the service’s Neighborhood Police Unit (NPU). Of note is the application of SROI techniques in this evaluation process. SROI, a methodology that emerged from the not-for-profit sector, helps researchers identify sources of value outside of those considered through traditional valuation techniques, such as cost-benefit analysis. Evaluation of Peel Police’s SRO program was motivated by a number of factors. First, the costs of this program are both easy to identify and significant (just over $9 million per year). Second, it is very challenging to identify the value that this program provides to students and the community. The challenge of quantifying the value offered by assigning full-time SROs to Canadian high schools is evidenced by the fact that such programs are rare, as police services around the world have responded to pressures to economize by removing officers from schools and either eliminating the role of the SRO or having one officer attend to many schools.
- Date Created:
- 2018-01-10
-
- Resource Type:
- Conference Proceeding
- Creator:
- Cross, Emma
- Date Created:
- 2017-05-31
-
- Resource Type:
- Article
- Creator:
- Becker, Hilary
- Date Created:
- 2014-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Bucking, Scott and Cotton, James S.
- Abstract:
- Net zero energy (NZE) communities are becoming pivotal to the energy vision of developers. Communities that produce as much energy as they consume provide many benefits, such as reducing life-cycle costs and better resilience to grid outages. If deployed using smart-grid technology, NZE communities can act as a grid node and aid in balancing electrical demand. However, identifying cost-effective pathways to NZE requires detailed energy and economic models. Information required to build such models is not typically available at the early master-planning stages, where the largest energy and economic saving opportunities exist. Methodologies that expedite and streamline energy and economic modeling could facilitate early decision making. This paper describes a reproducible methodology that aids modelers in identifying energy and economic savings opportunities in the early community design stages. As additional information becomes available, models can quickly be recreated and evaluated. The proposed methodology is applied to the first-phase design of a NZE community under development in Southwestern Ontario.
- Date Created:
- 2015-01-01
-
- Resource Type:
- Conference Proceeding
- Creator:
- Bucking, Scott, Zmeureanu, Radu, and Athienitis, Andreas
- Abstract:
- This paper presents a multi-objective redesign case study of an archetype solar house based on a near net zero energy (NZE) demonstration home located in Eastman, Quebec. Using optimization techniques, pathways are identified from the original design to both cost and energy optimal designs. An evolutionary algorithm is used to optimize trade-offs between passive solar gains and active solar generation, using two objective functions: net-energy consumption and life-cycle cost over a thirty-year life cycle. In addition, this paper explores different pathways to net zero energy based on economic incentives, such as feed-in tariffs for on-site electricity production from renewables. The main objective is to identify pathways to net zero energy that will facilitate the future systematic design of similar homes based on the concept of the archetype that combines passive solar design; energy-efficiency measures, including a geothermal heat pump; and a building-integrated photovoltaic system. Results from this paper can be utilized as follows: (1) systematic design improvements and applications of lessons learned from a proven NZE home design concept, (2) use of a methodology to understand pathways to cost and energy optimal building designs, and (3) to aid in policy development on economic incentives that can positively influence optimized home design.
- Date Created:
- 2014-01-01
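The two objective functions named above, net energy consumption and thirty-year life-cycle cost, can be sketched for a candidate design vector as follows. All design variables, loads, prices and the feed-in treatment are placeholders for illustration, not the Eastman case-study data or the authors' model.

```python
# Illustrative two-objective evaluation of a candidate house design.
# Design variables, loads and prices below are placeholders, not the case-study data.

def net_energy_kwh(design):
    """Annual purchased energy = loads reduced by passive gains, minus PV generation."""
    heating = design["base_heating_kwh"] * (1 - design["passive_solar_fraction"])
    pv = design["pv_area_m2"] * design["pv_yield_kwh_per_m2"]
    return heating + design["other_loads_kwh"] - pv

def life_cycle_cost(design, years=30, elec_price=0.10, feed_in_tariff=0.30, discount=0.03):
    """Capital cost plus discounted yearly energy bills (a negative bill is revenue)."""
    annual_net = net_energy_kwh(design)
    price = elec_price if annual_net >= 0 else feed_in_tariff   # net exports paid at the tariff
    annuity = sum(1 / (1 + discount) ** t for t in range(1, years + 1))
    return design["capital_cost"] + annual_net * price * annuity

design = {
    "capital_cost": 45_000.0, "base_heating_kwh": 12_000.0, "other_loads_kwh": 6_000.0,
    "passive_solar_fraction": 0.35, "pv_area_m2": 40.0, "pv_yield_kwh_per_m2": 180.0,
}
print(net_energy_kwh(design), life_cycle_cost(design))
```

An evolutionary algorithm would then search the space of such design vectors for the trade-off front between these two objectives.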
-
- Resource Type:
- Conference Proceeding
- Creator:
- Bucking, Scott
- Abstract:
- Net-zero energy is an influential idea in guiding the building stock towards renewable energy resources. Increasingly, this target is scaled to entire communities which may include dozens of buildings in each new development phase. Although building energy modelling processes and codes have been well developed to guide decision making, there is a lack of methodologies for community integrated energy masterplanning. The problem is further complicated by the availability of district systems which better harvest and store on-site renewable energy. In response to these challenges, this paper contributes an energy modelling methodology which helps energy masterplanners determine trade-offs between building energy saving measures and district system design. Furthermore, this paper shows that it is possible to mitigate electrical and thermal peaks of a net-zero energy community using minimal district equipment. The methodology is demonstrated using a cold-climate case-study with both significant heating/ cooling loads and solar energy resources.
- Date Created:
- 2017-07-25
-
- Resource Type:
- Conference Proceeding
- Creator:
- Bucking, Scott
- Abstract:
- Energy models are commonly used to examine the multitude of pathways to improve building performance. As presently practiced, a deterministic approach is used to evaluate incremental design improvements to achieve performance targets. However, significant insight can be gained by examining the implications of modeling assumptions using a probabilistic approach. Analyzing the effect of small perturbations on the inputs of energy and economic models can improve decision making and modeler confidence in building simulation results. This paper describes a reproducible methodology which aids modelers in identifying energy and economic uncertainties caused by variabilities in solar exposure. Using an optimization framework, uncertainty is quantified across the entire simulation solution space. This approach improves modeling outcomes by factoring in the effect of variability in assumptions and improves confidence in simulation results. The methodology is demonstrated using a net zero energy commercial office building case study.
- Date Created:
- 2017-01-01
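The probabilistic treatment described above amounts to perturbing an uncertain input, here annual solar exposure, many times and propagating each sample through the energy and cost model to see the spread of outcomes. The toy model and distribution parameters below are assumptions for illustration, not the paper's simulation framework.

```python
# Illustrative Monte Carlo sketch: propagate variability in solar exposure
# through a toy energy/cost model and report the spread of the outcome.
import random, statistics

def annual_energy_cost(solar_kwh_per_m2, pv_area_m2=200.0, load_kwh=60_000.0,
                       elec_price=0.12):
    """Toy model: purchased energy is the load minus on-site PV generation."""
    generated = pv_area_m2 * solar_kwh_per_m2 * 0.18      # assumed 18% panel efficiency
    return max(load_kwh - generated, 0.0) * elec_price

random.seed(1)
samples = [annual_energy_cost(random.gauss(mu=1400.0, sigma=100.0))  # assumed weather/shading spread
           for _ in range(10_000)]
ordered = sorted(samples)
print(f"mean ${statistics.mean(samples):,.0f}  "
      f"p5 ${ordered[500]:,.0f}  p95 ${ordered[9500]:,.0f}")
```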
-
- Resource Type:
- Research Paper
- Creator:
- Acheson, Keith and Maule, Christopher
- Abstract:
- An earlier version of this paper was prepared for a Symposium, Cultural Policies in Regional Integration, sponsored by the Center for the Study of Western Hemispheric Trade and the Mexican Center of the Institute of Latin American Studies, The University of Texas at Austin, February 2, 1998.
- Date Created:
- 1998-02-28
-
- Resource Type:
- Article
- Creator:
- Lim, Merlyna
- Abstract:
- The article scrutinizes the complex entanglement of cyberurban spaces in the making and development of contemporary social movement by analyzing its imaginaries, practices, and trajectories. This issue of New Geographies, “Geographies of Information” (edited by Taraneh Meskhani & Ali Fard), presents a new set of frameworks that refrain from generalizations to highlight the many facets of the socio-technical constructions, processes, and practices that form the spaces of information and communication. In addition to Lim, contributors of the issue include prominent thinkers and scholars in various related disciplines such as Rob Kitchin (critical data), Stephen Graham (urbanism) and Malcolm McCullough (architecture/urban computing).
- Date Created:
- 2015-10-20
-
- Resource Type:
- Report
- Creator:
- Daw, Jamie R., Mintzes, Barbara, Morgan, Steven G., Gagnon, Marc-André, Martin, Danielle, and Lexchin, Joel
- Date Created:
- 2015-07-15
-
- Resource Type:
- Research Paper
- Creator:
- Rowlands, Dane and Calleja, Rachael
- Abstract:
- The analysis of official development assistance has always struggled with the contradiction between its more altruistic motivations for global development and its easy adaptation as an instrument for the donor’s pursuit of self-interested foreign policy objectives. In the international system, foreign aid may thus become a forum for both cooperative and competitive interactions between donors. This chapter explores the interdependence of aid by reviewing the literature on donor interdependence, with a particular focus on donor competition for influence in recipient states. We then present a simple theoretical framework to examine donor competition, and provide some preliminary empirical testing of the resulting hypotheses. We conclude that while the evidence about competition is mixed, the behaviour of some donors is consistent with their pursuit of influence in certain recipient states.
- Date Created:
- 2015-03-01
-
- Resource Type:
- Research Paper
- Creator:
- Kilberg, Joshua, Vidino, Lorenzo, Lefkowitz, Josh, and Kohlmann, Evan
- Abstract:
- Since the early 2000s the Internet has become particularly crucial for the global jihadist movement. Nowhere has the Internet been more important in the movement’s development than in the West. While dynamics differ from case to case, it is fair to state that almost all recent cases of radicalization in the West involve at least some digital footprint. Jihadists, whether structured groups or unaffiliated sympathizers, have long understood the importance of the Internet in general and social media in particular. Zachary Chesser, one of the individuals studied in this report, fittingly describes social media as “simply the most dynamic and convenient form of media there is.” As the trend is likely to increase, understanding how individuals make the leap to actual militancy is critically important. This study is based on the analysis of the online activities of seven individuals. They share several key traits. All seven were born or raised in the United States. All seven were active in the online and offline jihadist scene around the same time (mid‐ to late 2000s and early 2010s). All seven were either convicted of terrorism‐related offenses or, in the case of two of the seven, killed in terrorism‐related incidents. The intended usefulness of this study is not in making the case for monitoring online social media for intelligence purposes—an effort for which authorities throughout the West need little encouragement. Rather, the report is meant to provide potentially useful pointers in the field of counter‐radicalization. Over the past ten years many Western countries have devised more or less extensive strategies aimed at preventing individuals from embracing radical ideas or de‐radicalizing (or favoring the disengagement of) committed militants. (Canada is also in the process of establishing its own counter‐radicalization strategy.)
- Date Created:
- 2015-05-01
-
- Resource Type:
- Report
- Creator:
- Mount, Phil and Knezevic, Irena
- Abstract:
- REPORT HIGHLIGHTS - Opportunity for on-site food production comes from public and political support for ‘local food’, combined with a shortage of land for new producers - GIS study of Ontario healthcare properties shows 217 with more than one acre of arable land available, and 54 with more than five acres - Case studies demonstrate the benefits of a ‘farmer’— independent, staff member or community group—and/or labour force dedicated to the project - Initial and on-going viability correlates to the extent of institutional support, particularly staff time for project coordination - Institutional motivations for on-site food production initiatives vary, include mental and physical therapeutic benefits See more at the Project SOIL website.
- Date Created:
- 2015-09-30
-
- Resource Type:
- Research Paper
- Creator:
- Acheson, Keith and Liu, Xiguang
- Date Created:
- 1998-03-19
-
- Resource Type:
- Conference Proceeding
- Creator:
- Jackson, Edward T. and Schwartz, Karen
- Date Created:
- 2008-05-30
-
- Resource Type:
- Conference Proceeding
- Creator:
- van de Sande, Adje and Schwartz, Karen
- Date Created:
- 2008-05-30
-
- Resource Type:
- Article
- Creator:
- Fleming, Courtney, Morel, Vanessa, Schwartz, Karen, Armstrong, Meredith, O’Brien, Ann-Marie, and Moore, Patricia
- Abstract:
- This study uses an exploratory qualitative design to examine the lived experience of one group of service users on community treatment orders (CTOs). The study was designed and completed by four graduate students at Carleton University School of Social Work. Despite the unique features of CTO legislation in Ontario, many findings from this study are remarkably similar to findings of research conducted in other jurisdictions. What is unique in our findings is the lack of focus on the actual conditions and provision of the CTO. The issue for our participants was less about the CTO itself, and more about the labels, control and discrimination associated with severe mental illness. Cette étude utilise un concept qualitatif et exploratoire pour examiner les expériences vécues d’un groupe qui utilise les ordonnances de traitement en milieu communautaire (OTMC). Cette étude a été designée et complétée par 4 étudiants de l’école de service social de l’université Carleton. Malgré les nombreux aspects uniques de la loi gérant les OTMC de l’Ontario, plusieurs résultats de cette étude sont remarquablement similaires aux résultats découverts dans de différentes juridictions. L’élément unique de cette recherche est le manque de focus sur les conditions véritables et les provisions des OTMC. La problématique encourue par les participants n’était pas au sujet des OTMC en soi, mais plus tôt au sujet de l’étiquetage, du contrôle, et de la discrimination associé aux troubles de santé mentale sévères.
- Date Created:
- 2010-01-12
-
- Resource Type:
- Report
- Creator:
- Morris, Marika
- Description:
- The study was initiated as Canada’s contribution to the Wilson Center’s Global Women’s Leadership Initiative Women in Public Service Project, launched by Hillary Clinton when she was Secretary of State. It was carried out in partnership with the Centre for Women in Politics and Public Leadership, the Gender Equality Measurement Initiative, and the Centre for Research on Women and Work at Carleton University, and with the Public Service Commission of Canada.
- Abstract:
- This study was undertaken to determine whether women in leadership positions in the Canadian federal Public Service (PS) have had an impact on policy, programs, operations, administration or workplace conditions, what that impact might be, and how to measure it. Drawing from qualitative interviews with current and retired Executives and Deputy Ministers in the Canadian federal public service, it provides recommendations and considerations around gender and impact moving forward.
- Date Created:
- 2016-06-22
-
- Resource Type:
- Report
- Creator:
- Higgins, Christopher and Duxbury, Linda E.
- Abstract:
- This report provides key findings and recommendations from a study of work-life conflict and employee well-being that involved 4500 police officers working for 25 police forces across Canada. Findings from this study should help police forces across Canada implement policies and practices that will help them thrive in a "seller's market for labour."
- Date Created:
- 2012-03-01
-
- Resource Type:
- Report
- Creator:
- Duxbury, Linda E. and Higgins, Christopher
- Abstract:
- The study examined the work-life experiences of 25,000 Canadians who were employed full time in 71 public, private and not-for-profit organizations across all provinces and territories between June 2011 and June 2012. Two-thirds of survey respondents had incomes of $60,000 or more a year and two-thirds were parents. Previous studies were conducted in 1991 and 2001. “It is fascinating to see what has changed over time and what hasn’t,” said Duxbury. Among the findings: about two-thirds of Canadian employees still work a fixed nine-to-five schedule. Overall, the typical employee spends 50.2 hours in work-related activities a week. Just over half of employees take work home to complete outside regular hours. The use of flexible work arrangements such as a compressed work week (15 per cent) and flexible schedules (14 per cent) is much less common. Fifty-seven per cent of those surveyed reported high levels of stress. One-third of working hours are spent using email. Employees in the survey were twice as likely to let work interfere with family as the reverse. Work-life conflict was associated with higher absenteeism and lower productivity. Succession planning, knowledge transfer and change management are likely to be a problem for many Canadian organizations. There has been little career mobility within Canadian firms over the past several years.
- Date Created:
- 2012-10-25
-
- Resource Type:
- Article
- Creator:
- Sarma, Nandini and Knoerr, Hélène
- Date Created:
- 2015-02-05