Anomaly detection involves identifying observations that deviate from the normal behavior of a system. One of the ways to achieve this is by identifying the phenomena that characterize "normal" observations. Subsequently, based on the characteristics of data learned from the normal observations, new observations are classified as being either normal or not. Most state-of-the-art approaches, especially those which belong to the family of parameterized statistical schemes, work under the assumption that the underlying distributions of the observations are stationary. That is, they assume that the distributions that are learned during the training (or learning) phase, though unknown, are not time-varying. They further assume that the same distributions are relevant even as new observations are encountered. Although such a "stationarity" assumption is relevant for many applications, there are some anomaly detection problems where stationarity cannot be assumed. For example, in network monitoring, the patterns which are learned to represent normal behavior may change over time due to several factors such as network infrastructure expansion, new services, growth of the user population, etc. Similarly, in meteorology, identifying anomalous temperature patterns involves taking into account seasonal changes of normal observations. Detecting anomalies or outliers under these circumstances introduces several challenges. Indeed, the ability to adapt to changes in non-stationary environments is necessary so that anomalous observations can be identified even with changes in what would otherwise be classified as normal behavior. In this paper, we propose to apply weak estimation theory to anomaly detection in dynamic environments. In particular, we apply this theory to detect anomalous activities in system calls. Our experimental results demonstrate that our proposal is both feasible and effective for the detection of such anomalous activities.
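As a purely illustrative sketch of the idea (not this paper's exact scheme), a stochastic-learning-style weak estimator replaces the maximum-likelihood running average with a fixed-rate update, so estimates of system-call frequencies discount old evidence geometrically and can track a drifting notion of "normal". The class and method names below are hypothetical:

```python
import math

class WeakEstimator:
    """Multinomial weak estimator in the spirit of stochastic learning
    weak estimation: a fixed learning rate makes old observations decay
    geometrically, so the estimate tracks non-stationary distributions."""

    def __init__(self, symbols, lam=0.95):
        # lam close to 1 -> slow adaptation; smaller -> faster tracking
        self.lam = lam
        self.p = {s: 1.0 / len(symbols) for s in symbols}

    def update(self, symbol):
        """Move every probability toward the indicator of the observed symbol;
        this preserves sum(p) == 1 at each step."""
        for s in self.p:
            target = 1.0 if s == symbol else 0.0
            self.p[s] = self.lam * self.p[s] + (1.0 - self.lam) * target

    def surprise(self, window):
        """Average negative log-likelihood of a window of observations;
        large values suggest the window is anomalous under the estimate."""
        eps = 1e-9
        return -sum(math.log(self.p.get(s, 0.0) + eps) for s in window) / len(window)
```

A window of system calls whose surprise exceeds a threshold would be flagged; the threshold and the forgetting factor lam are tuning choices, and the sketch is only meant to convey why weak estimators remain responsive when the normal profile drifts.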
The observation of four-wave mixing (FWM) in single-walled carbon nanotubes (SWCNTs) deposited around a tilted fiber Bragg grating (TFBG) has been demonstrated. A thin, floating SWCNT film is manually wrapped around the outer cladding of the fiber, and FWM occurs between two core-guided laser signals through the TFBG-induced interaction of the core mode and cladding modes. The effective nonlinear coefficient is calculated to be 1.8 × 10^3 W^-1·km^-1. The wavelength of the generated idlers is tunable over a range of 7.8 nm.
A novel technique for increasing the sensitivity of tilted fibre Bragg grating (TFBG) based refractometers is presented. The TFBG sensor was coated with chemically synthesized silver nanowires 100 nm in diameter and several micrometres in length. A 3.5-fold increase in sensor sensitivity was obtained relative to the uncoated TFBG sensor. This increase is associated with the excitation of surface plasmons by orthogonally polarized fibre cladding modes at wavelengths near 1.5 μm. Refractometric information is extracted from the sensor via the strong polarization dependence of the grating resonances using a Jones matrix analysis of the transmission spectrum of the fibre.
Matching Dependencies (MDs) are a recent proposal for declarative entity resolution. They are rules that specify, given the similarities satisfied by values in a database, what values should be considered duplicates, and have to be matched. On the basis of a chase-like procedure for MD enforcement, we can obtain clean (duplicate-free) instances; actually, possibly several of them. The clean answers to queries (which we call the resolved answers) are invariant under the resulting class of instances. In this paper, we investigate a query rewriting approach to obtaining the resolved answers (for certain classes of queries and MDs). The rewritten queries are specified in stratified Datalog with negation and aggregation. In addition to the rewriting algorithm, we discuss the semantics of the rewritten queries, and how they could be implemented by means of a DBMS.
A collection of n anonymous mobile robots is deployed on a unit-perimeter ring or a unit-length line segment. Every robot starts moving at constant speed, and bounces each time it meets any other robot or a segment endpoint, changing its walk direction. We study the problem of position discovery, in which the task of each robot is to detect the presence and the initial positions of all other robots. The robots cannot communicate or perceive information about the environment in any way other than by bouncing. Each robot has a clock allowing it to observe the times of its bounces. The robots have no control over their walks, which are determined by their initial positions and starting directions. Each robot executes the same position detection algorithm, which receives input data in real time about the times of the bounces, and terminates when the robot is assured about the existence and the positions of all the robots. Some initial configurations of robots are shown to be infeasible: no position detection algorithm exists for them. We give complete characterizations of all infeasible initial configurations for both the ring and the segment, and we design optimal position detection algorithms for all feasible configurations. For the case of the ring, we show that all robot configurations in which not all the robots have the same initial direction are feasible, and we give a position detection algorithm working for all of them. The cost of our algorithm depends on the number of robots starting their movement in each direction. If the less frequently used initial direction is given to k ≤ n/2 robots, the time until completion of the algorithm by the last robot is 1/2 ⌈n/k⌉, and we prove that this time is optimal. In contrast to the case of the ring, for the unit segment we show that the family of infeasible configurations is exactly the set of so-called symmetric configurations.
We give a position detection algorithm which works for all feasible configurations on the segment in time 2, and this algorithm is also proven to be optimal.
The Hierarchical Optimized Link State Routing (HOLSR) protocol enhances the scalability and heterogeneity of traditional OLSR-based Mobile Ad-Hoc Networks (MANETs). It organizes the network in logical levels and nodes in clusters. In every cluster, it implements the mechanisms and algorithms of the original OLSR to generate and distribute control traffic information. However, the HOLSR protocol was designed with no security in mind. Indeed, it both inherits security threats from OLSR and adds new ones. For instance, the existence of misbehaving nodes can severely affect important HOLSR operations, such as cluster formation. Cluster IDentification (CID) messages are implemented to organize a HOLSR network in clusters. In every message, the hop count field indicates to the receiver the distance in hops to the originator. An attacker may maliciously alter the hop count field; as a consequence, a receiver node may join a cluster head that is farther away than it appears, and the scalability properties of a HOLSR network are affected by an unbalanced distribution of nodes per cluster. We present a solution based on the use of hash chains to protect mutable fields in CID messages. As a consequence, when a misbehaving node alters the hop count field in a CID message, the receiver nodes are able to detect and discard the invalid message.
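The abstract leaves out the construction, but the standard way a hash chain protects a monotonically increasing hop count (as in SEAD-style secure distance-vector protocols) can be sketched as follows; the function names and message layout here are assumptions for illustration only. The originator commits to h^max(seed) (distributed authentically, e.g. signed); each forwarder hashes the disclosed token once as it increments the hop count; and because hash functions are one-way, an attacker can increase, but never decrease, the claimed distance:

```python
import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(max_hops: int, seed: bytes = None):
    """Originator side: chain[i] = h applied i times to a random seed.
    chain[max_hops] is the anchor, distributed authentically."""
    seed = seed if seed is not None else os.urandom(32)
    chain = [seed]
    for _ in range(max_hops):
        chain.append(h(chain[-1]))
    return chain

def forward(token: bytes, hop_count: int):
    """Each relay hashes the token once while incrementing the hop count."""
    return h(token), hop_count + 1

def verify(anchor: bytes, token: bytes, hop_count: int, max_hops: int) -> bool:
    """Receiver side: hashing the token (max_hops - hop_count) more times
    must reproduce the anchor. A decreased hop count cannot pass without
    inverting the hash."""
    for _ in range(max_hops - hop_count):
        token = h(token)
    return token == anchor
```

A receiver that sees a token failing this check discards the CID message, which is the detection behavior described above.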
Given a set of n points in the plane, range diameter queries ask for the furthest pair of points in a given axis-parallel rectangular range. We provide evidence for the hardness of designing space-efficient data structures that support range diameter queries by giving a reduction from the set intersection problem. The difficulty of the latter problem is widely acknowledged, and it is conjectured to require nearly quadratic space in order to obtain constant query time, which is matched by known data structures for both problems, up to polylogarithmic factors. We strengthen the evidence by giving a lower bound for an important subproblem arising in solutions to the range diameter problem: computing the diameter of two convex polygons that are separated by a vertical line and are preprocessed independently requires almost linear time in the number of vertices of the smaller polygon, no matter how much space is used. We also show that range diameter queries can be answered much more efficiently for the case of points in convex position by describing a data structure of size O(n log n) that supports queries in O(log n) time.
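For intuition on the subproblem: the diameter of a single convex polygon is computable in linear time by the rotating-calipers technique, and the lower bound above says that nothing comparably efficient survives when two independently preprocessed polygons must be combined. A minimal sketch of the single-polygon case, assuming a strictly convex polygon with vertices in counter-clockwise order:

```python
def edge(pts, i):
    """Edge vector from vertex i to vertex i+1 (cyclically)."""
    (x0, y0), (x1, y1) = pts[i], pts[(i + 1) % len(pts)]
    return (x1 - x0, y1 - y0)

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def convex_diameter(pts):
    """Rotating calipers: an antipodal pointer j walks once around the
    polygon while i advances, so the total work is O(n)."""
    n = len(pts)
    if n <= 2:
        return 0.0 if n < 2 else dist(pts[0], pts[1])
    best, j = 0.0, 1
    for i in range(n):
        # Advance j while it is not yet antipodal to edge (i, i+1),
        # i.e. while the edge at j still turns "with" edge i.
        while cross(edge(pts, i), edge(pts, j)) > 0:
            j = (j + 1) % n
        best = max(best, dist(pts[i], pts[j]), dist(pts[(i + 1) % n], pts[j]))
    return best
```

The two-polygon variant studied in the paper cannot simply run this pointer walk, because the antipodal pairs realizing the diameter straddle both polygons.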
Instead of allowing the recovery of original passwords, forgotten passwords are often reset using online mechanisms such as password verification questions (PVQ methods) and password reset links in email. These mechanisms are generally weak, exploitable, and force users to choose new passwords. Emailing the original password exposes the password to third parties. To address these issues, and to allow forgotten passwords to be securely restored, we present a scheme called Mercury. Its primary mode employs user-level public keys and a personal mobile device (PMD) such as a smart-phone, netbook, or tablet. A user generates a key pair on her PMD; the private key remains on the PMD and the public key is shared with different sites (e.g., during account setup). For password recovery, the site sends the (public key)-encrypted password to the user's pre-registered email address, or displays the encrypted password on a webpage, e.g., as a barcode. The encrypted password is then decrypted using the PMD and revealed to the user. A prototype implementation of Mercury is available as an Android application.
The goal of the present intervention research was to test whether guided invented spelling would facilitate entry into reading for at-risk kindergarten children. The 56 participating children had poor phoneme awareness and, as such, were at risk of having difficulty acquiring reading skills. Children were randomly assigned to one of three training conditions: invented spelling, phoneme segmentation, or storybook reading. All children participated in 16 small-group sessions over eight weeks. In addition, children in the three training conditions received letter-knowledge training and worked on the same 40 stimulus words, which were created from an array of 14 letters. The findings were clear: at pretest, there were no differences between the three conditions on measures of early literacy and vocabulary, but, after training, the invented-spelling children learned to read more words than did the other children. As expected, the phoneme-segmentation and invented-spelling children were better on phoneme awareness than were the storybook-reading children. Most interestingly, however, the invented-spelling and phoneme-segmentation children performed similarly on phoneme awareness, suggesting that the differential effect on learning to read was not due to phoneme awareness per se. As such, the findings support the view that invented spelling is an exploratory process that involves the integration of phoneme and orthographic representations. With guidance and developmentally appropriate feedback, invented spelling provides a milieu for children to explore the relation between oral language and written symbols that can facilitate their entry into reading.
The new renewable fuels standard (RFS 2) aims to distinguish corn-ethanol that achieves a 20% reduction in greenhouse gas (GHG) emissions compared with gasoline. Field data from Kim et al. (2009) and from our own study suggest that geographic variability in the GHG emissions arising from corn production casts considerable doubt on the approach used in the RFS 2 to measure compliance with the 20% target. If regulators wish to require compliance of fuels with specific GHG emission reduction thresholds, then data from growing biomass should be disaggregated to a level that captures the variability in grain corn production, and the application of life cycle assessment to biofuels should be modified to capture this variability.
We test for the presence of time-varying parameters (TVP) in the long-run dynamics of energy prices for oil, natural gas and coal, within a standard class of mean-reverting models. We also propose residual-based diagnostic tests and examine out-of-sample forecasts. In-sample LR tests support the TVP model for coal and gas but not for oil, though companion diagnostics suggest that the model is too restrictive to conclusively fit the data. Out-of-sample analysis suggests a random-walk specification for oil price, and TVP models for both real-time forecasting in the case of gas and long-run forecasting in the case of coal.
This report provides key findings and recommendations from a study of work-life conflict and employee well-being that involved 4,500 police officers working for 25 police forces across Canada. Findings from this study should help police forces across Canada implement policies and practices that will help them thrive in a "seller's market for labour."
The study examined work-life experiences of 25,000 Canadians who were employed full time in 71 public, private and not-for-profit organizations across all provinces and territories between June 2011 and June 2012. Two-thirds of survey respondents had incomes of $60,000 or more a year and two-thirds were parents.
Previous studies were conducted in 1991 and 2001.
“It is fascinating to see what has changed over time and what hasn’t,’’ said Duxbury.
Among the findings:
About two-thirds of Canadian employees still work a fixed nine-to-five schedule.
Overall, the typical employee spends 50.2 hours in work-related activities a week. Just over half of employees take work home to complete outside regular hours.
The use of flexible work arrangements such as a compressed work week (15 per cent) and flexible schedules (14 per cent) is much less common.
Fifty-seven per cent of those surveyed reported high levels of stress.
One-third of working hours are spent using email.
Employees in the survey were twice as likely to let work interfere with family as the reverse.
Work-life conflict was associated with higher absenteeism and lower productivity.
Succession planning, knowledge transfer and change management are likely to be a problem for many Canadian organizations.
There has been little career mobility within Canadian firms over the past several years.
The term ‘fundraising methods’ refers to the tactics used by charities to generate current or future monies and gifts in kind to provide services to clients, fund research, and cover administrative costs. Under conditions of reduced financial support from government, fundraising is an important, even critical, source of revenue for charities. Equally important is access to accurate information on fundraising methods used by charities in Canada. This paper traces the evolution of fundraising data collected by Canada Revenue Agency (CRA) over the last ten years, compares definitions employed by CRA with examples drawn from the academic and practitioner literatures, and highlights methods not currently being tracked by the T3010 Registered Charity Information Return.
This research project is an examination of change in the fundraising activities employed by small Canadian registered charities (defined as registered charities with total annual revenues under $100,000) over the ten-year period from 2000 to 2009. Utilizing data from the Registered Charity Information Returns (T3010) filed by charities with the Canada Revenue Agency (CRA), the study provides a profile of fundraising methods used, examining trends in the types and number of fundraising methods utilized over the ten-year period. We analyze variation in terms of size, designation type (charitable organization/public foundation/private foundation), location (rural/urban), charitable activity (welfare, religion, education, health, benefit to the community, other), orientation (religious/secular), and geographic region (each province and territory; western Canada/central Canada/Maritimes/territories).
This paper is an overview of the important considerations that arise at the outset of a project. There are numerous ways that a work team may decide which methods should be prioritized among the many tools available for community engagement. As the project comes to grips with the scale and the scope of a 7-year project on Community Engagement, it will be essential to explore how the various evaluative methods (Theory of Change (ToC), Developmental Evaluation, Collective Impact, and Action Research) can be combined, and how evaluation scholars have typically approached these subjects in the past. Is it possible to use ‘Theory of Change’ at the same time as other methods? One may answer this question with a resounding “Yes!” In the community sector, there are many versions of a Theory of Change. The term may be applied both to one’s personalized impression of the arrow of change and to traditional Log Frame models for mapping long-term ‘policy change.’ Even if there are dilemmas in coming up with language to describe what is meant by “Theory of Change,” there are many opportunities for ToC to be fused with other methods, and tried and tested over the life of the CFICE project, whatever the original connotations of the researcher or community practitioner may be.
Cough sound discriminator algorithms are capable of distinguishing between dry and wet cough types. The performance of such algorithms, however, is affected by noise and reverberation which might exist in patients' environments. In this thesis, the performance of the previously developed cough sound discriminator in a noisy and reverberant room is quantitatively measured using the Linear Separation Score. Experiments revealed a significant decrease in the performance of the cough sound discriminator in the presence of noise and reverberation when a single microphone was used for cough sound acquisition. In order to improve this performance, a microphone array structure which included a maximum of 7 microphones was designed with a delay-and-sum beamformer. Experiments showed a significant improvement in the performance of the cough sound discriminator using a microphone array in noisy and reverberant environments. Finally, a Graphical User Interface was developed in order to visualize the beampattern emitted by the microphone array structure.
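The thesis's array geometry and sampling parameters are not given in this summary, but the delay-and-sum idea itself is simple to state: advance each microphone's signal by its known propagation delay toward the source and average, so the coherent cough signal adds in phase while uncorrelated noise and reverberant tails are attenuated. A minimal sketch with integer sample delays (a real system would also need fractional delays and delay estimation):

```python
def delay_and_sum(signals, delays):
    """Align each microphone signal by its per-microphone delay (in
    samples, e.g. round(path_difference / c * fs)) and average.

    signals: list of equal-length sample lists, one per microphone.
    Coherent source energy adds in phase; uncorrelated noise power is
    reduced by roughly the number of microphones.
    """
    n = len(signals[0])
    m = len(signals)
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            idx = t + d  # advance by the known delay to re-align
            if 0 <= idx < n:
                acc += sig[idx]
        out.append(acc / m)
    return out
```

With the delays matched to the source direction, the beamformer output reproduces the source waveform; sources from other directions are misaligned and smear out, which is what the beam pattern in the GUI visualizes.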
This research used a constructive methodology to design an organization using a combination of organization design and results-based management approaches. Drawing upon the business ecosystems literature and practical experience, design principles were used to guide the design of the organization that produces and disseminates the Technology Innovation Management Review, a journal concerned with the issues relating to launching and growing technology companies. A logic model links the organization's activities to outputs and expected outcomes across multiple time scales. An integrated performance management framework tracks the organization's progress toward those outcomes and provides a mechanism to continuously improve the organization by feeding these results into new cycles of ongoing redesign. The results from the first six months of the organization's operation provide lessons and action items that demonstrate the potential of this approach for broader application.