Two and a half hours of documentary footage from the June 21, 2019 proceedings of the Senate, when Bill C-81 received Royal Assent. The footage takes viewers behind the scenes with individuals closely involved in ensuring the Act received Royal Assent and features interviews with the Honourable Carla Qualtrough, Senator Jim Munson, James van Raalte, Sinead Tuite, Bill Adair, and Frank Folino.
Photobleaching of optical absorption bands in the 5 eV region and the creation of others at higher and lower energy have been examined for ArF (6.4 eV) and KrF (5 eV) excimer laser irradiation of 3GeO2:97SiO2 glasses. We report a difference, depending on which laser is used, in the transformation of the neutral oxygen monovacancy and of the germanium lone pair center (GLPC) into electron trap centers associated with fourfold coordinated Ge ions and Ge-E′ centers. Correlations between absorption bands and electron spin resonance signals were made after different steps of laser irradiation. It was found that the KrF laser generates twice as many Ge-E′ centers as the ArF laser for the same delivered energy dose. The main reason for this difference is the more efficient bleaching of the GLPC (5.14 eV) by the KrF laser compared to the ArF laser.
Silica-based thin-film multilayers are investigated as a means to enhance the effective second-order nonlinearity induced in silica glass structures by corona poling. Structures consisting of phosphorus-doped and undoped silica glass layers exhibit second harmonic generation (SHG) that is higher by an order of magnitude compared to the SHG in bulk silica glass poled under the same conditions. When the poled structure consists of two multilayered stacks separated in space, the stacks exhibit comparable poling-induced nonlinearities. This result suggests that the poling voltage is divided between the two stacks such that simultaneous poling of multiple regions within the sample is realized.
Samples of synthetic fused silica have been implanted at room temperature with silicon ions of energy 1.5 MeV. Fluences ranged from 10^11 to 10^13 cm^-2. Samples were probed using variable-energy positron annihilation spectroscopy. The Doppler-broadening S parameter corresponding to the implanted region decreased with increasing fluence and saturated at a fluence of 10^13 cm^-2. It is shown that the decrease in the S parameter is due to the suppression of positronium (Ps), which is formed in the preimplanted material, by the competing process of implantation-induced trapping of positrons. In order to satisfactorily model the positron data it was necessary to account for positron trapping due to defects created by both electronic and nuclear stopping of the implanted ions. Annealing of the 10^13 cm^-2 sample resulted in measurable recovery of the preimplanted S parameter spectrum at 350 °C and complete recovery to the preimplanted condition at 600 °C. Volume compaction was also observed after implantation. Upon annealing, the compaction was seen to decrease by 75%.
The effective indices of the cladding modes of optical fibers depend on the refractive index of the medium surrounding the fiber. We show experimentally and theoretically that while cladding modes with similar effective indices normally have similar refractometric sensitivities, the addition of a 50 nm thick gold sheath enhances the sensitivity of some EH modes by more than one order of magnitude while nearly completely suppressing the sensitivity of neighbouring HE modes (by three orders of magnitude, down to insignificant levels). A differential sensitivity of ∼1000 nm/(refractive index unit) is experimentally reported between adjacent EH and HE grating resonances.
A photolithographic method is described for fabricating refractive index Bragg gratings in photosensitive optical fiber by using a special phase mask grating made of silica glass. A KrF excimer laser beam (249 nm) at normal incidence is modulated spatially by the phase mask grating. The diffracted light, which forms a periodic, high-contrast intensity pattern with half the phase mask grating pitch, photoimprints a refractive index modulation into the core of photosensitive fiber placed behind, in close proximity to, and parallel with the mask; the phase mask grating striations are oriented normal to the fiber axis. This method of fabricating in-fiber Bragg gratings is flexible, simple to use, reduces the mechanical sensitivity of the grating writing apparatus, and is functional even with laser sources of low spatial and temporal coherence.
Anomaly detection involves identifying observations that deviate from the normal behavior of a system. One of the ways to achieve this is by identifying the phenomena that characterize "normal" observations. Subsequently, based on the characteristics of data learned from the normal observations, new observations are classified as being either normal or not. Most state-of-the-art approaches, especially those which belong to the family of parameterized statistical schemes, work under the assumption that the underlying distributions of the observations are stationary. That is, they assume that the distributions that are learned during the training (or learning) phase, though unknown, are not time-varying. They further assume that the same distributions are relevant even as new observations are encountered. Although such a "stationarity" assumption is relevant for many applications, there are some anomaly detection problems where stationarity cannot be assumed. For example, in network monitoring, the patterns which are learned to represent normal behavior may change over time due to several factors such as network infrastructure expansion, new services, growth of user population, etc. Similarly, in meteorology, identifying anomalous temperature patterns involves taking into account seasonal changes of normal observations. Detecting anomalies or outliers under these circumstances introduces several challenges. Indeed, the ability to adapt to changes in non-stationary environments is necessary so that anomalous observations can be identified even with changes in what would otherwise be classified as normal behavior. In this paper, we propose to apply weak estimation theory for anomaly detection in dynamic environments. In particular, we apply this theory to detect anomalous activities in system calls. Our experimental results demonstrate that our proposal is both feasible and effective for the detection of such anomalous activities.
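To make the scheme concrete, the following minimal Python sketch applies an SLWE-style weak estimator to a stream of system-call IDs: symbol probabilities are shrunk multiplicatively at each step so the estimates track a drifting distribution, and a window is flagged as anomalous when its log-likelihood under the current estimates falls below a threshold. The alphabet, forgetting factor, window size and threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of anomaly scoring with a stochastic-learning weak
# estimator (SLWE-style multiplicative update) over system-call IDs.
import math

class WeakEstimator:
    def __init__(self, alphabet, lam=0.99):
        self.lam = lam  # forgetting factor < 1 lets estimates track drift
        self.p = {s: 1.0 / len(alphabet) for s in alphabet}

    def update(self, symbol):
        # Shrink all estimates multiplicatively; move mass to the observed symbol.
        for s in self.p:
            self.p[s] *= self.lam
        self.p[symbol] += 1.0 - self.lam

    def log_likelihood(self, window):
        return sum(math.log(max(self.p.get(s, 0.0), 1e-12)) for s in window)

# Usage: score sliding windows of calls; low likelihood => anomalous.
est = WeakEstimator(alphabet=range(64))
trace = [1, 2, 3, 2, 1] * 20          # stand-in for a system-call trace
for call in trace:
    est.update(call)
window = trace[-10:]
score = est.log_likelihood(window)
print("anomalous" if score < -40.0 else "normal")  # threshold is an assumption
```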
Four-wave mixing (FWM) in single-walled carbon nanotubes (SWCNTs) deposited around a tilted fiber Bragg grating (TFBG) has been demonstrated. A thin, floating SWCNT film is manually wrapped around the outer cladding of the fiber, and FWM occurs between two core-guided laser signals through TFBG-induced interaction of the core mode and cladding modes. The effective nonlinear coefficient is calculated to be 1.8 × 10^3 W^-1 km^-1. The wavelength of the generated idlers is tunable over a range of 7.8 nm.
We report on the fabrication of a chirped phase mask that was used to create a fiber Bragg grating (FBG) device for the compensation of chromatic dispersion in long-haul optical transmission networks. Electron beam lithography was used to expose the grating onto a resist-coated quartz plate. After etching, this phase mask was used to holographically expose an index grating into the fiber core [K. O. Hill, F. Bilodeau, D. C. Johnson, and J. Albert, Appl. Phys. Lett. 62, 1035 (1993)]. The linear increase in the grating period, "chirp," is only 0.55 nm over the 10 cm grating. This is too small to be defined by computer-aided design and a digital deflection system. Instead, the chirp was incorporated by repeatedly rescaling the analog electronics used for field size calibration. Special attention must be paid to minimizing field stitching and exposure artifacts. This was done by using overlapping fields in a "voting" method. As a result, each grating line is exposed by the accumulation of three overlapping exposures at 1/3 dose. This translates any abrupt stitching error into a small but uniform change in the line-to-space ratio of the grating. The phase mask was used with the double-exposure photoprinting technique [K. O. Hill, F. Bilodeau, B. Malo, T. Kitagawa, S. Thériault, D. C. Johnson, J. Albert, and K. Takiguchi, Opt. Lett. 19, 1314 (1994)]: a KrF excimer laser holographically imprints an apodized chirped Bragg grating in a hydrogen-loaded SMF-28 optical fiber. Our experiments have demonstrated a spectral delay of −1311 ps/nm with a linearity of ±10 ps over the 3 dB bandwidth of the resonant wavelength of the FBG. The reflectance, centered on 1550 nm, shows a side-lobe suppression of −25 dB. Fabrication processes and optical characterization are discussed.
This paper presents ObjRecombGA, a genetic algorithm framework for recombining related programs at the object file level. A genetic algorithm guides the selection of object files, while a robust link resolver allows working program binaries to be produced from the object files derived from two ancestor programs. Tests on compiled C programs, including a simple web browser and a well-known 3D video game, show that functional program variants can be created that exhibit key features of both ancestor programs. This work illustrates the feasibility of applying evolutionary techniques directly to commodity applications.
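As a rough illustration of the setup (not the authors' implementation), the sketch below encodes one gene per object-file slot choosing which ancestor supplies that file, uses the system linker as a feasibility check, and runs a tiny selection loop; the file names, the cc driver invocation and the fitness stub are hypothetical.

```python
# Hypothetical sketch of object-file-level recombination: one gene per
# object file chooses the ancestor ("A" or "B") it is taken from; the
# linker decides whether the mix yields a working binary.
import random
import subprocess

OBJS = ["main.o", "render.o", "input.o", "net.o"]  # assumed common layout

def make_individual():
    return [random.choice("AB") for _ in OBJS]

def links_ok(ind, out="variant"):
    paths = [f"ancestor_{g}/{o}" for g, o in zip(ind, OBJS)]
    # The cc driver resolves symbols; a failed link gets zero fitness.
    return subprocess.run(["cc", "-o", out, *paths],
                          capture_output=True).returncode == 0

def fitness(ind):
    # Stand-in for running a test suite against the linked variant.
    return sum(g == "B" for g in ind) if links_ok(ind) else 0

pop = [make_individual() for _ in range(20)]
for _ in range(10):                                # tiny GA loop
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [[random.choice(g) for g in zip(*random.sample(parents, 2))]
                     for _ in range(10)]           # uniform crossover
print(max(pop, key=fitness))
```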
The underlying issues relating to the usability and security of multiple passwords are largely unexplored. However, we know that people generally have difficulty remembering multiple passwords. This reduces security since users reuse the same password for different systems or reveal other passwords as they try to log in. We report on a laboratory study comparing recall of multiple text passwords with recall of multiple click-based graphical passwords. In a one-hour session (short-term), we found that participants in the graphical password condition coped significantly better than those in the text password condition. In particular, they made fewer errors when recalling their passwords, did not resort to creating passwords directly related to account names, and did not use similar passwords across multiple accounts. After two weeks, participants in the two conditions had recall success rates that were not statistically different from each other, but those with text passwords made more recall errors than participants with graphical passwords. In our study, click-based graphical passwords were significantly less susceptible to multiple password interference in the short-term, while having comparable usability to text passwords in most other respects.
Online systems often struggle to account for the complicated self-presentation and disclosure needs of those with complex identities or specialized anonymity. Using the lenses of gender, recovery, and performance, our proposed panel explores the tensions that emerge when the richness and complexity of individual personalities and subjectivities run up against design norms that imagine identity as simplistic or one-dimensional. These models of identity not only limit the ways individuals can express their own identities, but also establish norms for other users about what to expect, causing further issues when the inevitable dislocations do occur. We discuss the challenges in translating identity into these systems, and how this is further marred by technical requirements and normative logics that structure cultures and practices of databases, algorithms and computer programming.
Developing applications for touch devices is hard. Developing touch-based applications for multi-user input is harder. The Multi-Touch for Java (MT4j) toolkit supports developing touch-based applications for multiple users. In this paper, we outline our experience using MT4j to develop a number of software applications that support developers working in co-located teams. Our experience with the toolkit will help developers understand its nuances and the design issues that carry over to other toolkits for developing multi-user touch-based applications.
Germanium ions have been implanted in fused silica using ion beams having energies of 3 and 5 MeV and doses ranging from 1×10^12 to 5×10^14 ions/cm^2. For wavelengths shorter than 400 nm, the optical absorption increases strongly with two absorption bands appearing at 244 and 212 nm. The ion-induced optical absorption can be bleached almost completely by irradiation with 249 nm excimer laser light. Ion implantation also increases the refractive index of silica near the substrate surface. At 632.8 nm a refractive index increase of more than 10^-2 has been measured. This decreases by 4×10^-3 upon bleaching with 249 nm light.
We have studied optical changes induced by ArF (6.4 eV/193 nm) excimer laser light illumination of high purity SiO2 implanted with Si^2+ (5 MeV) at a fluence of 10^15 ions/cm^2. Optical absorption was measured from 3 eV (400 nm) to 8 eV (155 nm) and showed evidence of several well-defined absorption bands. A correlation in the bleaching behavior appears to exist between the so-called D band (located at 7.15 eV) and the well-known B2α band which is attributed to oxygen vacancies. Changes in the refractive index as a function of ArF illumination were measured and found to be in good quantitative agreement with a Kramers-Kronig analysis of the optical absorption data.
A two-step double ion-exchange process is employed to produce dual-core waveguides in glass. First, potassium ion exchange is carried out at 400°C. Then, silver ion exchange is performed at 300°C. The fabricated waveguides have low losses, large single-mode regions, and more symmetrical profiles than single ion-exchanged waveguides. Etched gratings are also made in dual-core waveguides. Very high efficiencies are demonstrated in these waveguides.
A fiber twist sensor based on the surface plasmon resonance (SPR) effect of an Au-coated tilted fiber Bragg grating (TFBG) is proposed. The SPR response to twist of an Au-coated TFBG immersed in distilled water is studied theoretically and experimentally. The results show that the transmitted power around the SPR wavelength changes with the twist angle. For twist ranging from 0° to 180° in the clockwise or anti-clockwise direction, the proposed sensor shows sensitivities of 0.037 dBm/° (S-polarized) and 0.039 dBm/° (P-polarized), almost 7.5 times higher than that of a comparable existing twist sensor.
Phosphate glass samples doped with silver ions through a Na+-Ag+ ion-exchange process were treated in a hydrogen atmosphere at temperatures near 430 °C for durations ranging from 4 to 5 h. Such treatment causes metallic silver precipitation at the surface as well as nanoclustering of silver atoms under the surface under conditions very similar to those used for silicate glasses. The presence of silver clusters resulted in a characteristic coloring of the glass and was verified by the observation of a plasmon resonance peak near 410-420 nm in the absorption spectra. Applying a DC voltage between 1.4 and 2 kV at temperatures between 120 and 130 °C led to dissolution of the clusters in the area under the positive electrode, thereby bleaching the glass color. The use of a patterned doped-silicon electrode further led to the formation of a 300 nm thick surface relief on the glass surface and of a volume complex permittivity grating extending at least 4 μm under the surface. Such volume complex refractive index gratings may find applications in passive or active (laser) photonic devices in rare-earth doped phosphate glasses, where conventional bulk grating formation techniques have limited applicability.
The aeronautics industry is looking for ice protection systems that consume less energy. Electromechanical, and especially piezoelectric, solutions are a promising area of research for reducing average consumption. This article provides an analytical model of a simple structure to assess the power and voltage required to obtain delamination of the accumulated ice layer at the support/ice interface. This model also allows analyzing the impact of the resonance frequencies used for supplying piezoelectric actuators on the tensile stress in PZT materials. Finally, this article assesses the effect of different ice-phobic coatings combined with piezoelectric ice protection systems. Experimental measurements of ice adhesion for different ice-phobic coatings allow evaluating the shear stress at which ice detaches from the surface. These results are then used to estimate, by means of the proposed analytical model, the additional power savings that would be provided by the use of such coatings.
A variable diffraction efficiency phase mask is produced by focused ion beam, implanting a grating pattern into a fused SiO2 substrate with a 100-nm-diameter, 200 keV Si beam. The substrate is prepared by cleaning and coating with a 20-nm-thick film of Al to dissipate the ion charge. The pattern consists of 930 lines, each 80 μm long, at a pitch of 1.075 μm, to obtain a 1-mm-long grating. The substrate is wet etched in a 1 M HF solution for about 45 min to produce a phase mask with the desired diffraction efficiency. This phase mask is used to photoimprint Bragg gratings into standard hydrogenated single-mode telecommunication fibers using 193 nm light from an ArF laser.
Frost cracking, the breakdown of rock by freezing, is one of the most important mechanical weathering processes acting on Earth's surface. Insights on the mechanisms driving frost cracking stem mainly from laboratory and theoretical studies. Transferring insights from such studies to natural conditions, involving jointed bedrock and heterogeneous thermal and hydrological properties, is a major challenge. We address this problem with simultaneous in situ measurements of acoustic emissions, used as proxy of rock damage, and rock temperature/moisture content. The 1 year data set acquired in an Alpine rock wall shows that (1) liquid water content has an important impact on freezing-induced rock damage, (2) sustained freezing can yield much stronger damage than repeated freeze-thaw cycling, and (3) frost cracking occurs over the full range of temperatures measured, extending from 0 down to -15°C. These new measurements yield a slightly different picture than previous field studies where ice segregation appears to play an important role.
I examine the relation between sensation and discursive thought (dianoia) in Plato, Plotinus, and Proclus. In the Theaetetus, a soul whose highest faculty was sensation would have no unified experience of the sensible world, lacking universal ideas to give order to the sensible flux. It is implied that such universals are grasped by the soul's thinking. In Plotinus the soul is not passive when it senses the world, but as the logos of all things it thinks the world through its own forms. Proclus argues against the derivation of universal logoi from the senses, which on their own cannot make the sensible world comprehensible. At most they give a record of the original sense-impression in its particularity. The soul's own projected logoi give the sensible world stability. For Proclus, bare sensation does not depend on thought, but a unified experience of the sense-world depends on its paradigmatic logoi in our souls.
The techno-economic feasibility of retrofitting existing Canadian houses with a solar assisted heat pump (SAHP) is investigated. The SAHP architecture is adopted from previous studies conducted for the Canadian climate. The system utilizes two thermal storage tanks to store excess solar energy for use later in the day. The control strategy is defined to prioritise the use of solar energy for space and domestic hot water heating. Due to economic and technical constraints, a series of eligibility criteria are introduced for a house to qualify for the retrofit. A model was built in ESP-r and the retrofit was introduced into all eligible houses in the Canadian Hybrid Residential End-Use Energy and GHG Emissions model. Simulations were conducted for an entire year to estimate the annual energy savings and GHG emission reductions. Results show that SAHP system performance is strongly affected by climatic conditions, auxiliary energy sources and the fuel mixture for electricity generation. Energy consumption and GHG emissions of the Canadian housing stock can be reduced by about 20% if all eligible houses receive the SAHP system retrofit. Economic analysis indicates that incentive measures will likely be necessary to promote the SAHP system in the Canadian residential market.
Buildings play a significant role in climate change mitigation. In North America, energy used to construct and operate buildings accounts for some 40% of total energy use, largely originating from fossil fuels. The strategic reduction of these energy demands requires knowledge of potential upgrades prior to a building's construction. Furthermore, renewable energy generation integrated into building façades and district systems can improve the resiliency of community infrastructure. However, loads that are non-coincidental with on-site generation can cause load balancing issues. This imbalance is typically due to solar resources peaking at noon, whereas building loads typically peak in the morning and late afternoon or evenings. Ideally, the combination of on-site generation and localized storage could remedy such load balancing issues while reducing the need for fossil fuels. In response to these issues, this paper contributes a methodology that co-optimizes building designs and district technologies as an integrated community energy system. A distributed evolutionary algorithm is proposed that can navigate over 10^154 potential community permutations. This is the first time in the literature that a methodology demonstrates the co-optimization of buildings and district energy systems to reduce energy use in buildings and balance loads at this scale. The proposed solution is reproducible and scalable for future community masterplanning studies.
This study was conducted to assess the techno-economic feasibility of converting the Canadian housing stock (CHS) into net/near zero energy buildings by introducing and integrating highly efficient and renewable/alternative energy technologies in new construction and existing houses. Performance assessment of energy retrofit and renewable/alternative energy technologies in existing houses at regional and national scales is necessary to devise feasible strategies and incentive measures. The Canadian Hybrid Residential End-Use Energy and GHG Emissions model (CHREM), which utilizes a bottom-up modeling approach, is used to investigate the techno-economic feasibility of air to water heat pump retrofit in the Canadian housing stock. The proposed energy retrofit includes an air to water heat pump, auxiliary boiler, thermal storage tank, hydronic heat delivery and domestic hot water (DHW) heating. Energy savings, GHG emission changes and economic feasibility of the air source heat pump retrofit are considered in this study. Results show that there is a potential to reduce the energy consumption of the CHS by 36% and its GHG emissions by 23% if all eligible houses undertake the retrofit. Economic analysis indicates that the feasibility of air to water heat pump systems is strongly affected by the current status of primary energy use for electricity generation and space and DHW heating, as well as energy prices and economic conditions. Legislation, economic incentives and education for homeowners are necessary to enhance the penetration level of air to water heat pump retrofits in the CHS.
An improved understanding of the consumption patterns, end-uses, and temporal variations of electrical loads in houses is warranted because a significant fraction of a society's total electricity consumption occurs within residential buildings. In general, there is a lack of high-temporal-resolution data describing occupant electrical consumption that are available to researchers in this field. To address this, new measurements were performed and combined with data emanating from an earlier study to provide a database of annual measurements for 23 houses at a 1-min resolution that characterizes whole-house, non-HVAC, air conditioner, and furnace fan electrical draws, as well as the draw patterns of some major appliances. All houses were located in Ottawa, Canada. The non-HVAC measurements of this 23-house sample were shown to be in agreement with published estimates for the housing stock. The furnace fan was found to be the most significant end-use. These high-temporal-resolution data of electrical demands in houses can be used by researchers to increase the fidelity of building performance simulation analyses of different micro-generation technologies in residential buildings.
Fuel cells with nominal outputs of approximately 1 kW AC are emerging as a prime-mover of a micro-cogeneration system potentially well-suited to compete, on an energy basis, with conventional methods for satisfying occupant electrical and thermal demands in a residential application. As the energy benefits of these systems can be incremental when compared to efficient conventional methods, it is especially important to consider the uncertainties of the models on which simulation results are based. However, researchers have yet to take this aspect into account. This article makes a contribution by demonstrating how these model uncertainties may be propagated to the simulation results of a micro-cogeneration system for comparison to a reference scenario using a case study. This case study compares the energy performance of a fuel-cell based micro-cogeneration system serving only domestic hot water demands to an efficient reference scenario where the conventional methods for providing electrical and thermal demands are considered to be a central gas-fired combined-cycle plant and a condensing tankless water heater, respectively. The simulation results demonstrated that if model uncertainties were ignored, it would have been possible to demonstrate that the considered micro-cogeneration system was more efficient than the reference scenario for average consumption levels of domestic hot water. However, when model uncertainties were considered, the efficiency of the considered micro-cogeneration system could not reliably exceed that of the reference scenario by serving the domestic hot water needs of a single-family home.
Oral narrative skills are assumed to develop through parent-child interactive routines. One such routine is shared reading. A causal link between shared reading and narrative knowledge, however, has not been clearly established. The present research tested whether an 8-week shared-reading intervention enhanced the fictional narrative skills of children entering formal education. Dialogic reading, a shared reading activity that involves elaborative questioning techniques, was used to engage children in oral interaction during reading and to emphasize elements of story knowledge. Forty English-speaking five- and six-year-olds were assigned to either the dialogic-reading or an alternative-treatment group. ANCOVA results showed that the dialogic-reading children's post-test narratives were significantly better on structure and context measures than those of the alternative-treatment children, but results differed for produced versus retold narratives. The dialogic-reading children also showed expressive vocabulary gains. Overall, this study demonstrated that aspects of fictional narrative construction knowledge can be learned from interactive book reading.
The techno-economic impact of retrofitting houses in the Canadian housing stock with PV and BIPV/T systems is evaluated using the Canadian Hybrid End-use Energy and Emission Model. Houses with south-, south-east- and south-west-facing roofs are considered eligible for the retrofit, since solar irradiation is greatest on south-facing surfaces in the northern hemisphere. The PV system is used to produce electricity and supply the electrical demand of the house, with the excess electricity sold to the grid in a net-metering arrangement. The BIPV/T system produces electricity as well as thermal energy to supply the electrical and thermal demands for space and domestic hot water heating. The PV system consists of PV panels installed on the available roof surface, while the BIPV/T system adds a heat pump, thermal storage tank, auxiliary heater, domestic hot water heating equipment and a hydronic heat delivery system, and replaces the existing heating system in eligible houses. The study predicts the energy savings, GHG emission reductions and tolerable capital costs for regions across Canada. Results indicate that the PV system retrofit yields 3% energy savings and 5% GHG emission reduction, while the BIPV/T system yields 18% energy savings and 17% GHG emission reduction in the Canadian housing stock. While annual electricity use increases slightly, the fossil fuel use of the eligible houses decreases substantially due to the BIPV/T system retrofit.
It has been observed in the literature that as the cardinality of the prescribed discrete input-output data set increases, the corresponding four-bar linkages that minimise the Euclidean norm of the design and structural errors tend to converge to the same linkage. The important implication is that minimising the Euclidean norm, or any p-norm, of the structural error, which leads to a nonlinear least-squares problem requiring iterative solutions, can be accomplished implicitly by minimising that of the design error, which leads to a linear least-squares problem that can be solved directly. Accordingly, the goal of this paper is to take the first step towards proving that as the cardinality of the data set tends towards infinity the observation is indeed true. In this paper we integrate the synthesis equations in the range between minimum and maximum input values, thereby reposing the discrete approximate synthesis problem as a continuous one. Moreover, we prove that a lower bound of the Euclidean norm, and indeed of any p-norm, of the design error for planar RRRR function-generating linkages exists and is attained with continuous approximate synthesis.
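For concreteness, one common Freudenstein-style formulation shows why the design error leads to a linear least-squares problem and how the continuous reposing replaces the sum with an integral. Sign conventions vary across the literature; this is an illustrative form, not necessarily the paper's exact equations.

```latex
% Discrete design-error minimization: linear in the Freudenstein
% parameters k = (k_1, k_2, k_3) for prescribed pairs (psi_i, phi_i).
\min_{\mathbf{k}} \;\sum_{i=1}^{m}
  \bigl(k_1 + k_2\cos\psi_i - k_3\cos\varphi_i - \cos(\varphi_i - \psi_i)\bigr)^2
% Continuous reposing: integrate the squared design error over the
% whole input range instead of summing over discrete data points.
\qquad\longrightarrow\qquad
\min_{\mathbf{k}} \int_{\psi_{\min}}^{\psi_{\max}}
  \bigl(k_1 + k_2\cos\psi - k_3\cos\varphi(\psi) - \cos(\varphi(\psi) - \psi)\bigr)^2
  \,\mathrm{d}\psi
```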
Building Performance Simulation (BPS) is a powerful tool to estimate and reduce building energy consumption at the design stage. However, the true potential of BPS remains unrealized if trial-and-error simulation methods are practiced to identify combinations of parameters that reduce the energy use of design alternatives. Optimization algorithms coupled with BPS provide a process-oriented tool that identifies optimal building configurations under conflicting performance indicators. However, the application of optimization approaches to building design is not common practice due to time and computation requirements. This paper proposes a hybrid evolutionary algorithm which uses information gained during previous simulations to expedite and improve algorithm convergence using targeted deterministic searches. This technique is applied to a net-zero energy home case study to optimize trade-offs in passive solar gains and active solar generation using a cost constraint.
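A minimal sketch of the hybrid idea under stated assumptions: a small genetic algorithm explores discrete design variables, an archive reuses past (stand-in) simulation results, and the incumbent is periodically refined by a deterministic coordinate search. The variable encoding, rates and the energy_use stub are illustrative, not the paper's model.

```python
# Hybrid GA + targeted deterministic search over discrete design variables.
import random

N_VARS, POP, GENS = 8, 24, 40
LEVELS = [0, 1, 2, 3]                    # discrete options per design variable

def energy_use(x):                       # stand-in for a BPS evaluation
    return sum((v - 2) ** 2 for v in x) + random.random() * 0.01

archive = {}
def evaluate(x):
    key = tuple(x)
    if key not in archive:               # reuse information from prior runs
        archive[key] = energy_use(x)
    return archive[key]

def coordinate_descent(x):
    # Deterministic targeted search: try every level of every variable.
    best = list(x)
    for i in range(N_VARS):
        for lv in LEVELS:
            cand = best[:i] + [lv] + best[i + 1:]
            if evaluate(cand) < evaluate(best):
                best = cand
    return best

pop = [[random.choice(LEVELS) for _ in range(N_VARS)] for _ in range(POP)]
for g in range(GENS):
    pop.sort(key=evaluate)
    if g % 10 == 9:                      # periodic deterministic refinement
        pop[0] = coordinate_descent(pop[0])
    elite = pop[:POP // 2]
    pop = elite + [[random.choice(p) for p in zip(*random.sample(elite, 2))]
                   for _ in range(POP - len(elite))]
best = min(pop, key=evaluate)
print("best design:", best, "->", round(evaluate(best), 3))
```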
We describe a novel distributed storage protocol for Disruption (Delay) Tolerant Networks (DTNs). Since DTNs cannot guarantee the connectivity of the network at all times, distributed data storage and lookup must be performed in a store-and-forward way. In this work, we define local distributed location regions, called cells, to facilitate the data storage and lookup process. Nodes in a cell have a high probability of moving within their cells. Our protocol stores data items in cells that have a hierarchical structure, which reduces the routing information stored at nodes. Multiple copies of a data item may be stored at nodes to counter the adverse impact of the nature of DTNs. The cells are relatively stable regions and, as a result, data exchange overheads among nodes are reduced. Through experimentation, we show that the proposed distributed storage protocol achieves higher successful data storage ratios with lower delays and limited data item exchange requirements compared to other protocols in the literature.
We present a method of segmenting video to detect cuts with accuracy equal to or better than both histogram-based and other feature-based methods, while also running faster than other feature-based methods. By utilizing feature tracking on corners, rather than lines, we are able to reliably detect features such as cuts, fades and salient frames. Experimental evidence shows that the method withstands high-motion situations better than existing methods. Initial implementations using full-sized video frames achieve processing rates of 10-30 frames per second depending on the level of motion and the number of features being tracked; this includes the time to generate the MPEG-decompressed frames.
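The following minimal OpenCV sketch illustrates the corner-tracking principle (it is not the authors' implementation): corners are detected in each frame and tracked into the next with pyramidal Lucas-Kanade optical flow, and a cut is declared when most tracks are lost. The corner counts, quality settings and survival threshold are assumptions.

```python
# Illustrative corner-tracking cut detector: a cut is declared when most
# corners tracked from the previous frame fail to survive into the next.
import cv2

def detect_cuts(path, survival_thresh=0.25):
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    cuts, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=8)
        if p0 is not None:
            p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
            if st is not None and st.mean() < survival_thresh:
                cuts.append(idx)         # most corners lost => likely a cut
        prev_gray = gray
    cap.release()
    return cuts

print(detect_cuts("clip.mpg"))           # hypothetical input file
```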
The design and analysis of community-scale energy systems and incentives is a non-trivial task. The challenge of such undertakings is the well-documented uncertainty of building occupant behaviours. This is especially true in the residential sector, where occupants are given more freedom of activity compared to work environments. Further complicating matters is the dearth of available measured data. Building performance simulation tools are one approach to community energy analysis, however such tools often lack realistic models for occupant-driven demands, such as appliance and lighting (AL) loads. For community-scale analysis, such AL models must also be able to capture the temporal and inter-dwelling variation to achieve realistic estimates of aggregate electrical demand. This work adapts the existing Centre for Renewable Energy Systems Technology (CREST) residential energy model to simulate Canadian residential AL demands. The focus of the analysis is to determine whether the daily, seasonal, and inter-dwelling variation of AL demands estimated by the CREST model is realistic. An in-sample validation is conducted on the model using 22 high-resolution measured AL demand profiles from dwellings located in Ottawa, Canada. The adapted CREST model is shown to broadly capture the variation of AL demands observed in the measured data; however, seasonal variation in daily AL demand behaviour was found to be under-estimated by the model. The average and variance of daily load factors were found to be similar between measured and modelled data. The model was found to under-predict the daily coincidence factors of aggregated demands, although the variance of coincidence factors was shown to be similar between measured and modelled. A stochastic baseload input developed for this work was found to improve estimates of the magnitude and variation of both baseload and peak demands.
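As a toy illustration of the stochastic baseload idea (not the CREST implementation), the sketch below builds a 1-min resolution appliance-and-lighting profile from a noisy baseload plus randomly timed appliance events and reports the daily load factor; all rates and powers are assumptions.

```python
# Toy 1-min AL profile: stochastic baseload + Poisson-timed appliance events.
import numpy as np

rng = np.random.default_rng(0)
minutes = 24 * 60
baseload = rng.normal(60.0, 10.0, minutes).clip(min=20.0)   # W, stochastic
profile = baseload.copy()

# Appliance events: (power in W, expected events/day, duration in minutes).
for power, rate, dur in [(1800, 3, 45), (900, 5, 20), (150, 12, 90)]:
    for start in np.flatnonzero(rng.random(minutes) < rate / minutes):
        profile[start:start + dur] += power

load_factor = profile.mean() / profile.max()
print(f"daily load factor: {load_factor:.2f}")
```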
This article describes the progress made toward implementing Resource Description and Access (RDA) in libraries across Canada, as of Fall 2013. Differences in the training experiences in the English-speaking cataloging communities and French-speaking cataloging communities are discussed. Preliminary results of a survey of implementation in English-Canadian libraries are included as well as a summary of the support provided for French-Canadian libraries. Data analysis includes an examination of the rate of adoption in Canada by region and by sector. Challenges in RDA training delivery in a Canadian context are identified, as well as opportunities for improvement and expansion of RDA training in the future.
The rise of game development and game studies on university campuses prompts academic libraries to consider how to support teaching and research in this area. This article examines current issues and challenges in the development of game collections at academic libraries. The gaming ecosystem has become more complex and libraries may need to move beyond collections largely based on console video games. This article will advance the discussion by considering emerging issues to support access to the full range of games. The article will use examples from Carleton University Library, Ottawa, which has been developing a game collection since 2008.
An apodized in-fibre Bragg grating reflector is fabricated using the phase mask photoimprinting technique. The reflector has a centre wavelength of 1550 nm, a bandwidth of 0.22 nm and a peak reflectivity of 90%. At 0.4 nm (50 GHz) from the centre wavelength the reflectivity is 40 dB lower than the peak reflectivity; this is an improvement of more than 20 dB over an unapodized Bragg grating reflector with similar bandwidth and peak reflectivity.
Random Forests variable importance measures are often used to rank variables by their relevance to a classification problem and subsequently reduce the number of model inputs in high-dimensional data sets, thus increasing computational efficiency. However, as a result of the way that training data and predictor variables are randomly selected for use in constructing each tree and splitting each node, it is also well known that if too few trees are generated, variable importance rankings tend to differ between model runs. In this letter, we characterize the effect of the number of trees (ntree) and class separability on the stability of variable importance rankings and develop a systematic approach to define the number of model runs and/or trees required to achieve stability in variable importance measures. Results demonstrate that either a large ntree for a single model run, or values averaged across multiple model runs with fewer trees, are sufficient for achieving stable mean importance values. While the latter is far more computationally efficient, both methods tend to lead to the same ranking of variables. Moreover, the optimal number of model runs differs depending on the separability of classes. Recommendations are made to users regarding how to determine the number of model runs and/or trees that are required to achieve stable variable importance rankings.
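A short sketch of the stability check described above, using scikit-learn on a synthetic dataset (the dataset, run counts and tree counts are illustrative): importances are averaged over repeated runs and the resulting ranking is compared with that of a single large forest.

```python
# Compare variable-importance rankings from one large forest vs. the
# average over many smaller forests with different random seeds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

def mean_importances(n_runs, n_trees):
    imps = [RandomForestClassifier(n_estimators=n_trees, random_state=r)
            .fit(X, y).feature_importances_ for r in range(n_runs)]
    return np.mean(imps, axis=0)

rank_big  = np.argsort(mean_importances(n_runs=1,  n_trees=2000))
rank_many = np.argsort(mean_importances(n_runs=20, n_trees=100))
print("identical ranking:", np.array_equal(rank_big, rank_many))
```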
An apodized chirped in-fibre Bragg grating that has a linear dispersion characteristic is reported. The frequency components of an optical pulse (centre wavelength 1551 nm; 10 GHz bandwidth) incident on the grating are reflected with a relative delay that varies linearly from 0 to 130 ps across the spectral width of the pulse. The dispersion compensator is used to correct for the dispersion in a 100 km link (nondispersion shifted fibre) operating at a 10 Gbit/s transmission rate and a wavelength of 1551 nm.
When hydrogen loading is used to enhance the photosensitivity of silica-based optical waveguides and fibres, the presence of molecular hydrogen dissolved in the glass matrix changes the effective index of propagation of guided optical modes by as much as 0.05%. Real-time monitoring of the reflectivity spectrum of Bragg gratings written in such conditions shows that the centre wavelength follows the changes in hydrogen concentration due to diffusion and reaction with glass defects.
The core refractive index of Corning SMF-28 optical fibre exposed to ArF laser pulses increases with the square of the fluence per pulse. Bragg gratings with a refractive index modulation amplitude higher than 10^-3 have been obtained. This is an order of magnitude improvement over previously reported values for this type of fibre in the absence of treatment to enhance the photosensitivity.
Single-longitudinal-mode operation of Er3+-P2O5-codoped silica planar waveguide lasers which are equipped with integrated Bragg grating reflectors is demonstrated, with a polarized output of 340 μW at 1546 nm. The gratings are photo-imprinted using 193 nm light exposure through a phase mask in GeO2-free optical waveguides that have been sensitized by H2 loading.
The electrical resistivity distribution at the base of the La Soufrière of Guadeloupe lava dome is reconstructed using transmission electrical resistivity data obtained by injecting an electrical current between two electrodes located on opposite sides of the volcano. Several pairs of injection electrodes are used in order to constitute a data set spanning the whole range of azimuths, and the electrical potential is measured along a cable covering an angular sector of ≈120° along the base of the dome. The data are inverted to perform a slice electrical resistivity tomography (SERT) with specific functions implemented in the EIDORS open-source package dedicated to electrical impedance tomography applied to medicine and geophysics. The resulting image shows the presence of highly conductive regions separated by resistive ridges. The conductive regions correspond to unconsolidated material saturated by hydrothermal fluids. Two of them are associated with partial flank collapses and may represent large reservoirs that could have played an important role during past eruptive events. The resistive ridges may represent massive andesite and are expected to constitute hydraulic barriers.
A novel technique for increasing the sensitivity of tilted fibre Bragg grating (TFBG) based refractometers is presented. The TFBG sensor was coated with chemically synthesized silver nanowires 100 nm in diameter and several micrometres in length. A 3.5-fold increase in sensor sensitivity was obtained relative to the uncoated TFBG sensor. This increase is associated with the excitation of surface plasmons by orthogonally polarized fibre cladding modes at wavelengths near 1.5 μm. Refractometric information is extracted from the sensor via the strong polarization dependence of the grating resonances using a Jones matrix analysis of the transmission spectrum of the fibre.
Social defeat in mice is a potent stressor that promotes the development of depressive- and anxiety-like behaviours, as well as variations of neuroendocrine and brain neurotransmitter activity. Although environmental enrichment may protect against some of the adverse behavioural and biological effects of social defeat, it seems that, among male group-housed mice maintained in an enriched environment (EE), aggressive behaviours may be more readily instigated, thus promoting distress and exacerbating psychopathological features. Thus, although an EE can potentially have numerous beneficial effects, these may depend on the general conditions in which mice were raised. It was observed in the current investigations that EE group-housed BALB/cByJ mice displayed increased anxiety-like behaviours compared to their counterparts maintained in a standard environment (SE). Furthermore, in response to social defeat, EE group-housed male mice exhibited decreased weight gain, exaggerated corticosterone elevations and altered hippocampal norepinephrine utilization compared to their SE counterparts. These effects were not apparent in the individually housed EE mice and, in fact, enrichment among these mice appeared to buffer against serotonin changes induced by social defeat. It is possible that some potentially beneficial effects of enrichment were precluded among group-housed mice, possibly owing to social disturbances that might occur in these conditions. In fact, even if social interaction is an essential feature of enrichment, it seems that some of the positive effects of this housing condition might be optimal when mice are housed individually, particularly with regard to buffering the effects of social defeat.
A 100-kDa protein that is a main component of the microsomal fraction from rabbit gastric mucosa is phosphorylated by cAMP-dependent protein kinase (PKA) in the presence of 0.2% Triton X-100. Microsomes from rabbit gastric mucosa possess activity of H,K-ATPase but not activity of Na,K-ATPase. Incubation of microsomes with 5 μM fluorescein 5′-isothiocyanate (FITC) results in both an inhibition of H,K-ATPase and labeling of a protein with an electrophoretic mobility corresponding to the mobility of the protein phosphorylated by PKA. The data suggest that the α-subunit of H,K-ATPase can be a potential target for PKA phosphorylation.
This paper analyzes how the "particular symbolic fortunes" of Canada's most widely recognized literary prize, the Scotiabank Giller Prize, undergo what James English calls "capital intraconversion": how they are "culturally 'laundered'" through their association with Frontier College, Canada's longest-running adult literacy organization. While the Giller initially benefitted from fashioning itself as the private, industry-driven alternative to state-sponsored culture in Canada, increasing criticism of its corporate sponsorship has led, in the past decade, to a rebranding effort. This effort, I contend, seeks to benefit from two key terms: multiculturalism and literacy. Associated as the discourse of multiculturalism and the figure of the literate citizen are with the strong publics of the western, liberal-democratic nation-state, they possess a remarkable ability to accentuate the symbolic capital of Canada's most widely recognized literary prize.
Ca-ATPase activity in sarcoplasmic reticulum (SR) membranes isolated from skeletal muscles of the typical hibernator, the ground squirrel Spermophilus undulatus, is about 2-fold lower than that in SR membranes of rats and rabbits and is further decreased 2-fold during hibernation. The use of carbocyanine anionic dye Stains-All has revealed that Ca-binding proteins of SR membranes, histidine-rich Ca-binding protein and sarcalumenin, in ground squirrel, rat, and rabbit SR have different electrophoretic mobility corresponding to apparent molecular masses 165, 155, and 170 kDa and 130, 145, and 160 kDa, respectively; the electrophoretic mobility of calsequestrin (63 kDa) is the same in all preparations. The content of these Ca-binding proteins in SR membranes of the ground squirrels is decreased 3–4 fold and the content of 55, 30, and 22 kDa proteins is significantly increased during hibernation.
This article interrogates the question of what it means to be a scholar-commentator in the digital age. Deploying an autoethnographic style, the essay asks about the role of power and responsibility in teaching, research, and public commentary, particularly in the context of studying and engaging in Jewish politics. The article addresses questions about the proper role of the scholar in the academy and the role of subjectivity and political commitments in structuring scholarship, pedagogy, and public engagement. It also examines how one’s view of the profession can seem to shift through the emergence of new writing outlets and new forums for public engagement. Finally, the author investigates how a scholar’s own political commitments can shift over time, how one seeks to shore up identification on social media while trying to change hearts and minds through the op-ed pages, and how community identification can serve as a buffer and motivator for particular forms of research and political action.
There is a paradoxical relationship between the density of solar housing and net household energy use. The amount of solar energy available per person decreases as density increases. At the same time, transportation energy, and to some extent, household operating energy decreases. Thus, an interesting question is posed: how does net energy use vary with housing density? This study attempts to provide insight into this question by examining three housing forms: low-density detached homes, medium-density townhouses, and high-density high-rise apartments in Toronto. The three major quantities of energy that are summed for each are building operational energy use, solar energy availability, and personal transportation energy use. Solar energy availability is determined on the basis of an effective annual collector efficiency. The results show that under the base case in which solar panels are applied to conventional homes, the high-density development uses one-third less energy than the low-density one. Improving the efficiency of the homes results in a similar trend. Only when the personal vehicle fleet or solar collectors are made extremely efficient does the trend reverse: the low-density development results in lower net energy.
Current research depicts suburbs as becoming more heterogeneous in terms of socio-economic status. Providing a novel analysis, this paper engages with that research by operationalising suburban ways of living (homeownership, single-family dwelling occupancy and automobile use) and relating them to the geography of income across 26 Canadian metropolitan areas. We find that suburban ways of living exist in new areas and remain associated with higher incomes even as older suburbs, as places, have become more diverse. In the largest cities the relationship between income and suburban ways of living is weaker due to the growth of condominiums in downtowns that allow higher income earners to live urban lifestyles. Homeownership is overwhelmingly more important than other variables in explaining the geography of income across 26 metropolitan areas.
We study the feasibility and time of communication in random geometric radio networks, where nodes fail randomly with positive correlation. We consider a set of radio stations with the same communication range, distributed uniformly at random on a unit square region. In order to capture fault dependencies, we introduce the ranged spot model, in which damaging events, called spots, occur randomly and independently on the region, causing faults in all nodes located within distance s from them. Node faults within distance 2s become dependent in this model and are positively correlated. We investigate the impact of the spot arrival rate on the feasibility and the time of communication in the fault-free part of the network. We provide an algorithm which broadcasts correctly with probability 1 − ε in faulty random geometric radio networks of diameter D in time O(D + log(1/ε)).
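The fault model is easy to simulate; the following sketch (parameters are illustrative assumptions) places stations uniformly on the unit square, draws a Poisson number of spots, and removes every node within distance s of a spot, which induces the positive correlation between faults of nearby nodes.

```python
# Simulation of the ranged spot model on the unit square.
import numpy as np

def simulate(n=400, s=0.05, spot_rate=20.0, seed=1):
    rng = np.random.default_rng(seed)
    nodes = rng.random((n, 2))              # stations, uniform on unit square
    n_spots = rng.poisson(spot_rate)        # damaging events
    spots = rng.random((n_spots, 2))
    if n_spots == 0:
        return nodes
    # A node is faulty iff some spot lies within distance s of it; faults
    # of nodes within distance 2s of each other are therefore correlated.
    d = np.linalg.norm(nodes[:, None, :] - spots[None, :, :], axis=2)
    alive = (d > s).all(axis=1)
    return nodes[alive]

print(len(simulate()), "of 400 stations survive")
```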
In this paper, we present a novel semidefinite programming approach for multiple-instance learning. We first formulate multiple-instance learning as a combinatorial maximum margin optimization problem with additional instance selection constraints within the framework of support vector machines. Although solving this primal problem requires non-convex programming, we can nevertheless derive an equivalent dual formulation that can be relaxed into a novel convex semidefinite program (SDP). The size of the relaxed SDP scales with T, the number of instances, and it can be solved using a standard interior-point method. An empirical study shows promising performance of the proposed SDP in comparison with support vector machine approaches based on heuristic optimization procedures.
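For context, one standard max-margin formulation with instance selection from the multiple-instance SVM literature reads as follows; the paper's exact primal may differ, so treat this as an illustrative form:

```latex
% Each bag I carries a label Y_I in {+1, -1}; the max over its instances
% selects a witness, which is what makes the problem combinatorial and
% non-convex before relaxation.
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \;
  \tfrac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{I} \xi_I
\quad \text{s.t.} \quad
  Y_I \,\max_{i \in I}\bigl(\mathbf{w}^{\top}\mathbf{x}_i + b\bigr) \ge 1 - \xi_I,
  \qquad \xi_I \ge 0 .
```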
Let (S,d) be a finite metric space, where each element p ∈ S has a non-negative weight w(p). We study spanners for the set S with respect to the weighted distance function d_w, where d_w(p,q) = w(p) + d(p,q) + w(q) if p ≠ q, and 0 otherwise. We present a general method for turning spanners with respect to the d-metric into spanners with respect to the d_w-metric. For any given ε > 0, we can apply our method to obtain (5+ε)-spanners with a linear number of edges for three cases: points in Euclidean space ℝ^d, points in spaces of bounded doubling dimension, and points on the boundary of a convex body in ℝ^d where d is the geodesic distance function. We also describe an alternative method that leads to (2+ε)-spanners for points in ℝ^d and for points on the boundary of a convex body in ℝ^d. The number of edges in these spanners is O(n log n). This bound on the stretch factor is nearly optimal: in any finite metric space and for any ε > 0, it is possible to assign weights to the elements such that any non-complete graph has stretch factor larger than 2 − ε.
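A small self-contained sketch of the weighted metric and a brute-force stretch-factor check on a toy candidate spanner (the point set, weights and star topology are illustrative):

```python
# The weighted distance d_w(p,q) = w(p) + d(p,q) + w(q) (0 when p = q),
# with a brute-force stretch-factor check for a candidate spanner.
import heapq
import itertools
import math

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
w = {p: 0.5 for p in points}                 # non-negative weights

def d_w(p, q):
    return 0.0 if p == q else w[p] + math.dist(p, q) + w[q]

edges = {(points[0], q) for q in points[1:]}  # a star: candidate spanner

def graph_dist(p, q):                         # Dijkstra on the tiny graph
    dist, pq = {p: 0.0}, [(0.0, p)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == q:
            return du
        for a, b in edges:
            for u2, v in ((a, b), (b, a)):
                if u2 == u and du + d_w(u, v) < dist.get(v, math.inf):
                    dist[v] = du + d_w(u, v)
                    heapq.heappush(pq, (dist[v], v))
    return math.inf

t = max(graph_dist(p, q) / d_w(p, q)
        for p, q in itertools.combinations(points, 2))
print("stretch factor:", round(t, 3))        # spanner requires t <= target
```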
A black hole is a highly harmful host that disposes of visiting agents upon their arrival. It is known that it is possible for a team of mobile agents to locate a black hole in an asynchronous ring network if each node is equipped with a whiteboard of at least O(log n) dedicated bits of storage. In this paper, we consider the less powerful token model: each agent has available a bounded number of tokens that can be carried, placed on a node or removed from it. All tokens are identical (i.e., indistinguishable) and no other form of communication or coordination is available to the agents. We first prove that a team of two agents is sufficient to locate the black hole in finite time even in this weaker coordination model. Furthermore, we prove that this can be accomplished using only O(n log n) moves in total, which is optimal, the same as with whiteboards. Finally, we show that to achieve this result the agents need only O(1) tokens each.
A Semi-Separated Pair Decomposition (SSPD), with parameter s > 1, of a point set S is a set {(A_i, B_i)} of pairs of subsets of S such that for each i, A_i and B_i are contained in balls whose distance from each other is at least s times the radius of the smaller ball, and for any two points p, q ∈ S there is a unique index i such that p ∈ A_i and q ∈ B_i, or vice versa. In this paper, we use the SSPD to obtain the following results. First, we consider the construction of geometric t-spanners in the context of imprecise points and prove that any set of n imprecise points, modeled as pairwise disjoint balls, admits a t-spanner with a near-linear number of edges which can be computed in near-linear time; if all balls have the same radius, the number of edges reduces to linear. Secondly, for a set of n points in the plane, we design a query data structure for half-plane closest-pair queries that, for any ε > 0, can be built with near-linear space and answers queries in sublinear time, with a trade-off between preprocessing time, space and query time. Moreover, we improve the preprocessing time of an existing axis-parallel rectangle closest-pair query data structure from quadratic to near-linear. Finally, we revisit some previously studied problems, namely spanners for complete k-partite graphs and low-diameter spanners, and show how to use the SSPD to obtain simple algorithms for these problems.
We present a succinct representation of a set of n points on an n×n grid using n lg n + o(n lg n) bits that supports orthogonal range counting in O(lg n / lg lg n) time and range reporting in O((k+1) lg n / lg lg n) time, where k is the size of the output. This improves the query time of the previous result of Mäkinen and Navarro [1] by a factor of lg lg n, while using essentially the information-theoretic minimum space. Our data structure not only can be used as a key component in solutions to the general orthogonal range search problem to save storage cost, but also has applications in text indexing. In particular, we apply it to improve two previous space-efficient text indexes that support substring search [2] and position-restricted substring search [1]. We also use it to extend previous results on succinct representations of sequences of small integers, and to design succinct data structures supporting certain types of orthogonal range query in the plane.
The time required for a sequence of operations on a data structure is usually measured in terms of the worst possible such sequence. This, however, is often an overestimate of the actual time required. Distribution-sensitive data structures attempt to take advantage of underlying patterns in a sequence of operations in order to reduce time complexity, since access patterns are non-random in many applications. Unfortunately, many of the distribution-sensitive structures in the literature require a great deal of space overhead in the form of pointers. We present a dictionary data structure that makes use of both randomization and existing space-efficient data structures to yield very low space overhead while maintaining distribution sensitivity in the expected sense.
We present I/O-efficient algorithms to construct planar Steiner spanners for point sets and sets of polygonal obstacles in the plane, and to construct the "dumbbell" spanner of [6] for point sets in higher dimensions. As important ingredients of our algorithms, we present I/O-efficient algorithms to color the vertices of a graph of bounded degree, answer binary search queries on topology buffer trees, and preprocess a rooted tree for answering prioritized ancestor queries.
Let P be a simple polygon with m vertices and let U be a set of n points in P, which we consider to be users. We consider a game with two players, P1 and P2. In this game, P1 places a point facility inside P, after which P2 places another point facility inside P. We say that a user is served by its nearest facility, where distances are measured by the geodesic distance in P. The objective of each player is to maximize the number of users they serve. We show that, for any given placement of a facility by P1, an optimal placement for P2 can be computed in O(m + n(log n + log m)) time. We also provide a polynomial-time algorithm for computing an optimal placement for P1.
We present results related to answering shortest path queries on a planar graph stored in external memory. In particular, we show how to store rooted trees in external memory so that bottom-up paths can be traversed I/O-efficiently, and we present I/O-efficient algorithms for triangulating planar graphs and computing small separators of such graphs. Using these techniques, we can construct a data structure that allows for answering shortest path queries on a planar graph I/O-efficiently.
Matching Dependencies (MDs) are a recent proposal for declarative entity resolution. They are rules that specify, given the similarities satisfied by values in a database, which values should be considered duplicates and have to be matched. On the basis of a chase-like procedure for MD enforcement, we can obtain clean (duplicate-free) instances; in fact, possibly several of them. The clean answers to queries (which we call the resolved answers) are invariant under the resulting class of instances. In this paper, we investigate a query rewriting approach to obtaining the resolved answers (for certain classes of queries and MDs). The rewritten queries are specified in stratified Datalog^{not,s} with aggregation. In addition to the rewriting algorithm, we discuss the semantics of the rewritten queries, and how they could be implemented by means of a DBMS.
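Purely as an illustration of one chase step (the similarity predicate, threshold, matching function, and schema below are invented for the example; the actual MD semantics is more subtle), enforcing a rule of the form "if names are similar, match the phone values":

    import difflib

    def similar(a, b, threshold=0.8):
        # Crude string similarity standing in for an MD's similarity predicate.
        return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

    def match(v1, v2):
        # An illustrative matching function: prefer the longer value.
        return v1 if len(v1) >= len(v2) else v2

    def chase_step(records):
        # One enforcement step of the MD: name1 ~ name2 --> phone1 = phone2.
        out = [dict(r) for r in records]
        for i in range(len(out)):
            for j in range(i + 1, len(out)):
                if similar(out[i]["name"], out[j]["name"]):
                    m = match(out[i]["phone"], out[j]["phone"])
                    out[i]["phone"] = out[j]["phone"] = m
        return out

    db = [{"name": "Jon Smith", "phone": "555-1234"},
          {"name": "John Smith", "phone": "555-1234 ext. 7"}]
    print(chase_step(db))   # both tuples now share the matched phone value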
A collection of n anonymous mobile robots is deployed on a unit-perimeter ring or a unit-length line segment. Every robot starts moving at constant speed, and bounces each time it meets any other robot or segment endpoint, changing its walk direction. We study the problem of position discovery, in which the task of each robot is to detect the presence and the initial positions of all other robots. The robots cannot communicate or perceive information about the environment in any way other than by bouncing. Each robot has a clock allowing it to observe the times of its bounces. The robots have no control over their walks, which are determined by their initial positions and starting directions. Each robot executes the same position detection algorithm, which receives input data in real time about the times of the bounces, and terminates when the robot is assured about the existence and the positions of all the robots. Some initial configurations of robots are shown to be infeasible: no position detection algorithm exists for them. We give complete characterizations of all infeasible initial configurations for both the ring and the segment, and we design optimal position detection algorithms for all feasible configurations. For the case of the ring, we show that all robot configurations in which not all the robots have the same initial direction are feasible, and we give a position detection algorithm working for all such feasible configurations. The cost of our algorithm depends on the number of robots starting their movement in each direction. If the less frequently used initial direction is given to k ≤ n/2 robots, the time until completion of the algorithm by the last robot is (1/2)⌈n/k⌉. We prove that this time is optimal. In contrast to the case of the ring, for the unit segment we show that the family of infeasible configurations is exactly the set of so-called symmetric configurations. We give a position detection algorithm which works for all feasible configurations on the segment in time 2, and this algorithm is also proven to be optimal.
We motivate, formalize and investigate the notions of data quality assessment and data quality query answering as context-dependent activities. Contexts for the assessment and usage of a data source at hand are modeled as collections of external databases, which can be materialized or virtual, together with mappings within those collections and with the data source at hand. In this way, the context becomes "the complement" of the data source with respect to a data integration system. The proposed model allows for natural extensions, like considering data quality predicates, and even more expressive ontologies for data quality assessment.
We consider a problem which can greatly enhance the areas of cursive script recognition and the recognition of printed character sequences. This problem involves recognizing words/strings by processing their noisy subsequences. Let X* be any unknown word from a finite dictionary H. Let U be any arbitrary subsequence of X*. We study the problem of estimating X* by processing Y, a noisy version of U. Y contains substitution, insertion, deletion and generalized transposition errors — the latter occurring when transposed characters are themselves subsequently substituted. We solve the noisy subsequence recognition problem by defining and using the constrained edit distance between X ∈ H and Y, subject to any arbitrary edit constraint involving the number and type of edit operations to be performed. We present an algorithm to compute this constrained edit distance, and use it in a syntactic Pattern Recognition (PR) scheme which corrects noisy text containing all these types of errors. Experimental results involving strings of lengths between 40 and 80, with an average of 30.24 deleted characters and an overall average noise of 68.69%, demonstrate the superiority of our system over existing methods.
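For reference, the unconstrained variant of such a distance (substitutions, insertions, deletions, and adjacent transpositions) is the classic Damerau-Levenshtein recurrence; a sketch, without the edit-count constraints or the generalized transpositions that the paper handles:

    def damerau_levenshtein(x, y):
        # Edit distance with substitutions, insertions, deletions and
        # adjacent transpositions (restricted variant).
        m, n = len(x), len(y)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if x[i - 1] == y[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
                if (i > 1 and j > 1 and x[i - 1] == y[j - 2]
                        and x[i - 2] == y[j - 1]):
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
        return d[m][n]

    print(damerau_levenshtein("recognition", "reocgnition"))  # 1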
We consider the rendezvous problem for identical mobile agents (i.e., running the same deterministic algorithm) with tokens in a synchronous torus with a sense of direction, and show that there is a striking computational difference between one and more tokens. More specifically, we show that 1) two agents with a constant number of unmovable tokens, or with one movable token each, cannot rendezvous if they have o(log n) memory, while they can perform rendezvous with detection as long as they have one unmovable token and O(log n) memory; in contrast, 2) when two agents have two movable tokens each, then rendezvous (respectively, rendezvous with detection) is possible with constant memory in an arbitrary n × m (respectively, n × n) torus; and finally, 3) two agents with three movable tokens each and constant memory can perform rendezvous with detection in an n × m torus. This is the first publication in the literature that studies tradeoffs between the number of tokens, the memory and the knowledge the agents need in order to meet in such a network.
We present a tradeoff between the expected time for two identical agents to rendez-vous on a synchronous, anonymous, oriented ring and the memory requirements of the agents. In particular, we show that there exists a 2t-state agent which can achieve rendez-vous on an n-node ring in expected time O(n^2/2^t + 2^t), and that any t/2-state agent requires expected time Ω(n^2/2^t). As a corollary we observe that Θ(log log n) bits of memory are necessary and sufficient to achieve rendez-vous in linear time.
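The Θ(log log n) corollary can be checked by balancing the two terms of the upper bound (our arithmetic, not quoted from the paper): setting n^2/2^t = 2^t gives 2^t = n, i.e., t = log n, and expected time O(n^2/n + n) = O(n). The corresponding agent has 2t = 2 log n states, which can be encoded in log(2 log n) = Θ(log log n) bits. Conversely, an agent with s = t/2 states requires Ω(n^2/2^(2s)) = Ω(n^2/4^s) expected time, so linear expected time forces 4^s = Ω(n), i.e., s = Ω(log n) states and hence Ω(log log n) bits of memory.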
We prove that for all 0 ≤ t ≤ k and d ≥ 2k, every graph G with treewidth at most k has a 'large' induced subgraph H, where H has treewidth at most t and every vertex in H has degree at most d in G. The order of H depends on t, k, d, and the order of G. With t = k, we obtain large sets of bounded degree vertices. With t = 0, we obtain large independent sets of bounded degree. In both these cases, our bounds on the order of H are tight. For bounded degree independent sets in trees, we characterise the extremal graphs. Finally, we prove that an interval graph with maximum clique size k has a maximum independent set in which every vertex has degree at most 2k.
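As a side note on the objects in the final claim, a maximum independent set of an interval graph can be computed by the classic earliest-right-endpoint greedy scan; a sketch of the standard algorithm (not the paper's degree-bounded construction):

    def max_independent_set(intervals):
        # Scan intervals by right endpoint and greedily keep every interval
        # disjoint from the last one chosen (classic exchange argument).
        chosen, last_end = [], float("-inf")
        for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
            if lo > last_end:
                chosen.append((lo, hi))
                last_end = hi
        return chosen

    print(max_independent_set([(0, 3), (2, 5), (4, 7), (6, 9)]))  # [(0, 3), (4, 7)]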
The verification of non-functional requirements of software models (such as performance, reliability, scalability, security, etc.) requires the transformation of UML models into different analysis models such as Petri nets, queueing networks, formal logic, etc., which represent the system at a higher level of abstraction. The paper proposes a new "abstraction-raising" transformation approach for generating analysis models from UML models. In general, such transformations must bridge a large semantic gap between the source and the target model. The proposed approach is illustrated by a transformation from UML to Klaper (Kernel LAnguage for PErformance and Reliability analysis of component-based systems).
Given a connected geometric graph G, we consider the problem of constructing a t-spanner of G having the minimum number of edges. We prove that for every t with 1 < t < (1/4)·log n, there exists a connected geometric graph G with n vertices such that every t-spanner of G contains Ω(n^(1+1/t)) edges. This bound almost matches the known upper bound, which states that every connected weighted graph with n vertices contains a t-spanner with O(t·n^(1+2/(t+1))) edges. We also prove that the problem of deciding whether a given geometric graph contains a t-spanner with at most K edges is NP-hard. Previously, this NP-hardness result was only known for non-geometric graphs.
Persuasive technologies are increasingly ubiquitous, but the strategies they utilise largely originate in America. Consumer behaviour research shows us that certain persuasion strategies will be more effective on some cultures than others. We claim that the existing strategies will be less effective on non-American audiences than they are on American audiences, and we use information from interviews to show that there exists much scope to develop persuasive technologies from a collectivism-focused perspective. To illustrate the development of such a tool, we describe the design of a collectivism-focused financial planning tool.
We present the first local approximation schemes for maximum independent set and minimum vertex cover in unit disk graphs. In the graph model we assume that each node knows its geographic coordinates in the plane (location aware nodes). Our algorithms are local in the sense that the status of each node v (whether or not v is in the computed set) depends only on the vertices which are a constant number of hops away from v. This constant is independent of the size of the network. We give upper bounds for the constant depending on the desired approximation ratio. We show that the processing time which is necessary in order to compute the status of a single vertex is bounded by a polynomial in the number of vertices which are at most a constant number of hops away from it. Our algorithms give the best possible approximation ratios for this setting. The technique which we use to obtain the algorithm for vertex cover can also be employed for constructing the first global PTAS for this problem in unit disk graphs which does not need the embedding of the graph as part of the input.
Given an integer k ≥ 2, we consider the problem of computing the smallest real number t(k) such that for each set P of points in the plane, there exists a t(k)-spanner for P that has chromatic number at most k. We prove that t(2) = 3 and t(3) = 2, determine t(4) exactly, and give upper and lower bounds on t(k) for k > 4. We also show that for any ε > 0, there exists a (1 + ε)t(k)-spanner for P that has O(|P|) edges and chromatic number at most k. Finally, we consider an on-line variant of the problem where the points of P are given one after another, and the color of a point must be assigned at the moment the point is given. In this setting, we prove that t(2) = 3, determine t(3) and t(4), and give upper and lower bounds on t(k) for k > 4.
Intrusion detection, area coverage and border surveillance are important applications of wireless sensor networks today. They can be (and are being) used to monitor large unprotected areas so as to detect intruders as they cross a border or as they penetrate a protected area. We consider the problem of how to optimally move mobile sensors to the fence (perimeter) of a region delimited by a simple polygon in order to detect intruders trying either to enter its interior or to exit from it. We discuss several related issues and problems, propose two models, provide algorithms and analyze their optimal mobility behavior.
It is well known that the greedy algorithm produces high-quality spanners and is therefore used in several applications. However, for points in d-dimensional Euclidean space, the greedy algorithm has cubic running time. In this paper we present an algorithm that computes the greedy spanner (the spanner computed by the greedy algorithm) for a set of n points from a metric space with bounded doubling dimension in O(n^2 log n) time. Since the lower bound for computing such spanners is Ω(n^2), the time complexity of our algorithm is optimal to within a logarithmic factor.
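For contrast, the cubic-time baseline that this paper accelerates is simple to state: scan pairs by increasing distance and add an edge only when the current spanner distance is too large. A sketch (our code, using Dijkstra for the distance checks):

    import heapq, math

    def greedy_spanner(points, t):
        # Textbook greedy t-spanner: process pairs in order of distance and
        # add an edge only if the spanner-so-far stretches the pair beyond t.
        n = len(points)
        dist = lambda i, j: math.dist(points[i], points[j])
        adj = [[] for _ in range(n)]

        def spanner_dist(s, goal):
            # Dijkstra over the edges added so far.
            best = [math.inf] * n
            best[s] = 0.0
            pq = [(0.0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == goal:
                    return d
                if d > best[u]:
                    continue
                for v, w in adj[u]:
                    if d + w < best[v]:
                        best[v] = d + w
                        heapq.heappush(pq, (best[v], v))
            return math.inf

        edges = []
        pairs = sorted(((i, j) for i in range(n) for j in range(i + 1, n)),
                       key=lambda e: dist(*e))
        for i, j in pairs:
            w = dist(i, j)
            if spanner_dist(i, j) > t * w:
                adj[i].append((j, w))
                adj[j].append((i, w))
                edges.append((i, j))
        return edges

    pts = [(i % 6, i // 6) for i in range(24)]
    print(len(greedy_spanner(pts, t=1.5)))   # far fewer than the 276 possible edges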
We investigate the problem of locally coloring and constructing special spanners of location aware Unit Disk Graphs (UDGs). First, we present a local approximation algorithm for the vertex coloring problem in UDGs which uses at most four times as many colors as an optimal solution requires. Then we look at the colorability of spanners of UDGs. In particular, we present a local algorithm for constructing a 4-colorable spanner of a unit disk graph; the output consists of the spanner and the 4-coloring. The computed spanner also has the properties that it is planar, the degree of a vertex in the spanner is at most 5, and the angles between any two edges are at least π/3. By enlarging the locality distance (i.e., the size of the neighborhood that a vertex has to explore in order to compute its color) we can ensure that the total weight of the spanner is arbitrarily close to the weight of a minimum spanning tree. We prove that no local algorithm can compute a bipartite spanner of a unit disk graph, and therefore our algorithm uses at most one more color than any local algorithm requires for this task. Moreover, we prove that there is no local algorithm for 3-coloring UDGs or spanners of UDGs, even if 3-colorability of the graph (or the spanner, respectively) is guaranteed in advance.
The paper proposes to integrate performance analysis in the early phases of the model-driven development process for Software Product Lines (SPL). We start by adding generic performance annotations to the UML model representing the set of core reusable SPL assets. The annotations are generic and use the MARTE Profile recently adopted by OMG. A first model transformation realized in the Atlas Transformation Language (ATL), which is the focus of this paper, derives the UML model of a specific product with concrete MARTE performance annotations from the SPL model. A second transformation generates a Layered Queueing Network performance model for the given product by applying an existing transformation approach named PUMA, developed in previous work. The proposed technique is illustrated with an e-commerce case study that models the commonality and variability in both structural and behavioural SPL views. A product is derived and the performance of two design alternatives is compared.
Increasingly ubiquitous wireless technologies require novel localization techniques to pinpoint the position of an uncooperative node, whether the target be a malicious device engaging in a security exploit or a low-battery handset in the middle of a critical emergency. Such scenarios necessitate that a radio signal source be localized by other network nodes efficiently, using minimal information. We propose two new algorithms for estimating the position of an uncooperative transmitter, based on the received signal strength (RSS) of a single target message at a set of receivers whose coordinates are known. As an extension to the concept of centroid localization, our mechanisms weigh each receiver's coordinates based on the message's relative RSS at that receiver, with respect to the span of RSS values over all receivers. The weights may decrease from the highest RSS receiver either linearly or exponentially. Our simulation results demonstrate that for all but the most sparsely populated wireless networks, our exponentially weighted mechanism localizes a target node within the regulations stipulated for emergency services location accuracy.
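A minimal sketch of the weighting idea (parameter names and the decay constant are ours; the paper's exact weighting functions may differ):

    import math

    def weighted_centroid(receivers, rss, scheme="exp", alpha=0.1):
        # Estimate the transmitter position as a weighted centroid of the
        # receivers' known coordinates; a receiver's weight grows with the
        # message's RSS relative to the span of RSS values.
        lo, hi = min(rss), max(rss)
        if hi == lo:                       # all receivers hear equal power
            w = [1.0] * len(rss)
        elif scheme == "linear":
            w = [(r - lo) / (hi - lo) for r in rss]
        else:                              # exponential fall-off from the top
            w = [math.exp(-alpha * (hi - r)) for r in rss]
        total = sum(w)
        x = sum(wi * rx for wi, (rx, ry) in zip(w, receivers)) / total
        y = sum(wi * ry for wi, (rx, ry) in zip(w, receivers)) / total
        return x, y

    # The receiver nearest the transmitter hears the strongest signal,
    # so the estimate lands near (0, 0).
    print(weighted_centroid([(0, 0), (10, 0), (0, 10)], [-40, -70, -75]))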
We present two results for path traversal in trees, where the traversal is performed in an asymptotically optimal number of I/Os and the tree structure is represented succinctly. Our first result is for bottom-up traversal that starts with a node in the tree T and traverses a path to the root. For blocks of size B, a tree on N nodes, and a path of length K, we design data structures that permit traversal of the bottom-up path in O(K/B) I/Os using only bits, for an arbitrarily selected constant ε, where 0 < ε < 1. Our second result is for top-down traversal in binary trees. We store T using (3 + q)N + o(N) bits, where q is the number of bits required to store a key, while top-down traversal can still be performed in an asymptotically optimal number of I/Os.
We consider n mobile sensors located on a line containing a barrier represented by a finite line segment. Sensors form a wireless sensor network and are able to move within the line. An intruder traversing the barrier can be detected only when it is within the sensing range of at least one sensor. The sensor network establishes barrier coverage of the segment if no intruder can penetrate the barrier from any direction in the plane without being detected. Starting from arbitrary initial positions of sensors on the line, we are interested in finding final positions of sensors that establish barrier coverage and minimize the maximum distance traversed by any sensor. We distinguish several variants of the problem, based on (a) whether or not the sensors have identical ranges, (b) whether or not complete coverage is possible and (c) in the case when complete coverage is impossible, whether or not the maximal coverage is required to be contiguous. For the case of n sensors with identical range, when complete coverage is impossible, we give linear time optimal algorithms that achieve maximal coverage, both for the contiguous and non-contiguous case. When complete coverage is possible, we give an O(n^2) algorithm for an optimal solution, a linear-time algorithm with approximation factor 2, and a (1 + ε) PTAS. When the sensors have unequal ranges we show that a variation of the problem is NP-complete and identify some instances which can be solved with our algorithms for sensors with unequal ranges.
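For the identical-range case, the feasibility of a movement budget d can be checked by a left-to-right greedy sweep, and the optimal budget found by binary search; a standard sketch (not the paper's O(n^2) exact algorithm or its PTAS):

    def can_cover(xs, r, L, d):
        # Can sensors at positions xs, each with sensing radius r and
        # allowed movement at most d, cover the barrier [0, L]?
        frontier = 0.0                      # barrier covered up to here
        for x in sorted(xs):
            if x - d > frontier + r:        # cannot reach back to the gap
                return False
            c = min(x + d, frontier + r)    # place as far right as allowed
            frontier = max(frontier, c + r)
            if frontier >= L:
                return True
        return frontier >= L

    def min_max_move(xs, r, L, iters=60):
        # Binary search on the maximum movement budget.
        lo, hi = 0.0, L + max(abs(min(xs)), abs(max(xs)))
        for _ in range(iters):
            mid = (lo + hi) / 2
            if can_cover(xs, r, L, mid):
                hi = mid
            else:
                lo = mid
        return hi

    print(round(min_max_move([0.9, 1.0], r=0.25, L=1.0), 3))  # 0.65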
Delay (or disruption) tolerant sensor networks may be modeled as Markovian evolving graphs [1]. We present experimental evidence showing that considering multiple (possibly not shortest) paths instead of one fixed (greedy) path can decrease the expected time to deliver a packet on such a network by as much as 65 per cent, depending on the probability that an edge exists in a given time interval. We provide theoretical justification for this result by studying a special case of the Markovian evolving grid graph. We analyze a natural algorithm for routing on such networks and show that it is possible to improve the expected time of delivery by up to a factor of two, depending upon the probability of an edge being up during a time step and the relative positions of the source and destination. Furthermore, we show that this is optimal, i.e., no other algorithm can achieve a better expected running time. As an aside, our results give high probability bounds for Knuth's toilet paper problem [11].
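A toy Monte Carlo version of the grid experiment (our code; the grid size, edge probabilities, and routing rules are simplifications): compare waiting for the single next edge of a fixed staircase path against adaptively taking whichever distance-decreasing edge is currently up. For small p, the adaptive rule's per-step success probability 1 - (1 - p)^2 ≈ 2p roughly halves the delivery time, consistent with the factor-of-two improvement discussed above.

    import random

    def deliver_time(n, p, adaptive, trials=2000, seed=1):
        # Average steps to route from (0,0) to (n-1,n-1) on an n x n grid
        # whose edges are each up with probability p at every time step.
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            x = y = t = 0
            while (x, y) != (n - 1, n - 1):
                t += 1
                east = x < n - 1 and rng.random() < p
                north = y < n - 1 and rng.random() < p
                if adaptive:                 # take any edge that makes progress
                    if east and north:
                        x, y = (x + 1, y) if rng.random() < 0.5 else (x, y + 1)
                    elif east:
                        x += 1
                    elif north:
                        y += 1
                else:                        # fixed staircase: all east, then north
                    if x < n - 1:
                        x += east
                    else:
                        y += north
            total += t
        return total / trials

    for p in (0.3, 0.6, 0.9):
        print(p, deliver_time(8, p, False), deliver_time(8, p, True))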