The techno-economic feasibility of retrofitting existing Canadian houses with a solar-assisted heat pump (SAHP) system is investigated. The SAHP architecture is adopted from previous studies conducted for the Canadian climate. The system utilizes two thermal storage tanks to store excess solar energy for use later in the day. The control strategy is defined to prioritise the use of solar energy for space and domestic hot water heating. Due to economic and technical constraints, a series of eligibility criteria are introduced for a house to qualify for the retrofit. A model was built in ESP-r, and the retrofit was introduced into all eligible houses in the Canadian Hybrid Residential End-Use Energy and GHG Emissions model. Simulations were conducted for an entire year to estimate the annual energy savings and GHG emission reductions. Results show that SAHP system performance is strongly affected by climatic conditions, auxiliary energy sources and the fuel mixture for electricity generation. Energy consumption and GHG emissions of the Canadian housing stock can be reduced by about 20% if all eligible houses receive the SAHP system retrofit. Economic analysis indicates that incentive measures will likely be necessary to promote the SAHP system in the Canadian residential market.
This study was conducted to assess the techno-economic feasibility of converting the Canadian housing stock (CHS) into net/near zero energy buildings by introducing and integrating highly efficient and renewable/alternative energy technologies in new construction and existing houses. Performance assessment of energy retrofits and renewable/alternative energy technologies in existing houses at regional and national scales is necessary to devise feasible strategies and incentive measures. The Canadian Hybrid Residential End-Use Energy and GHG Emissions model (CHREM), which utilizes a bottom-up modeling approach, is used to investigate the techno-economic feasibility of an air-to-water heat pump retrofit in the Canadian housing stock. The proposed energy retrofit includes an air-to-water heat pump, auxiliary boiler, thermal storage tank, hydronic heat delivery and domestic hot water (DHW) heating. Energy savings, GHG emission changes and the economic feasibility of the air-source heat pump retrofit are considered in this study. Results show that there is a potential to reduce the energy consumption of the CHS by 36% and its GHG emissions by 23% if all eligible houses undertake the retrofit. Economic analysis indicates that the feasibility of air-to-water heat pump systems is strongly affected by the current status of primary energy use for electricity generation and space and DHW heating, as well as energy prices and economic conditions. Legislation, economic incentives and education for homeowners are necessary to enhance the penetration of air-to-water heat pump retrofits in the CHS.
An improved understanding of the consumption patterns, end-uses, and temporal variations of electrical loads in houses is warranted because a significant fraction of a society's total electricity consumption occurs within residential buildings. In general, there is a lack of high-temporal-resolution data describing occupant electrical consumption that are available to researchers in this field. To address this, new measurements were performed and combined with data emanating from an earlier study to provide a database of annual measurements for 23 houses at a 1-min resolution that characterizes whole-house, non-HVAC, air conditioner, and furnace fan electrical draws, as well as the draw patterns of some major appliances. All houses were located in Ottawa, Canada. The non-HVAC measurements of this 23-house sample were shown to be in agreement with published estimates for the housing stock. The furnace fan was found to be the most significant end-use. These high-temporal-resolution data of electrical demands in houses can be used by researchers to increase the fidelity of building performance simulation analyses of different micro-generation technologies in residential buildings.
Fuel cells with nominal outputs of approximately 1 kW AC are emerging as a prime mover of a micro-cogeneration system potentially well-suited to compete, on an energy basis, with conventional methods for satisfying occupant electrical and thermal demands in a residential application. As the energy benefits of these systems can be incremental when compared to efficient conventional methods, it is especially important to consider the uncertainties of the models on which simulation results are based. However, researchers have yet to take this aspect into account. This article makes a contribution by demonstrating how these model uncertainties may be propagated to the simulation results of a micro-cogeneration system for comparison to a reference scenario using a case study. This case study compares the energy performance of a fuel-cell-based micro-cogeneration system serving only domestic hot water demands to an efficient reference scenario in which the conventional methods for providing electrical and thermal demands are considered to be a central gas-fired combined-cycle plant and a condensing tankless water heater, respectively. The simulation results demonstrated that if model uncertainties were ignored, it would have been possible to conclude that the considered micro-cogeneration system was more efficient than the reference scenario for average consumption levels of domestic hot water. However, when model uncertainties were considered, the efficiency of the considered micro-cogeneration system could not reliably exceed that of the reference scenario by serving the domestic hot water needs of a single-family home.
The techno-economic impact of retrofitting houses in the Canadian housing stock with PV and BIPV/T systems is evaluated using the Canadian Hybrid End-use Energy and Emission Model. Houses with south-, south-east- and south-west-facing roofs are considered eligible for the retrofit, since solar irradiation is highest on south-facing surfaces in the northern hemisphere. The PV system is used to produce electricity and supply the electrical demand of the house, with the excess electricity sold to the grid in a net-metering arrangement. The BIPV/T system produces electricity as well as thermal energy to supply the electrical and thermal demands for space and domestic hot water heating. The PV system consists of PV panels installed on the available roof surface, while the BIPV/T system adds a heat pump, thermal storage tank, auxiliary heater, domestic hot water heating equipment and a hydronic heat delivery system, and replaces the existing heating system in eligible houses. The study predicts the energy savings, GHG emission reductions and tolerable capital costs for regions across Canada. Results indicate that the PV system retrofit yields 3% energy savings and a 5% GHG emission reduction, while the BIPV/T system yields 18% energy savings and a 17% GHG emission reduction in the Canadian housing stock. While annual electricity use slightly increases, the fossil fuel use of the eligible houses substantially decreases due to the BIPV/T system retrofit.
The design and analysis of community-scale energy systems and incentives is a non-trivial task. The challenge of such undertakings is the well-documented uncertainty of building occupant behaviours. This is especially true in the residential sector, where occupants are given more freedom of activity compared to work environments. Further complicating matters is the dearth of available measured data. Building performance simulation tools are one approach to community energy analysis; however, such tools often lack realistic models for occupant-driven demands, such as appliance and lighting (AL) loads. For community-scale analysis, such AL models must also be able to capture the temporal and inter-dwelling variation to achieve realistic estimates of aggregate electrical demand. This work adapts the existing Centre for Renewable Energy Systems Technology (CREST) residential energy model to simulate Canadian residential AL demands. The focus of the analysis is to determine if the daily, seasonal, and inter-dwelling variation of AL demands estimated by the CREST model is realistic. An in-sample validation is conducted on the model using 22 high-resolution measured AL demand profiles from dwellings located in Ottawa, Canada. The adapted CREST model is shown to broadly capture the AL demand variations observed in the measured data; however, the seasonal variation in daily AL demand behaviour was found to be under-estimated by the model. The average and variance of daily load factors were found to be similar between measured and modelled. The model was found to under-predict the daily coincidence factors of aggregated demands, although the variance of coincidence factors was shown to be similar between measured and modelled. A stochastic baseload input developed for this work was found to improve estimates of the magnitude and variation of both baseload and peak demands.
Random Forests variable importance measures are often used to rank variables by their relevance to a classification problem and subsequently reduce the number of model inputs in high-dimensional data sets, thus increasing computational efficiency. However, as a result of the way that training data and predictor variables are randomly selected for use in constructing each tree and splitting each node, it is also well known that if too few trees are generated, variable importance rankings tend to differ between model runs. In this letter, we characterize the effect of the number of trees (ntree) and class separability on the stability of variable importance rankings and develop a systematic approach to define the number of model runs and/or trees required to achieve stability in variable importance measures. Results demonstrate that either a large ntree for a single model run, or values averaged across multiple model runs with fewer trees, are sufficient for achieving stable mean importance values. While the latter is far more computationally efficient, both methods tend to lead to the same ranking of variables. Moreover, the optimal number of model runs differs depending on the separability of classes. Recommendations are made to users regarding how to determine the number of model runs and/or trees that are required to achieve stable variable importance rankings.
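The two strategies compared above, a single run with a large ntree versus importances averaged over several runs with fewer trees, can be sketched with scikit-learn; the dataset, tree counts, and run counts below are invented for illustration and are not taken from the letter:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for a high-dimensional classification problem.
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=0)

def importance_ranking(n_trees, n_runs):
    """Average impurity-based importances over n_runs fits, then return
    the variable ranking (feature indices sorted by mean importance)."""
    imps = np.zeros(X.shape[1])
    for seed in range(n_runs):
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        rf.fit(X, y)
        imps += rf.feature_importances_
    return np.argsort(imps / n_runs)[::-1]

# Strategy 1: one run with many trees.
rank_large = importance_ranking(n_trees=500, n_runs=1)
# Strategy 2: several runs with fewer trees, importances averaged.
rank_avg = importance_ranking(n_trees=50, n_runs=10)
```

Comparing `rank_large` and `rank_avg` across repeated executions is one way to check whether the rankings have stabilized.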
This paper analyzes how the “particular symbolic fortunes” of Canada’s most widely recognized literary prize, the Scotiabank Giller Prize, undergo what James English calls “capital intraconversion”––how they are “culturally ‘laundered’” through their association with Frontier College, Canada’s longest-running adult literacy organization. While the Giller initially benefitted from fashioning itself as the private, industry-driven alternative to state-sponsored culture in Canada, increasing criticism of its corporate sponsorship has led, in the past decade, to a rebranding effort. This effort, I contend, seeks to benefit from two key terms––multiculturalism and literacy. Associated as the discourse of multiculturalism and the figure of the literate citizen are with the strong publics of the western, liberal-democratic nation-state, they possess a remarkable ability to accentuate the symbolic capital of Canada’s most widely recognized literary prize.
The well-separated pair decomposition (WSPD) of the complete Euclidean graph defined on points in ℝ² (Callahan and Kosaraju [JACM, 42 (1): 67-90, 1995]) is a technique for partitioning the edges of the complete graph based on length into a linear number of sets. Among the many different applications of WSPDs, Callahan and Kosaraju proved that the sparse subgraph that results by selecting an arbitrary edge from each set (called a WSPD-spanner) is a (1 + 8/(s − 4))-spanner, where s > 4 is the separation ratio used for partitioning the edges. Although competitive local-routing strategies exist for various spanners such as Yao-graphs, Θ-graphs, and variants of Delaunay graphs, few local-routing strategies are known for any WSPD-spanner. Our main contribution is a local-routing algorithm with a near-optimal competitive routing ratio of 1 + O(1/s) on a WSPD-spanner. Specifically, we present a 2-local and a 1-local routing algorithm on a WSPD-spanner with competitive routing ratios of 1 + 6/(s − 2) + 4/s and 1 + 6/(s − 2) + 6/s + 4/(s² − 2s) + 8/s², respectively.
New threats to networks are constantly arising. This justifies protecting network assets and mitigating the risk associated with attacks. In a distributed environment, researchers aim, in particular, at eliminating faulty network entities. More specifically, much research has been conducted on locating a single static black hole, which is defined as a network site whose existence is known a priori and that disposes of any incoming data without leaving any trace of this occurrence. However, the prevalence of faulty nodes requires an algorithm able to (a) identify faulty nodes that can be repaired without human intervention and (b) locate black holes, which are taken to be faulty nodes whose repair does require human intervention. In this paper, we consider a specific attack model that involves multiple faulty nodes that can be repaired by mobile software agents, as well as a virus v that can infect a previously repaired faulty node and turn it into a black hole. We refer to the task of repairing multiple faulty nodes and pointing out the location of the black hole as the Faulty Node Repair and Dynamically Spawned Black Hole Search. We first analyze the attack model we put forth. We then explain (a) how to identify whether a node is (1) a normal node, (2) a repairable faulty node, or (3) the black hole that has been infected by virus v during the search/repair process, and (b) how to perform the correct relevant actions. These two steps constitute a complex task, which, we explain, differs significantly from the traditional Black Hole Search. We continue by proposing an algorithm to solve this problem in an asynchronous ring network with only one whiteboard (which resides in a node called the homebase). We prove the correctness of our solution and analyze its complexity through both theoretical analysis and experimental evaluation.
We conclude that, using our proposed algorithm, b + 4 agents can repair all faulty nodes and locate the black hole infected by a virus v within finite time. Our algorithm works even when the number of faulty nodes b is unknown a priori.
Net-zero energy is an influential idea in guiding the building stock towards renewable energy resources. Increasingly, this target is scaled to entire communities, which may include dozens of buildings in each new development phase. Although building energy modelling processes and codes have been well developed to guide decision making, there is a lack of methodologies for community integrated energy masterplanning. The problem is further complicated by the availability of district systems which better harvest and store on-site renewable energy. In response to these challenges, this paper contributes an energy modelling methodology which helps energy masterplanners determine trade-offs between building energy saving measures and district system design. Furthermore, this paper shows that it is possible to mitigate the electrical and thermal peaks of a net-zero energy community using minimal district equipment. The methodology is demonstrated using a cold-climate case study with both significant heating/cooling loads and solar energy resources.
Energy models are commonly used to examine the multitude of pathways to improve building performance. As presently practiced, a deterministic approach is used to evaluate incremental design improvements to achieve performance targets. However, significant insight can be gained by examining the implications of modeling assumptions using a probabilistic approach. Analyzing the effect of small perturbations on the inputs of energy and economic models can improve decision making and modeler confidence in building simulation results. This paper describes a reproducible methodology which aids modelers in identifying energy and economic uncertainties caused by variabilities in solar exposure. Using an optimization framework, uncertainty is quantified across the entire simulation solution space. This approach improves modeling outcomes by factoring in the effect of variability in assumptions and improves confidence in simulation results. The methodology is demonstrated using a net zero energy commercial office building case study.
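The core perturbation idea can be sketched in miniature. The surrogate model and all numeric values below are invented for illustration; the paper's methodology instead propagates perturbations through full building simulation within an optimization framework:

```python
import random
import statistics

def annual_energy_kwh(solar_exposure_factor):
    """Toy surrogate model (assumed, not the paper's): annual purchased
    energy is baseline demand minus a solar contribution scaled by the
    exposure factor (1.0 = nominal exposure)."""
    baseline = 120_000.0    # kWh/year, assumed
    solar_yield = 45_000.0  # kWh/year at nominal exposure, assumed
    return baseline - solar_yield * solar_exposure_factor

# Monte Carlo: perturb solar exposure by +/-5% (1 s.d.) and observe the
# induced spread in the model output.
random.seed(1)
samples = [annual_energy_kwh(random.gauss(1.0, 0.05)) for _ in range(5000)]
mean_e = statistics.mean(samples)
std_e = statistics.stdev(samples)
```

Here the output uncertainty (`std_e`) scales directly with the assumed input variability, which is the kind of input-to-output sensitivity the methodology quantifies across the solution space.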
This paper investigates the relationship of portfolio stock shares to expectations and risk preferences, using linked survey responses and administrative records from account holders. The survey allows individual-level, quantitative estimates of risk tolerance and of the perceived mean and variance of stock returns. Estimated risk tolerance, expected return, and perceived risk have economically and statistically significant explanatory power for the distribution of stock shares. Relative to each other, the magnitudes are in proportion with the predictions of benchmark theories, but they are substantially attenuated. MBA graduates have more stable beliefs, more knowledge about their account holdings, and less attenuation.
Older Americans, even those who are long retired, have a strong willingness to work, especially in jobs with flexible schedules. For many, labor force participation near or after normal retirement age is limited more by a lack of acceptable job opportunities, or low expectations about finding them, than by unwillingness to work longer. This paper establishes these findings using an approach to identification based on strategic survey questions (SSQs) purpose-designed to complement behavioral data. These findings suggest that demand-side factors are important in explaining late-in-life labor market behavior and may be the most appropriate target for policy aimed at promoting working longer.
This article examines the politics of 'seeing' civilians in Afghanistan with a focus on the 2009 Kunduz air strike. Drawing on the literature on professional vision and professional knowledges, I ask how divergences in the 'ways of seeing' between different professional communities can be explained, and how they are resolved in practice. 'Seeing,' I argue, is based on talking. The vocabularies with which we describe the world and understand our relationships shape how we 'see'. As a consequence, Afghans gathered around a truck can appear as an 'immediate threat' or not, depending on the ideological prisms at work. The article suggests that we need to treat professional vision as necessarily contested and examine how professionals are socialized into accepting one way of seeing as valid. Seeing is based on talking, and we need to talk about how we see (violence).
In 2016, with funding from the Ontario Trillium Foundation's Seed Grant program, the Somali Centre for Family Services of Ottawa (SCFS) invited Carleton University's Centre for Studies on Poverty and Social Citizenship (CSPSC) to partner on the completion of a needs assessment focusing on the barriers faced by Somali youth in accessing post-secondary education, and employment training and opportunities. In carrying out this research, the SCFS's main objective was to address social and economic exclusion locally by inviting Somali youth (ages 19-30) from the Ottawa area to engage in the conceptualization and design of resources that could best support their participation in educational and employment opportunities.
Few people have bothered to defend the majoritarian, winner-take-all character of the current Canadian electoral system. This parliamentary system has been in existence in the same form since the founding of the modern state in 1867. In these remarks, I offer a defense of majoritarianism in the Canadian context when the alternative is some form of proportional representation. These remarks were prepared as an opening statement in a debate on electoral reform at a Faculty of Public Affairs 75th Anniversary conference at Carleton University, March 3, 2017.
The debate arose because of the Prime Minister's announced intention, during the election campaign that led to his victory in 2015, to replace the current system with some other. The debate occurred a few months after the release of a lengthy report on electoral reform by a special all-party committee of the House of Commons. A few weeks before the debate, the Prime Minister announced (independently of the debate, of course) that his government would no longer pursue electoral reform, perhaps because it looked like he would not be able to avoid a referendum, a process which is hard to control. In any event, and especially in light of recent attempts to change the system both at the federal level and in some provinces, I think it is important for people to understand that the existing electoral system is a sensible one that likely will continue to serve us well.
Two CFD codes are used to simulate noise data for a tandem cylinder experiment and two scaled NASA SR-2 propeller tests. The first code, STAR-CCM+, is a grid-based commercial CFD code, while the second code, SmartRotor, is an in-house grid-free CFD code which uses a panel method coupled with a discrete vortex method. Good comparison to experiment is achieved, with STAR-CCM+ predicting the vortex shedding of the tandem cylinder case within 3 Hz and 10 dB while also predicting first propeller harmonics within 20 and 11 dB for the first and second propeller simulations, respectively. SmartRotor predicted first propeller harmonics within 6 and 37 dB for the first and second experiments, respectively. A parametric study on the influence of blade count on propeller noise was then performed, using both codes to simulate the noise of 7-, 8-, and 10-bladed propellers and finding quieter operation with increasing blade count.
International military intervention undertaken to protect civilians victimised by civil conflict became increasingly common following the Cold War. Within the increasingly developed liberal global order, the use of force has become a humanitarian tool. The specific politics created by using force in this way, however, have not been systematically studied by either the literature on "humanitarian intervention" or that on the Responsibility to Protect. This project problematizes the United States' and the North Atlantic Treaty Organisation's reliance on a particular kind of technology, air power, by asking what humanitarian politics result from this approach. Rather than simply an instrument of global policy, air power transformed how these two actors understood ethnic conflict through the political affordances it does, and does not, allow. In keeping with the technical and doctrinal context provided by air power, such conflict is reconfigured from a practice of directly protecting bodies to managing the circulation of objects as a means of policing conflict spaces. What results is a depoliticisation of conflict and a disjoint between the humanitarian ethos and the physical effect of such interventions. In this way, human protection becomes a second-order effect of ordering conflict spaces. The precise political effect of this was a depoliticisation of both civilians and their politics, because they were indiscernible to this model. As a result, the impact of these interventions was problematic in humanitarian terms. This was obscured, and the model allowed to persist, because popular, professional, political, and academic discourses all assume the use of military power is purely technical, rather than political in its own right.
This dissertation unpacks this dynamic by providing a historical, multi-case study analysis of how the relationship between humanitarianism and air power emerged, what tactics and strategies result, and how the deployment of these means affects the ongoing politics of global humanitarianism and the Responsibility to Protect.
With the ever-increasing lengths of today's wind turbine rotor blades, there is a need for airfoils which are both aerodynamically and structurally efficient. In this work, a multi-objective genetic algorithm was developed to design flatback wind turbine airfoils. The effect of the aerodynamic evaluator, specifically lift-to-drag ratio, torque, and torque-to-thrust ratio, on the airfoil shape and performance was examined. Notable differences, particularly in the levels of lift and roughness insensitivity, were observed. Upon further analysis of the effect of other design parameters, an airfoil family with a high level of structural and aerodynamic performance was designed. An experimental set-up was developed at the Carleton University Low Speed Wind Tunnel for the 2D testing of airfoils. Two airfoils were tested and showed differences between predictions and reality, particularly in the stall and post-stall regions, thereby highlighting the importance of wind tunnel testing as part of the design process.
This dissertation investigates the motivations, messages, and methods of Canadians who organized in opposition to nuclear weapons between 1959 and 1963. The efforts of Canadian anti-nuclear movements have been undervalued in histories of disarmament activism. Canadian disarmers have been dismissed as quiet in comparison to better-known movements in the United States and in Great Britain. This dissertation demonstrates that there were in fact complex and vigorous expressions of anti-nuclear sentiment in Cold War Canada. Canadian disarmers may have been few in number, and may have been conservative in their protest methods, but they were committed participants in an international struggle to protect humanity from the threat of nuclear war. There were many Canadian movements in opposition to the Bomb, both organized and disorganized, which were shaped by the diverse relationships that disarmers had to the world around them. Disarmers' endeavours were informed by engagements with feminisms, Western ideals of masculinity, parents' desires to protect their children, young people's hopes to inherit a world of peace and prosperity, longstanding ideas about social protest, concerns over domestic politics, and enthusiasm for international cooperation. Focusing on the various ways in which Canadians worked for disarmament in the early 1960s, this study demonstrates how much the often divided and sometimes isolated disarmament organizations shared. This dissertation is the first extended historical analysis of anti-nuclear efforts in Canada in the late 1950s and early 1960s. It is also a necessary revision of the existing historiography on disarmament activism. This dissertation brings together diverse literatures on Canada's Sixties; American, British, and Western European disarmament and peace movements; connected social movements such as the New Left, feminist movements, and environmental movements; and histories of children and childhood.
The thesis offers a reassessment of these movements and their importance to an understanding of Cold War social and political dynamics.
Understanding the mechanisms that are responsible for maintaining genetic variation continues to be the focus of much research in evolutionary ecology. It has been suggested that the abundant genetic variation found in Lobelia inflata is maintained by fluctuating selection coupled with temporal genotype-environment interaction. I begin by asking whether microsatellite genotypes exhibit variation in key life-history traits including timing of germination, bolting, flowering and maturation. I used a common garden experiment to show that phenotypic variation exists, that this variation occurs in life-history traits, and that this variation has a genetic basis. Next, I looked at how the microsatellite genotypes that differed in life-history traits expressed differential fitness across environments in a “space-for-time” experiment: I grew multiple lineages under varying conditions to simulate differing natural conditions. Results offer tentative support for the hypothesis that fluctuating selection is responsible for maintaining variation in this system.
American perspectives on the Middle East often contend that the region's nation-states are composed of clearly demarcated ethnic and religious groups whose identities remain static over time. Cultural features of the region are seen to be as durable as its physical features. These perspectives further maintain that when nation-state boundaries are incongruent with the boundaries of ethno-sectarian groups, civil unrest or violent conflict is inevitable. These assumptions are inaccurate because they employ outmoded colonial and Wilsonian views on social organization and can essentialize and/or depoliticize conflict. However, representations based on the assumptions of clearly bounded and static ethno-sectarian groups carry the advantages of making cultural landscapes legible, and thus amenable to geopolitical management. The goal of this project is to understand how ethno-sectarian territorial assumptions are employed in contemporary American views on the Middle East. To do this, I analyze three important sets of maps and texts which encapsulate contemporary American views on the region. The set of maps consists of easily accessible ethnographic maps of the Middle East. These maps are drafted, published, and made available by U.S.-based cartographers, journalists, government agencies, media outlets, and universities. The first set of texts focuses on the U.S. military's Iraq Troop Surge and is made up of American media coverage along with government, military, and think tank documents. The second set of texts focuses on the Arab Spring and consists of American media coverage, think tank reports, and academic commentary. My findings show that in most of these materials, it is assumed that the ethno-sectarian characteristics of the region can be depicted accurately, objectively, and completely in cartographic and textual representations.
I conclude by asserting that problematic ethno-sectarian depictions are reinforced by the writing of prominent American foreign policy intellectuals. These depictions are important because they play roles in framing American geopolitical strategy and action in the region.
Many bird radar studies provide estimates of the number of birds flying past a given area, but very few of these actually estimate detectability. One of the challenges in radar ornithology is estimating the probability of detection of flying targets with altitude and distance. I estimated the detection patterns associated with three marine radars by using a combination of field trials and simulation modelling, and estimated the probabilities of correctly and incorrectly detecting birds in relation to altitude. The results indicate considerable variation in power among radar units. The nominal beam width was 4 degrees, and the effective beam width was 7 degrees. The results from the simulation indicate that detectability varies with altitude, with few birds detected in the lower altitude bands. Many simulated birds were classified as two different birds when crossing the beam twice, and there were many false detections, especially in the lowest altitude bands near the radar.
In recent years, machine learning techniques have been rapidly developed and widely applied in many industrial and academic fields. Moreover, as an important part of machine learning, ensemble techniques, which combine multiple individual models into a hybrid model, have drawn significant attention in both academic research and practical applications. Usually, ensemble methods achieve better performance than each individual model. In this thesis, a novel ensemble method is proposed to improve performance for binary classification. The proposed method non-linearly combines the base models by adaptively selecting the most suitable one for each data instance. The new approach has been validated on two datasets, and the experimental results show up to an 18.5% improvement in F1 score compared to the best individual model. In addition, the proposed method outperforms two other commonly used ensemble methods (averaging and stacking) in improving F1 score.
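One simple way to realize per-instance base-model selection is sketched below with invented data and off-the-shelf scikit-learn models; this is an illustrative stand-in, not the thesis's actual algorithm:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic binary-classification data (illustrative only).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three heterogeneous base models.
bases = [LogisticRegression(max_iter=1000),
         DecisionTreeClassifier(max_depth=4, random_state=0),
         KNeighborsClassifier()]
for m in bases:
    m.fit(X_tr, y_tr)

# Selector target: index of a base model that classifies each training
# instance correctly (argmax over booleans picks the first correct model,
# and falls back to model 0 when none is correct).
correct = np.stack([m.predict(X_tr) == y_tr for m in bases])
selector_y = np.argmax(correct, axis=0)
selector = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, selector_y)

# At prediction time, each test instance uses its selected base model.
choice = selector.predict(X_te)
preds = np.stack([m.predict(X_te) for m in bases])
y_hat = preds[choice, np.arange(len(X_te))]
score = f1_score(y_te, y_hat)
```

The non-linear combination here comes from the selector itself being a learned model, so different regions of the input space can defer to different base classifiers.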
This study evaluated SpokenText Reader, a smartphone application to aid the print disabled who study from audio recordings. The evaluation was based on a literature review and a usability test conducted with print-disabled university students. These students indicated that the application could better accommodate them while studying. Additionally, a model emerged for a hybrid design that blends features offered by SpokenText Reader with those offered by current smartphone e-text readers designed for the print disabled. The new model proposes a design that has e-text under the hood but offers two interfaces: one with a user experience biased toward interacting with the e-text, and a second with a user experience biased toward audio generated from the e-text in real time.