Context:
The Unified Modeling Language (UML), with its 14 different diagram types, is the de-facto standard modeling language for object-oriented modeling and documentation. Since
the various UML diagrams describe different aspects of one, and only one, software under
development, they are not independent but strongly depend on each other in many ways.
In other words, diagrams must remain consistent. Dependencies between diagrams can become so intricate that it is sometimes even possible to synthesize one diagram on the basis of others. Support for synthesizing one UML diagram from other diagrams can provide the designer with significant help, thus speeding up the design process, decreasing the risk of errors, and guaranteeing consistency among the diagrams.
Objective:
The aim of this article is to provide a comprehensive summary of UML synthesis techniques as they have been described in literature to date in order to obtain an extensive and
detailed overview of the current research in this area.
Method:
We have performed a Systematic Mapping Study by following well-known guidelines. We selected ten primary studies
by means of a search with seven search engines performed on October 2, 2013.
Results:
Various results are worth mentioning. First, it appears that researchers have published few papers concerning UML synthesis techniques since 2004 (with the exception
of two papers published in 2010). Only half of the UML diagram types are involved in the synthesis techniques we discovered. The UML diagram type most frequently
used as the source for synthesizing another diagram is the sequence diagram (66.7%), and the most synthesized diagrams are the state machine diagram (58.3%) and the class diagram (25%).
Conclusion:
The fact that we did not obtain a large number of primary studies over a 14-year period (only ten papers) indicates that synthesizing a UML diagram from other UML diagrams is not a particularly active line of research. Research on UML diagram synthesis is nevertheless relevant since synthesis techniques rely on or enforce diagram consistency, and studying UML diagram consistency is an active line of research. Another result is that research is needed to investigate synthesis techniques for types of UML diagrams other than those involved in our primary studies.
In this work we discuss our efforts to use the ubiquity of smartphone systems and the mobility they provide to stream historical information about the user's current location to the end user. We propose the concept of timescapes to portray the historical significance of where users are standing and to allow a brief travel through time. By combining GPS location with a rich media interpretation of existing historical documents, historical facts become an on-demand resource available to travellers, school children, historians and any interested third party. To our knowledge this is the first introduction of the term timescape to be used in the context of historical information pull.
The practitioner interested in reducing software verification effort may find herself lost in the many alternative definitions of Graphical User Interface (GUI) testing that exist and their relation to the notion of system testing. One result of these many definitions is that one may end up testing the same parts of the Software Under Test (SUT) twice, specifically the application logic code. To clarify these two important testing activities and help avoid duplicate testing effort, this paper studies possible differences between GUI testing and system testing experimentally. Specifically, we selected a SUT equipped with system tests that directly exercise the application code; we then used GUITAR, a well-known GUI testing tool, to GUI-test this SUT. Experimental results show important differences between system testing and GUI testing in terms of structural coverage and test cost.
Context: The Unified Modeling Language (UML), with its 14
different diagram types, is the de-facto standard tool for object-oriented
modeling and documentation. Since the various UML
diagrams describe different aspects of one, and only one, software
under development, they are not independent but strongly depend
on each other in many ways. In other words, the UML diagrams
describing a software system must be consistent. Inconsistencies
between these diagrams may be a considerable source of faults in
software systems. It is therefore paramount that these
inconsistencies be detected, analyzed and, hopefully, fixed.
Objective:
The aim of this article is to deliver a comprehensive
summary of UML consistency rules as they are described in the
literature to date to obtain an extensive and detailed overview of
the current research in this area.
Method:
We performed a Systematic Mapping Study by
following well-known guidelines. We selected 94 primary studies
from a search with seven search engines performed in December
2012.
Results:
Several results are worth mentioning. First, it appears
that researchers tend to discuss very similar consistency rules
over and over again. Most rules are horizontal (98.07%) and
syntactic (88.03%). The most used diagrams are the class diagram
(71.28%), the state machine diagram (42.55%) and the sequence
diagram (47.87%).
Conclusion:
The fact that many rules are duplicated in primary
studies confirms the need for a well-accepted list of consistency
rules. This paper is a first step in this direction. Results indicate
that much more work is needed to develop consistency rules for
all 14 UML diagrams, in all dimensions of consistency (e.g.,
semantic and syntactic on the one hand, horizontal, vertical and
evolution on the other hand).
Reverse-engineering object interactions from source
code can be done through static, dynamic, or hybrid (static plus
dynamic) analyses. In the latter two, monitoring a program and
collecting runtime information translates into some overhead
during program execution. Depending on the type of application,
the imposed overhead can reduce the precision and accuracy of
the reverse-engineered object interactions (the larger the overhead
the less precise or accurate the reverse-engineered interactions),
to such an extent that the reverse-engineered interactions
may not be correct, especially when reverse-engineering a multithreaded
software system. One therefore seeks an instrumentation
strategy that is as unintrusive as possible. In our past work, we
showed that a hybrid approach is one step towards such a solution,
compared to a purely dynamic approach, and that there is
room for improvement. In this paper, we uncover, in a systematic
way, other aspects of the dynamic analysis that can be improved
to further reduce runtime overhead, and study alternative
solutions. Our experiments show effective overhead reduction
thanks to a modified procedure to collect runtime information.
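The paper's specific modification to the collection procedure is not detailed in this abstract; a common, generic way to reduce instrumentation overhead is to buffer events in memory at each probe call and write them out in batches from a background thread rather than logging synchronously. The following Java sketch illustrates that general idea only; the class and method names are hypothetical and not taken from the paper.

```java
import java.util.*;
import java.util.concurrent.*;

// Generic sketch of low-overhead runtime event collection: each probe call only
// appends to an in-memory queue; a background daemon thread drains it in batches.
public class TraceCollector {
    private static final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

    static {
        Thread writer = new Thread(() -> {
            List<String> batch = new ArrayList<>();
            while (true) {
                batch.clear();
                buffer.drainTo(batch, 1024);            // drain up to 1024 events at once
                if (batch.isEmpty()) {
                    try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                } else {
                    batch.forEach(System.out::println); // stand-in for writing a trace file
                }
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Called by instrumented code; kept as cheap as possible on the program's threads.
    public static void probe(String event) {
        buffer.offer(Thread.currentThread().getName() + " " + event);
    }
}
```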
UML diagrams describe different views of one piece of software. These diagrams strongly depend on each other and must therefore be consistent with one another, since inconsistencies between diagrams may be a source of faults during software development activities that rely on these diagrams. It is therefore paramount that consistency rules be defined and that inconsistencies be detected, analyzed and fixed. The relevant literature shows that authors typically define their own UML consistency rules, sometimes defining the same rules and sometimes defining rules that are already in the UML standard. The reason might be that no consolidated set of rules that are deemed relevant by authors can be found to date. The aim of our research is to provide a consolidated set of UML consistency rules and obtain a detailed overview of the current research in this area. We therefore followed a systematic procedure in order to collect and analyze UML consistency rules. We then consolidated a set of 116 UML consistency rules (avoiding redundant definitions or definitions already in the UML standard) that can be used as an important reference for UML-based software development activities, for teaching UML-based software development, and for further research.
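One frequently cited consistency rule of this kind requires that every message in a sequence diagram correspond to an operation declared by the receiving object's class. The Java sketch below shows, over deliberately simplified and hypothetical model classes rather than a full UML metamodel, how such a rule could be checked mechanically; it is an illustration, not any specific rule set from the consolidated list.

```java
import java.util.*;

// Hypothetical, simplified model elements; real tools would operate on a full UML metamodel.
record ClassModel(String name, Set<String> operations) {}
record Message(String receiverClass, String operationName) {}

public class SequenceClassConsistency {

    // Rule sketch: every message must name an operation owned by the receiver's class.
    static List<String> check(List<Message> sequenceDiagram, Map<String, ClassModel> classDiagram) {
        List<String> violations = new ArrayList<>();
        for (Message m : sequenceDiagram) {
            ClassModel c = classDiagram.get(m.receiverClass());
            if (c == null || !c.operations().contains(m.operationName())) {
                violations.add("Message '" + m.operationName()
                        + "' has no matching operation in class '" + m.receiverClass() + "'");
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        Map<String, ClassModel> classes = Map.of(
                "Account", new ClassModel("Account", Set.of("deposit", "withdraw")));
        List<Message> messages = List.of(
                new Message("Account", "deposit"),
                new Message("Account", "transfer")); // inconsistent: no such operation
        check(messages, classes).forEach(System.out::println);
    }
}
```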
In this paper we propose a method and a tool to generate test suites from extended finite state machines, accounting for multiple (potentially conflicting) objectives. We aim at maximizing coverage and feasibility of a test suite while minimizing similarity between its test cases and minimizing overall cost. Therefore, we define a multi-objective genetic algorithm that searches for optimal test suites based on four objective functions. In doing so, we create an entire test suite at once as opposed to test cases one at a time. Our approach is evaluated on two different case studies, showing interesting initial results.
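The four objectives named above (coverage, feasibility, similarity between test cases, cost) suggest that each candidate test suite is scored as a vector of objective values which the search then ranks, for example by Pareto dominance. The Java sketch below only illustrates that idea under assumed, simplified objective definitions; it does not reproduce the paper's actual suite encoding or objective functions.

```java
import java.util.*;

// Illustrative objective vector for one candidate test suite; the paper's real
// definitions of coverage, feasibility, similarity and cost are not reproduced here.
public class SuiteObjectives {

    record TestCase(Set<String> coveredTransitions, boolean feasible, double cost) {}

    // Returns {coverage, feasibility, similarity, cost}; the first two are to be
    // maximized, the last two minimized by the multi-objective search.
    static double[] evaluate(List<TestCase> suite, int totalTransitions) {
        Set<String> covered = new HashSet<>();
        double feasibleCount = 0, totalCost = 0, similarity = 0;
        for (TestCase t : suite) {
            covered.addAll(t.coveredTransitions());
            if (t.feasible()) feasibleCount++;
            totalCost += t.cost();
        }
        // Pairwise Jaccard similarity between test cases (lower is better).
        for (int i = 0; i < suite.size(); i++)
            for (int j = i + 1; j < suite.size(); j++)
                similarity += jaccard(suite.get(i).coveredTransitions(),
                                      suite.get(j).coveredTransitions());
        return new double[] {
                covered.size() / (double) totalTransitions,
                feasibleCount / suite.size(),
                similarity,
                totalCost };
    }

    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 0 : inter.size() / (double) union.size();
    }

    public static void main(String[] args) {
        List<TestCase> suite = List.of(
                new TestCase(Set.of("t1", "t2"), true, 3.0),
                new TestCase(Set.of("t2", "t3"), true, 2.0));
        System.out.println(Arrays.toString(evaluate(suite, 5)));
    }
}
```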
For functional testing based on the input domain of a functionality, parameters and their values are identified and a test suite is generated using a criterion exercising combinations of those parameters and values. Since software systems are large, resulting in large numbers of parameters and values, a technique based on combinatorics called Combinatorial Testing (CT) is used to automate the process of creating those combinations. CT is typically performed with the help of combinatorial objects called Covering Arrays. The goal of the present work is to determine available algorithms/tools for generating a combinatorial test suite. We tried to be as complete as possible by using a precise protocol for selecting papers describing those algorithms/tools. The 75 algorithms/tools we identified are then categorized on the basis of different comparison criteria, including: the test suite generation technique, the support for selection (combination) criteria, mixed covering array, the strength of coverage, and the support for constraints between parameters. Results can be of interest to researchers or software companies who are looking for a CT algorithm/tool suitable for their needs.
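As a minimal illustration of the combinatorial objects involved (not of any particular generation algorithm or tool surveyed here), the Java sketch below checks whether a given test suite forms a strength-2 covering array, i.e., whether every pair of values for every pair of parameters appears in at least one test.

```java
import java.util.*;

// Sketch: verify strength-2 (pairwise) coverage of a test suite.
// tests[i][p] is the value chosen for parameter p in test i, encoded as 0..v-1.
public class PairwiseCoverageCheck {

    static boolean coversAllPairs(int[][] tests, int[] valuesPerParameter) {
        int params = valuesPerParameter.length;
        for (int p = 0; p < params; p++) {
            for (int q = p + 1; q < params; q++) {
                Set<Long> seen = new HashSet<>();
                for (int[] test : tests)
                    seen.add((long) test[p] * 1_000_000 + test[q]); // encode the value pair
                if (seen.size() < valuesPerParameter[p] * valuesPerParameter[q])
                    return false; // some value pair of parameters (p, q) never occurs together
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Classic example: 3 boolean parameters covered pairwise by 4 tests,
        // instead of the 8 tests exhaustive testing would need.
        int[][] suite = { {0,0,0}, {0,1,1}, {1,0,1}, {1,1,0} };
        System.out.println(coversAllPairs(suite, new int[] {2, 2, 2})); // true
    }
}
```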
Design by Contract (DbC) is a software development methodology that focuses on clearly defining the interfaces between components to produce better quality object-oriented software. The idea behind DbC is that a method defines a contract stating the requirements a client needs to fulfill to use it, the precondition, and the properties it ensures after its execution, the postcondition. Though there exists ample support for DbC for sequential programs, applying DbC to concurrent programs presents several challenges. Using Java as the target programming language, this paper tackles such challenges by augmenting the Java Modelling Language (JML) and modifying the JML compiler to generate Runtime Assertion Checking (RAC) code to support DbC in concurrent programs. We applied our solution in a carefully designed case study on a highly concurrent industrial software system from the telecommunications domain to assess the effectiveness of contracts as test oracles in detecting and diagnosing functional faults in concurrent software. Based on these results, clear and objective requirements are defined for contracts to be effective test oracles for concurrent programs whilst balancing the effort to design them. Main results include that contracts of a realistic level of completeness and complexity can detect around 76% of faults and reduce the diagnosis effort for such faults by at least ten times. We, therefore, show that DbC can not only be applied to concurrent software but can also be a valuable tool to improve the economics of software engineering.
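As a plain-Java illustration of the contract idea only (not the extended JML syntax or the modified RAC compiler described above), a method can check its precondition on entry and its postcondition before returning; a violated check then acts as a test oracle. The class below is a hypothetical, minimal sketch.

```java
// Minimal Design-by-Contract sketch in plain Java; JML would instead express these
// as /*@ requires ... ensures ... @*/ annotations checked by generated RAC code.
public class BoundedBuffer {
    private final int[] items;
    private int size;

    public BoundedBuffer(int capacity) {
        items = new int[capacity];
    }

    // Precondition: the buffer is not full. Postcondition: size grew by exactly one.
    public synchronized void put(int item) {
        if (size >= items.length)                      // precondition check
            throw new AssertionError("precondition violated: buffer full");
        int oldSize = size;

        items[size++] = item;                          // method body

        if (size != oldSize + 1)                       // postcondition check
            throw new AssertionError("postcondition violated: size not incremented");
    }
}
```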
This paper discusses reverse engineering source code to produce UML sequence diagrams, with the aim of aiding program comprehension and other software life cycle activities (e.g.,
verification). As a first step we produce scenario diagrams using the UML sequence diagram notation. We build on previous work, now combining static and dynamic analyses of Java software, our objective being to obtain lightweight instrumentation and therefore disturb the software behaviour as little as possible. We extract the control flow graph from the software source code and obtain an execution trace by instrumenting and running the software. Control flow and trace information is represented as models, and UML scenario diagram generation becomes a model transformation problem. Our validation shows that we indeed reduce the execution overhead inherent to dynamic analysis without losing reverse-engineered information quality, and therefore without reducing the usefulness of the approach (e.g., for program comprehension).
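A typical dynamic-analysis step behind this kind of reverse engineering (shown here as a generic sketch, not the paper's actual model transformation) is to rebuild call nesting from a trace of method entry and exit events; the nesting then maps onto the messages and activations of a scenario diagram. The trace format below is assumed purely for illustration.

```java
import java.util.*;

// Sketch: reconstruct call nesting (caller -> callee) from an entry/exit trace.
// Event format assumed here: "ENTER className.method" or "EXIT className.method".
public class TraceToCalls {

    record Call(String caller, String callee, int depth) {}

    static List<Call> rebuild(List<String> trace) {
        Deque<String> stack = new ArrayDeque<>();
        List<Call> calls = new ArrayList<>();
        for (String event : trace) {
            String[] parts = event.split(" ");
            if (parts[0].equals("ENTER")) {
                String caller = stack.isEmpty() ? "<actor>" : stack.peek();
                calls.add(new Call(caller, parts[1], stack.size()));
                stack.push(parts[1]);
            } else {
                stack.pop();
            }
        }
        return calls;
    }

    public static void main(String[] args) {
        List<String> trace = List.of(
                "ENTER Order.checkout", "ENTER Cart.total", "EXIT Cart.total",
                "ENTER Payment.charge", "EXIT Payment.charge", "EXIT Order.checkout");
        rebuild(trace).forEach(c ->
                System.out.println("  ".repeat(c.depth()) + c.caller() + " -> " + c.callee()));
    }
}
```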
Every year, for over three decades, Carleton University in Ottawa, Ontario has participated with other local educational institutions in providing a week-long instruction program that introduces young students to higher education. Over 3,000 highly motivated participants in grades 8–11 attend from several school boards in both eastern Ontario and western Quebec. The Enriched Mini Course Program has become an important recruitment tool for each institution, and at Carleton University, over 50 enriched mini courses are offered, including one recent addition by the MacOdrum library staff.
In this article, the author recounts how leading an enriched mini course for millennials in the university library's new Discovery Centre is an innovative initiative that demonstrates the significance of the academic library in the local community, and how staff collaboration helps to develop team building and positive vibes with the millennials.
“The first volume of Documents on Canadian External Relations was published in 1967, as Canada celebrated its first century of nationhood. Since then, volumes in this series have dealt with various periods in the history of Canadian foreign policy, from the Laurier era up to the Pearson years in government. The series currently includes 29 regular volumes and has reprinted over 20,000 documents, totalling almost 40,000 pages of text, making it the largest historical documentary project in Canada. The subject of this special volume, the Arctic, has an ever-growing importance for Canada as we approach our federation's 150th anniversary. This volume illuminates how and why Canada asserted its sovereignty over the Far North between 1874 and 1949, and it demonstrates how much Canadians today owe to the nation builders of the past”--Preface, p. [vi].
First edition published on the occasion of an exhibition of R.L. Griffiths' paintings at Wallack Galleries in Ottawa: The Estate Collection: Pleasure and Solace,
works by Robert Lewis Griffiths.
This guide combines the knowledge gathered during my long career coordinating the Carleton
University Library exhibits program and my recent sabbatical research on exhibits and events in
academic libraries. Between 1983, when I was hired as Exhibits Librarian at Carleton University
Library, and 2002, when the Library had little space available for exhibits and I became Head of
Access Services, I was responsible for running the Library’s exhibits program. After the latest
renovation to MacOdrum Library was completed in the Fall of 2013 and included dedicated
space for exhibits, I was once again asked to coordinate and produce exhibits for the Library.
During my 2014/2015 sabbatical I investigated the current state of exhibits and events in
academic libraries through literature and Web searches and site visits to a number of universities.
The end result is this guide, which I hope is both practical and inspirational.
The Iowa Gambling Task (IGT) is widely used to assess the role of emotion in decision making. However, there is only indirect evidence to support that the task measures emotion. There are inconsistencies in performance within healthy populations who display risk-taking traits. Two hundred and fifty participants were assessed for psychopathy, sensation seeking, and impulsiveness. The IGT was compared with modified versions that directly manipulated emotion within the task by indexing reward and punishment cards with images varying in emotional content. Participants continued to learn to avoid risky decks in all versions of the IGT. The manipulation of emotional content within the task did affect performance: fearful images contributed to greater risky deck selections. Across the tasks, psychopathy showed the strongest relationship to risky deck selections, and lower levels of psychopathy were associated with decreased risky deck selections. However, psychopathy did not affect learning in the modified versions. Exploratory analysis on image valence found that negative images (compared to neutral) benefited learning for individuals with higher levels of psychopathy. Discussion will center on the benefits of manipulating emotion directly within the task as a means to assess the validity of the IGT.
The debate surrounding how emotion and cognition are organized in the brain often leads to Damasio's Somatic Marker Hypothesis. This theory endorses a highly interactive process between emotion and cognition, but has been criticized for being too broad to capture the specific links between the two. It also implies that emotion operates from a neural architecture that is dissociable from cognition. Although empirical findings from the Iowa Gambling Task lend support for the theory, this can promote a false dichotomy between emotion and cognition. Issues will be raised regarding the view that the theory and the task are ill-formulated to account for the phases of decision making. Further theoretical work may be required to align the task with Damasio's view of emotion as integrated with cognition.
My study attempted to find out if the old part of our brain (limbic system) had a
significant role in influencing how we detect the valence of blurry words without
conscious awareness of what the words are. Ten participants were shown blurry words that
could not be read and were asked to guess valence, without a time limit. The hypotheses
for this study were that participants would be accurate in detecting the valence of blurred
words and that participants would rate negative words the most accurately. I also
predicted that participants would attempt to read words before rating valence and they
would attempt to read the words only in the beginning. The stimuli were shown to the
participants on printed-paper. There were 10 blurred words per page with accompanying
5-point Likert scales by each blurred word with a reference scale at the top of every page.
My data showed a statistically significant difference between participants'
ability to detect the valence of blurred words and the baseline ability for unblurred words (which is
100% accuracy). The comparison showed that the participants were significantly worse at
detecting the valence of blurred words than unblurred words. There was no significant
statistical difference between people’s ability to detect the valence of blurry neutral
words compared to the valence of blurry nonsensical words. Participants were equally
accurate at both of these word-types. Participant responses also showed that they were
statistically better at detecting the valence of negative blurry words than positive blurry
words. That is, they were better at detecting negative valence than other valences.
Resource Description and Access (RDA) is the new content standard coming in Spring 2013, with national libraries using RDA effective March 30, 2013. Libraries need to address training for staff in all departments on how to interpret, catalogue and use RDA records.
Carleton University Library has an innovative staff development program to expand the skill set of e-book cataloguers to provide a comprehensive service to manage and expand access to e-books. In 2009 Carleton University Library hired its first e-book cataloguer in response to the rapid growth of digital resources in the Library collection; a second position was added in 2011. These positions have successfully evolved to incorporate a wide variety of duties related to e-books in response to a rapidly changing digital environment. Conference poster presented at the CLA annual conference, June 3 to 5, 2015 in Ottawa, Ontario.
Police in schools
In an era where the costs of policing are constantly under scrutiny from governing municipalities, the time has come for police agencies to re-evaluate the services they provide. To do this, they need to answer questions relating to the value that different activities they perform create in the communities they serve. In other words, they need to change the focus of the conversation from “what does this service cost” to “what value does this service provide.”
This document summarizes key findings from a longitudinal (2014-2017), multi-method (quantitative, qualitative, and ethnographic analysis, along with a Social Return on Investment [SROI] analysis) case study undertaken to identify the value of School Resource Officers (SROs) that are employed by Peel Regional Police and work in the service’s Neighborhood Police Unit (NPU). Of note is the application of SROI techniques in this evaluation process. SROI, a methodology that emerged from the not-for-profit sector, helps researchers identify sources of value outside of those considered through traditional valuation techniques, such as cost-benefit analysis.
Evaluation of Peel Police’s SRO program was motivated by a number of factors. First, the costs of this program are both easy to identify and significant (just over $9 million per year). Second, it is very challenging to identify the value that this program provides to students and the community. The challenges of quantifying the value offered by assigning full-time SROs to Canadian high schools is evidenced by the fact that such programs are rare, as police services around the world have responded to pressures to economize by removing officers from schools and either eliminating the role of the SRO or having one officer attend to many schools.
Net zero energy (NZE) communities are becoming pivotal to the energy vision of developers. Communities that produce as much energy as they consume provide many benefits, such as reducing life-cycle costs and better resilience to grid outages. If deployed using smart-grid technology, NZE communities can act as a grid node and aid in balancing electrical demand. However, identifying cost-effective pathways to NZE requires detailed energy and economic models. Information required to build such models is not typically available at the early master-planning stages, where the largest energy and economic saving opportunities exist. Methodologies that expedite and streamline energy and economic modeling could facilitate early decision making. This paper describes a reproducible methodology that aids modelers in identifying energy and economic savings opportunities in the early community design stages. As additional information becomes available, models can quickly be recreated and evaluated. The proposed methodology is applied to the first-phase design of a NZE community under development in Southwestern Ontario.
This paper presents a multi-objective redesign case study of an archetype solar house based on a near net zero energy (NZE) demonstration home located in Eastman, Quebec. Using optimization techniques, pathways are identified from the original design to both cost and energy optimal designs. An evolutionary algorithm is used to optimize trade-offs between passive solar gains and active solar generation, using two objective functions: net-energy consumption and life-cycle cost over a thirty-year life cycle. In addition, this paper explores different pathways to net zero energy based on economic incentives, such as feed-in tariffs for on-site electricity production from renewables. The main objective is to identify pathways to net zero energy that will facilitate the future systematic design of similar homes based on the concept of the archetype that combines passive solar design; energy-efficiency measures, including a geothermal heat pump; and a building-integrated photovoltaic system. Results from this paper can be utilized as follows: (1) systematic design improvements and applications of lessons learned from a proven NZE home design concept, (2) use of a methodology to understand pathways to cost and energy optimal building designs, and (3) to aid in policy development on economic incentives that can positively influence optimized home design.
Net-zero energy is an influential idea in guiding the building stock towards renewable
energy resources. Increasingly, this target is scaled to entire communities
which may include dozens of buildings in each new development phase.
Although building energy modelling processes and codes have been well developed
to guide decision making, there is a lack of methodologies for community
integrated energy masterplanning. The problem is further complicated by the
availability of district systems which better harvest and store on-site renewable
energy. In response to these challenges, this paper contributes an energy modelling
methodology which helps energy masterplanners determine trade-offs between
building energy saving measures and district system design. Furthermore,
this paper shows that it is possible to mitigate electrical and thermal peaks of a
net-zero energy community using minimal district equipment. The methodology
is demonstrated using a cold-climate case-study with both significant heating/
cooling loads and solar energy resources.
Energy models are commonly used to examine the multitude of pathways to improve building performance. As presently practiced, a deterministic approach is used to evaluate incremental design improvements to achieve performance targets. However, significant insight can be gained by examining the implications of modeling assumptions using a probabilistic approach. Analyzing the effect of small perturbations on the inputs of energy and economic models can improve decision making and modeler confidence in building simulation results. This paper describes a reproducible methodology which aids modelers in identifying energy and economic uncertainties caused by variabilities in solar exposure. Using an optimization framework, uncertainty is quantified across the entire simulation solution space. This approach improves modeling outcomes by factoring in the effect of variability in assumptions and improves confidence in simulation results. The methodology is demonstrated using a net zero energy commercial office building case study.
An earlier version of this paper was prepared for a Symposium, Cultural Policies in Regional Integration, sponsored by the Center for the Study of Western Hemispheric Trade and the Mexican Center of the Institute of Latin American Studies, The University of Texas at Austin, February 2, 1998.
The article scrutinizes the complex entanglement of cyberurban spaces in the making and development of contemporary social movement by analyzing its imaginaries, practices, and trajectories.
This issue of New Geographies, “Geographies of Information” (edited by Taraneh Meskhani & Ali Fard), presents a new set of frameworks that refrain from generalizations to highlight the many facets of the socio-technical constructions, processes, and practices that form the spaces of information and communication. In addition to Lim, contributors to the issue include prominent thinkers and scholars in various related disciplines such as Rob Kitchin (critical data), Stephen Graham (urbanism) and Malcolm McCullough (architecture/urban computing).
The analysis of official development assistance has always struggled with the contradiction between its more altruistic motivations for global development and its easy adaptation as an instrument for the donor’s pursuit of self-interested foreign policy objectives. In the international system, foreign aid may thus become a forum for both cooperative and competitive interactions between donors. This chapter explores the interdependence of aid by reviewing the literature on donor interdependence, with a particular focus on donor competition for influence in recipient states. We then present a simple theoretical framework to examine donor competition, and provide some preliminary empirical testing of the resulting hypotheses. We conclude that while the evidence about competition is mixed, the behaviour of some donors is consistent with their pursuit of influence in certain recipient states.
Since the early 2000s the Internet has become particularly crucial for the global jihadist movement. Nowhere has the Internet been more important in the movement’s development than in the West. While dynamics differ from case to case, it is fair to state that almost all recent cases of radicalization in the West involve at least some digital footprint. Jihadists, whether structured groups or unaffiliated sympathizers, have long understood the importance of the Internet in general and social media, in particular. Zachary Chesser, one of the individuals studied in this report, fittingly describes social media as “simply the most dynamic and convenient form of media there is.” As the trend is likely to increase, understanding how individuals make the leap to actual militancy is critically important.
This study is based on the analysis of the online activities of seven individuals. They share several key traits. All seven were born or raised in the United States. All seven were active in the online and offline jihadist scene around the same time (mid- to late 2000s and early 2010s). All seven were either convicted of terrorism-related offenses or, in the case of two of the seven, killed in terrorism-related incidents.
The intended usefulness of this study is not in making the case for monitoring online social media for intelligence purposes, an effort for which authorities throughout the West need little encouragement. Rather, the report is meant to provide potentially useful pointers in the field of counter-radicalization. Over the past ten years many Western countries have devised more or less extensive strategies aimed at preventing individuals from embracing radical ideas or at de-radicalizing (or favoring the disengagement of) committed militants. (Canada is also in the process of establishing its own counter-radicalization strategy.)
REPORT HIGHLIGHTS
- Opportunity for on-site food production comes from public and political support for ‘local food’, combined with a shortage of land for new producers
- GIS study of Ontario healthcare properties shows 217 with more than one acre of arable land available, and 54 with more than five acres
- Case studies demonstrate the benefits of a ‘farmer’— independent, staff member or community group—and/or labour force dedicated to the project
- Initial and on-going viability correlates to the extent of institutional support, particularly staff time for project coordination
- Institutional motivations for on-site food production initiatives vary, and include mental and physical therapeutic benefits
See more at the Project SOIL website.
This study uses an exploratory qualitative design to examine the lived experience of one group of service users on community treatment orders (CTOs). The study was designed and completed by four graduate students at Carleton University School of Social Work.
Despite the unique features of CTO legislation in Ontario, many findings from this study are remarkably similar to findings of research conducted in other jurisdictions. What is unique in our findings is the lack of focus on the actual conditions and provision of the CTO. The issue for our participants was less about the CTO itself, and more about the labels, control and discrimination associated with severe mental illness.
The study was initiated as Canada’s contribution to the Wilson Center’s Global Women’s Leadership Initiative Women in Public Service Project, launched by Hillary Clinton when she was Secretary of State. It was conducted in partnership with the Centre for Women in Politics and Public Leadership, the Gender Equality Measurement Initiative, and the Centre for Research on Women and Work at Carleton University, and the Public Service Commission of Canada.
Abstract:
This study was undertaken to determine whether women in leadership positions in the Canadian federal Public Service (PS) have had an impact on policy, programs, operations, administration or workplace conditions, what that impact might be, and how to measure it. Drawing from qualitative interviews with current and retired Executives and Deputy Ministers in the Canadian federal public service, it provides recommendations and considerations around gender and impact moving forward.
This report provides key findings and recommendations from a study of work-life conflict and employee well-being that involved 4,500 police officers working for 25 police forces across Canada. Findings from this study should help police forces across Canada implement policies and practices that will help them thrive in a "seller's market for labour."
The study examined work-life experiences of 25,000 Canadians who were employed full time in 71 public, private and not-for-profit organizations across all provinces and territories between June 2011 and June 2012. Two-thirds of survey respondents had incomes of $60,000 or more a year and two-thirds were parents.
Previous studies were conducted in 1991 and 2001.
“It is fascinating to see what has changed over time and what hasn’t,” said Duxbury.
Among the findings:
About two-thirds of Canadian employees still work a fixed nine-to-five schedule.
Overall, the typical employee spends 50.2 hours in work-related activities a week. Just over half of employees take work home to complete outside regular hours.
The use of flexible work arrangements such as a compressed work week (15 per cent) and flexible schedules (14 per cent) is much less common.
Fifty-seven per cent of those surveyed reported high levels of stress.
One-third of working hours are spent using email.
Employees in the survey were twice as likely to let work interfere with family as the reverse.
Work-life conflict was associated with higher absenteeism and lower productivity.
Succession planning, knowledge transfer and change management are likely to be a problem for many Canadian organizations.
There has been little career mobility within Canadian firms over the past several years.