Given a connected geometric graph G, we consider the problem of constructing a t-spanner of G having the minimum number of edges. We prove that for every t with 1 < t < (1/4) log n, there exists a connected geometric graph G with n vertices such that every t-spanner of G contains Ω(n^(1+1/t)) edges. This bound almost matches the known upper bound, which states that every connected weighted graph with n vertices contains a t-spanner with O(t·n^(1+2/(t+1))) edges. We also prove that the problem of deciding whether a given geometric graph contains a t-spanner with at most K edges is NP-hard. Previously, this NP-hardness result was only known for non-geometric graphs.
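To see why the lower bound Ω(n^(1+1/t)) "almost matches" the upper bound O(t·n^(1+2/(t+1))), compare the lower-order terms of the exponents: 1/t < 2/(t+1) < 2/t for every t > 1, so the two exponents differ by less than a factor of two in that term. A quick numeric check (an illustration added here, not part of the paper):

```python
# the exponents 1 + 1/t (lower bound) and 1 + 2/(t+1) (upper bound)
# differ only in the 1/t-vs-2/(t+1) term, and that gap is below a
# factor of 2 for every t > 1
for t in (2, 4, 8, 16):
    assert 1 / t < 2 / (t + 1) < 2 / t
    print(t, 1 + 1 / t, 1 + 2 / (t + 1))
```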
The verification of non-functional requirements of software models (such as performance, reliability, scalability, security, etc.) requires the transformation of UML models into different analysis models, such as Petri nets, queueing networks, formal logic, etc., which represent the system at a higher level of abstraction. The paper proposes a new "abstraction-raising" transformation approach for generating analysis models from UML models. In general, such transformations must bridge a large semantic gap between the source and the target model. The proposed approach is illustrated by a transformation from UML to KLAPER (Kernel LAnguage for PErformance and Reliability analysis of component-based systems).
We consider the rendezvous problem for identical mobile agents (i.e., agents running the same deterministic algorithm) with tokens in a synchronous torus with a sense of direction, and show that there is a striking computational difference between one and more tokens. More specifically, we show that: 1) two agents with a constant number of unmovable tokens, or with one movable token each, cannot rendezvous if they have o(log n) memory, while they can perform rendezvous with detection as long as they have one unmovable token and O(log n) memory; in contrast, 2) when two agents have two movable tokens each, rendezvous (respectively, rendezvous with detection) is possible with constant memory in an arbitrary n × m (respectively, n × n) torus; and finally, 3) two agents with three movable tokens each and constant memory can perform rendezvous with detection in an n × m torus. This is the first publication in the literature that studies tradeoffs between the number of tokens, memory, and knowledge the agents need in order to meet in such a network.
We present a tradeoff between the expected time for two identical agents to rendezvous on a synchronous, anonymous, oriented ring and the memory requirements of the agents. In particular, we show that there exists a 2t-state agent which can achieve rendezvous on an n-node ring in expected time O(n^2/2^t + 2^t), and that any t/2-state agent requires expected time Ω(n^2/2^t). As a corollary we observe that Θ(log log n) bits of memory are necessary and sufficient to achieve rendezvous in linear time.
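The corollary follows from balancing the two terms of the upper bound: n^2/2^t + 2^t is minimized (up to constants) when 2^t ≈ n, i.e. t ≈ log n, and an agent with ~2t states needs only ⌈log(2t)⌉ ≈ log log n bits. A small numeric sketch of this balancing (constants suppressed; an illustration added here, not taken from the paper):

```python
import math

def expected_time(n, t):
    # the tradeoff from the abstract with constants suppressed:
    # T(n, t) = n^2 / 2^t + 2^t
    return n**2 / 2**t + 2**t

n = 1 << 20  # ring size
# the bound is minimized when the two terms balance, i.e. 2^t ~ n
best_t = min(range(1, 40), key=lambda t: expected_time(n, t))
assert best_t == round(math.log2(n))
# a ~2t-state agent needs about log2(2t) = Theta(log log n) bits
bits = math.ceil(math.log2(2 * best_t))
print(best_t, bits)  # 20 6
```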
We prove that for all 0 ≤ t ≤ k and d ≥ 2k, every graph G with treewidth at most k has a 'large' induced subgraph H, where H has treewidth at most t and every vertex in H has degree at most d in G. The order of H depends on t, k, d, and the order of G. With t = k, we obtain large sets of bounded degree vertices. With t = 0, we obtain large independent sets of bounded degree. In both these cases, our bounds on the order of H are tight. For bounded degree independent sets in trees, we characterise the extremal graphs. Finally, we prove that an interval graph with maximum clique size k has a maximum independent set in which every vertex has degree at most 2k.
We consider a problem which can greatly enhance the areas of cursive script recognition and the recognition of printed character sequences. This problem involves recognizing words/strings by processing their noisy subsequences. Let X* be any unknown word from a finite dictionary H. Let U be any arbitrary subsequence of X*. We study the problem of estimating X* by processing Y, a noisy version of U. Y contains substitution, insertion, deletion and generalized transposition errors, the latter occurring when transposed characters are themselves subsequently substituted. We solve the noisy subsequence recognition problem by defining and using the constrained edit distance between X ∈ H and Y, subject to any arbitrary edit constraint involving the number and type of edit operations to be performed. An algorithm to compute this constrained edit distance is presented. Using this algorithm, we present a syntactic Pattern Recognition (PR) scheme which corrects noisy text containing all these types of errors. Experimental results involving strings of lengths between 40 and 80, with an average of 30.24 deleted characters and an overall average noise of 68.69%, demonstrate the superiority of our system over existing methods.
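For readers unfamiliar with the underlying machinery, the *unconstrained* edit distance with substitutions, insertions, deletions and adjacent transpositions is computed by a standard dynamic program; the constrained variant in the abstract additionally tracks the number and type of operations along each DP path. A minimal sketch of the unconstrained DP (a simplified stand-in, not the paper's constrained algorithm):

```python
def edit_distance(x, y):
    # classic DP over prefixes: d[i][j] = distance between x[:i] and y[:j]
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
            # adjacent transposition (restricted Damerau variant)
            if i > 1 and j > 1 and x[i - 1] == y[j - 2] and x[i - 2] == y[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

print(edit_distance("recognition", "recogniiton"))  # 1 (one transposition)
```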
We motivate, formalize and investigate the notions of data quality assessment and data quality query answering as context-dependent activities. Contexts for the assessment and usage of a data source at hand are modeled as collections of external databases, which can be materialized or virtual, together with mappings within the collections and with the data source at hand. In this way, the context becomes "the complement" of the data source with respect to a data integration system. The proposed model allows for natural extensions, like considering data quality predicates, and even more expressive ontologies for data quality assessment.
A collection of n anonymous mobile robots is deployed on a unit-perimeter ring or a unit-length line segment. Every robot starts moving at constant speed, and bounces each time it meets any other robot or segment endpoint, changing its walk direction. We study the problem of position discovery, in which the task of each robot is to detect the presence and the initial positions of all other robots. The robots cannot communicate or perceive information about the environment in any way other than by bouncing. Each robot has a clock allowing it to observe the times of its bounces. The robots have no control over their walks, which are determined by their initial positions and starting directions. Each robot executes the same position detection algorithm, which receives input data in real time about the times of the bounces, and terminates when the robot is assured about the existence and the positions of all the robots. Some initial configurations of robots are shown to be infeasible: no position detection algorithm exists for them. We give complete characterizations of all infeasible initial configurations for both the ring and the segment, and we design optimal position detection algorithms for all feasible configurations. For the case of the ring, we show that all robot configurations in which not all the robots have the same initial direction are feasible. We give a position detection algorithm working for all feasible configurations. The cost of our algorithm depends on the number of robots starting their movement in each direction. If the less frequently used initial direction is given to k ≤ n/2 robots, the time until completion of the algorithm by the last robot is 1/2 ⌈n/k⌉. We prove that this time is optimal. By contrast to the case of the ring, for the unit segment we show that the family of infeasible configurations is exactly the set of so-called symmetric configurations.
We give a position detection algorithm which works for all feasible configurations on the segment in time 2, and this algorithm is also proven to be optimal.
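A standard observation in the analysis of bouncing systems (not stated explicitly in the abstract, but useful for intuition) is that identical same-speed robots that bounce occupy, at every instant, exactly the same set of positions as "ghost" robots that pass through one another; only the identities differ. An exact event-driven simulation checking this on the ring (an illustration added here, not the paper's algorithm):

```python
from fractions import Fraction as F

def bounce_positions(pos, dirs, t_end):
    """Exact simulation of same-speed robots bouncing on a unit-perimeter
    ring. pos: increasing Fractions in [0,1); dirs: +1/-1 per robot.
    Returns the robot positions at time t_end."""
    pos, dirs, t = list(pos), list(dirs), F(0)
    n = len(pos)
    while True:
        # robots never overtake each other, so only cyclically adjacent
        # pairs collide, and only when the rear robot moves forward while
        # the front robot moves backward
        events = []
        for i in range(n):
            j = (i + 1) % n
            if dirs[i] == 1 and dirs[j] == -1:
                gap = (pos[j] - pos[i]) % 1
                if gap > 0:
                    events.append((gap / 2, i, j))
        dt = min(e[0] for e in events) if events else None
        if dt is None or t + dt > t_end:
            return [(p + d * (t_end - t)) % 1 for p, d in zip(pos, dirs)]
        pos = [(p + d * dt) % 1 for p, d in zip(pos, dirs)]
        for g, i, j in events:
            if g == dt:
                dirs[i], dirs[j] = -1, 1  # both colliding robots reverse
        t += dt

pos = [F(0), F(1, 3), F(1, 2), F(11, 12)]
dirs = [1, -1, 1, -1]
t_end = F(1, 3)
bounced = sorted(bounce_positions(pos, dirs, t_end))
# ghost robots simply pass through one another
ghosts = sorted((p + d * t_end) % 1 for p, d in zip(pos, dirs))
assert bounced == ghosts
print(ghosts)
```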
Matching Dependencies (MDs) are a recent proposal for declarative entity resolution. They are rules that specify, given the similarities satisfied by values in a database, what values should be considered duplicates and have to be matched. On the basis of a chase-like procedure for MD enforcement, we can obtain clean (duplicate-free) instances; in fact, possibly several of them. The clean answers to queries (which we call the resolved answers) are invariant under the resulting class of instances. In this paper, we investigate a query rewriting approach to obtaining the resolved answers (for certain classes of queries and MDs). The rewritten queries are specified in stratified Datalog with negation and aggregation. In addition to the rewriting algorithm, we discuss the semantics of the rewritten queries, and how they could be implemented by means of a DBMS.
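To make the chase-like enforcement concrete, a toy MD such as "if two tuples have similar Name values, their Address values must be matched" can be enforced by repeatedly matching attribute values until a fixpoint is reached. A minimal sketch (the similarity predicate and the match function are simplifying assumptions of this example, not the paper's definitions):

```python
def similar(a, b):
    # toy similarity: equality up to case and surrounding whitespace
    # (real MDs use domain-specific similarity relations)
    return a.strip().lower() == b.strip().lower()

def enforce_md(tuples):
    """Chase-like enforcement of one MD: whenever two tuples have similar
    Name values, force their Address values to match. Here 'matching'
    picks the longer address as the resolved value (an assumption)."""
    tuples = [dict(t) for t in tuples]
    changed = True
    while changed:
        changed = False
        for i in range(len(tuples)):
            for j in range(i + 1, len(tuples)):
                t, u = tuples[i], tuples[j]
                if similar(t["Name"], u["Name"]) and t["Address"] != u["Address"]:
                    resolved = max(t["Address"], u["Address"], key=len)
                    t["Address"] = u["Address"] = resolved
                    changed = True
    return tuples

db = [{"Name": "J. Doe ", "Address": "Main St"},
      {"Name": "j. doe", "Address": "25 Main St, Ottawa"}]
clean = enforce_md(db)
assert clean[0]["Address"] == clean[1]["Address"] == "25 Main St, Ottawa"
```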
Let P be a simple polygon with m vertices and let U be a set of n points in P. We consider the points of U to be users. We consider a game with two players P1 and P2. In this game, P1 places a point facility inside P, after which P2 places another point facility inside P. We say that a user is served by its nearest facility, where distances are measured by the geodesic distance in P. The objective of each player is to maximize the number of users they serve. We show that for any given placement of a facility by P1, an optimal placement for P2 can be computed in O(m + n(log n + log m)) time. We also provide a polynomial-time algorithm for computing an optimal placement for P1.
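The scoring rule of the game is simple to state in code: each user goes to its nearest facility. A minimal sketch (Euclidean distance stands in for the geodesic distance, which coincides with it when the polygon is convex; the tie-breaking rule in favour of the first player is an assumption of this example):

```python
from math import dist

def users_served(users, f1, f2):
    # count users served by each of the two facilities; ties go to f1
    # (an assumption for this sketch -- the abstract does not fix ties)
    s1 = sum(1 for u in users if dist(u, f1) <= dist(u, f2))
    return s1, len(users) - s1

users = [(0, 0), (1, 0), (4, 0), (5, 0)]
print(users_served(users, (0.5, 0), (4.5, 0)))  # (2, 2)
```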
We present results related to answering shortest path queries on a planar graph stored in external memory. In particular, we show how to store rooted trees in external memory so that bottom-up paths can be traversed I/O-efficiently, and we present I/O-efficient algorithms for triangulating planar graphs and computing small separators of such graphs. Using these techniques, we can construct a data structure that allows for answering shortest path queries on a planar graph I/O-efficiently.
The time required for a sequence of operations on a data structure is usually measured in terms of the worst possible such sequence. This, however, is often an overestimate of the actual time required. Distribution-sensitive data structures attempt to take advantage of underlying patterns in a sequence of operations in order to reduce time complexity, since access patterns are non-random in many applications. Unfortunately, many of the distribution-sensitive structures in the literature require a great deal of space overhead in the form of pointers. We present a dictionary data structure that makes use of both randomization and existing space-efficient data structures to yield very low space overhead while maintaining distribution sensitivity in the expected sense.
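The simplest example of distribution sensitivity, useful for intuition, is a move-to-front list: frequently accessed keys drift to the front, so skewed access sequences are served faster than the worst case. This is an illustrative stand-in added here, not the structure from the abstract:

```python
class MTFDictionary:
    """A deliberately simple distribution-sensitive dictionary:
    an unsorted list with the move-to-front heuristic."""
    def __init__(self):
        self.items = []  # list of (key, value) pairs

    def insert(self, key, value):
        self.items.insert(0, (key, value))

    def find(self, key):
        # returns (value, number of comparisons made)
        for i, (k, v) in enumerate(self.items):
            if k == key:
                self.items.pop(i)
                self.items.insert(0, (k, v))  # move the hit to the front
                return v, i + 1
        return None, len(self.items)

d = MTFDictionary()
for k in "abcde":
    d.insert(k, k.upper())
_, first = d.find("a")   # 'a' is deepest in the list after the inserts
_, second = d.find("a")  # now at the front: a single comparison
print(first, second)  # 5 1
```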
We present I/O-efficient algorithms to construct planar Steiner spanners for point sets and sets of polygonal obstacles in the plane, and for constructing the “dumbbell” spanner of [6] for point sets in higher dimensions. As important ingredients to our algorithms, we present I/O-efficient algorithms to color the vertices of a graph of bounded degree, answer binary search queries on topology buffer trees, and preprocess a rooted tree for answering prioritized ancestor queries.
We present a succinct representation of a set of n points on an n×n grid using n lg n + o(n lg n) bits to support orthogonal range counting in O(lg n / lg lg n) time, and range reporting in O(k lg n / lg lg n) time, where k is the size of the output. This achieves an improvement on query time by a factor of lg lg n upon the previous result of Mäkinen and Navarro [1], while using essentially the information-theoretic minimum space. Our data structure not only can be used as a key component in solutions to the general orthogonal range search problem to save storage cost, but also has applications in text indexing. In particular, we apply it to improve two previous space-efficient text indexes that support substring search [2] and position-restricted substring search [1]. We also use it to extend previous results on succinct representations of sequences of small integers, and to design succinct data structures supporting certain types of orthogonal range query in the plane.
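For readers new to the problem, the query semantics are easy to state with a naive structure: count (or report) the points inside an axis-aligned rectangle. The sketch below answers the same queries as the succinct structure in the abstract, but with one sorted list per column rather than near-information-theoretic space (an illustration added here, not the paper's data structure):

```python
import bisect

class RangeCounter:
    """Naive orthogonal range counting on an n x n grid: one sorted
    y-list per x-coordinate, queried with binary search."""
    def __init__(self, points, n):
        self.cols = [[] for _ in range(n)]
        for x, y in points:
            self.cols[x].append(y)
        for col in self.cols:
            col.sort()

    def count(self, x1, x2, y1, y2):
        # number of points with x1 <= x <= x2 and y1 <= y <= y2
        return sum(bisect.bisect_right(c, y2) - bisect.bisect_left(c, y1)
                   for c in self.cols[x1:x2 + 1])

rc = RangeCounter([(0, 0), (1, 2), (2, 1), (3, 3)], 4)
print(rc.count(0, 2, 0, 2))  # 3
```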
A Semi-Separated Pair Decomposition (SSPD), with parameter s > 1, of a set S of points is a set {(A_i, B_i)} of pairs of subsets of S such that for each i, there are balls D_{A_i} and D_{B_i} containing A_i and B_i respectively, whose distance is at least s times the minimum of their radii, and for any two points p, q ∈ S there is a unique index i such that p ∈ A_i and q ∈ B_i, or vice versa. In this paper, we use the SSPD to obtain the following results: First, we consider the construction of geometric t-spanners in the context of imprecise points and we prove that any set of n imprecise points, modeled as pairwise disjoint balls, admits a t-spanner with O(n log n) edges that can be computed in O(n log n) time. If all balls have the same radius, the number of edges reduces to O(n). Secondly, for a set of n points in the plane, we design a query data structure for half-plane closest-pair queries, offering a trade-off between preprocessing time and space on one hand and query time on the other, for any ε > 0. Moreover, we improve the preprocessing time of an existing axis-parallel rectangle closest-pair query data structure from quadratic to near-linear. Finally, we revisit some previously studied problems, namely spanners for complete k-partite graphs and low-diameter spanners, and show how to use the SSPD to obtain simple algorithms for these problems.
A black hole is a highly harmful host that disposes of visiting agents upon their arrival. It is known that it is possible for a team of mobile agents to locate a black hole in an asynchronous ring network if each node is equipped with a whiteboard of at least O(log n) dedicated bits of storage. In this paper, we consider the less powerful token model: each agent has available a bounded number of tokens that can be carried, placed on a node or removed from it. All tokens are identical (i.e., indistinguishable) and no other form of communication or coordination is available to the agents. We first prove that a team of two agents is sufficient to locate the black hole in finite time even in this weaker coordination model. Furthermore, we prove that this can be accomplished using only O(n log n) moves in total, which is optimal, the same as with whiteboards. Finally, we show that to achieve this result the agents need to use only O(1) tokens each.
Let (S, d) be a finite metric space, where each element p ∈ S has a non-negative weight w(p). We study spanners for the set S with respect to the weighted distance function d_w, where d_w(p,q) is w(p) + d(p,q) + w(q) if p ≠ q, and 0 otherwise. We present a general method for turning spanners with respect to the d-metric into spanners with respect to the d_w-metric. For any given ε > 0, we can apply our method to obtain (5+ε)-spanners with a linear number of edges for three cases: points in Euclidean space ℝ^d, points in spaces of bounded doubling dimension, and points on the boundary of a convex body in ℝ^d where d is the geodesic distance function. We also describe an alternative method that leads to (2+ε)-spanners for points in ℝ^d and for points on the boundary of a convex body in ℝ^d. The number of edges in these spanners is O(n log n). This bound on the stretch factor is nearly optimal: in any finite metric space and for any ε > 0, it is possible to assign weights to the elements such that any non-complete graph has stretch factor larger than 2−ε.
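The weighted distance d_w and the notion of stretch it induces are easy to compute directly on small examples. The sketch below evaluates d_w and the stretch factor of a candidate spanner by all-pairs shortest paths (an illustration added here, not the paper's construction; on a three-point path, the middle point's weight is paid twice, giving stretch 1.5):

```python
from math import dist
from itertools import combinations

def dw(p, q, w):
    # additively weighted distance from the abstract:
    # d_w(p, q) = w(p) + d(p, q) + w(q) for p != q, and 0 otherwise
    return 0 if p == q else w[p] + dist(p, q) + w[q]

def stretch(points, edges, w):
    # stretch factor of the graph (points, edges) under d_w, via
    # Floyd-Warshall -- fine for tiny examples
    d = {(p, q): (0 if p == q else float("inf"))
         for p in points for q in points}
    for p, q in edges:
        d[p, q] = d[q, p] = dw(p, q, w)
    for k in points:
        for i in points:
            for j in points:
                d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return max(d[p, q] / dw(p, q, w) for p, q in combinations(points, 2))

pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
w = {p: 1.0 for p in pts}
# routing the outer pair through the middle point pays its weight twice
print(stretch(pts, [(pts[0], pts[1]), (pts[1], pts[2])], w))  # 1.5
```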
In this paper, we present a novel semidefinite programming approach for multiple-instance learning. We first formulate multiple-instance learning as a combinatorial maximum-margin optimization problem with additional instance selection constraints within the framework of support vector machines. Although solving this primal problem requires non-convex programming, we can nevertheless derive an equivalent dual formulation that can be relaxed into a novel convex semidefinite program (SDP). The number of free parameters of the relaxed SDP scales with T, the number of instances, and the problem can be solved using a standard interior-point method. An empirical study shows promising performance of the proposed SDP in comparison with support vector machine approaches based on heuristic optimization procedures.
We study the feasibility and time of communication in random geometric radio networks, where nodes fail randomly with positive correlation. We consider a set of radio stations with the same communication range, distributed in a random uniform way on a unit square region. In order to capture fault dependencies, we introduce the ranged spot model in which damaging events, called spots, occur randomly and independently on the region, causing faults in all nodes located within distance s from them. Node faults within distance 2s become dependent in this model and are positively correlated. We investigate the impact of the spot arrival rate on the feasibility and the time of communication in the fault-free part of the network. We provide an algorithm which broadcasts correctly with probability 1 − ε in faulty random geometric radio networks of diameter D in time O(D + log(1/ε)).
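The ranged spot model is straightforward to sample: spots land on the unit square, and every node within distance s of some spot fails; two nodes within distance 2s of each other can be hit by the same spot, which is what makes their faults positively correlated. A minimal Monte Carlo sketch (the Poisson spot count and the specific parameters are assumptions of this illustration):

```python
import math
import random

def rng_poisson(lam, rng):
    # Knuth's simple Poisson sampler (adequate for small lam)
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def faulty_nodes(nodes, spot_rate, s, rng):
    """Sample one realization of the ranged spot model: a Poisson number
    of spots lands uniformly on the unit square, and every node within
    distance s of some spot fails. Returns the failed node indices."""
    spots = [(rng.random(), rng.random())
             for _ in range(rng_poisson(spot_rate, rng))]
    return {i for i, p in enumerate(nodes)
            if any(math.dist(p, sp) <= s for sp in spots)}

rng = random.Random(1)
nodes = [(rng.random(), rng.random()) for _ in range(200)]
failed = faulty_nodes(nodes, spot_rate=5, s=0.1, rng=rng)
print(len(failed), "of", len(nodes), "nodes failed")
```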
Current research depicts suburbs as becoming more heterogeneous in terms of socio-economic status. Providing a novel analysis, this paper engages with that research by operationalising suburban ways of living (homeownership, single-family dwelling occupancy and automobile use) and relating them to the geography of income across 26 Canadian metropolitan areas. We find that suburban ways of living exist in new areas and remain associated with higher incomes even as older suburbs, as places, have become more diverse. In the largest cities the relationship between income and suburban ways of living is weaker due to the growth of condominiums in downtowns that allow higher income earners to live urban lifestyles. Homeownership is overwhelmingly more important than other variables in explaining the geography of income across 26 metropolitan areas.
There is a paradoxical relationship between the density of solar housing and net household energy use. The amount of solar energy available per person decreases as density increases. At the same time, transportation energy, and to some extent, household operating energy decreases. Thus, an interesting question is posed: how does net energy use vary with housing density? This study attempts to provide insight into this question by examining three housing forms: low-density detached homes, medium-density townhouses, and high-density high-rise apartments in Toronto. The three major quantities of energy that are summed for each are building operational energy use, solar energy availability, and personal transportation energy use. Solar energy availability is determined on the basis of an effective annual collector efficiency. The results show that under the base case, in which solar panels are applied to conventional homes, the high-density development uses one-third less energy than the low-density one. Improving the efficiency of the homes results in a similar trend. Only when the personal vehicle fleet or solar collectors are made to be extremely efficient does the trend reverse: the low-density development results in lower net energy.
This article interrogates the question of what it means to be a scholar-commentator in the digital age. Deploying an autoethnographic style, the essay asks about the role of power and responsibility in teaching, research, and public commentary, particularly in the context of studying and engaging in Jewish politics. The article addresses questions about the proper role of the scholar in the academy and the role of subjectivity and political commitments in structuring scholarship, pedagogy, and public engagement. It also examines how one’s view of the profession can seem to shift through the emergence of new writing outlets and new forums for public engagement. Finally, the author investigates how a scholar’s own political commitments can shift over time, how one seeks to shore up identification on social media while trying to change hearts and minds through the op-ed pages, and how community identification can serve as a buffer and motivator for particular forms of research and political action.
This paper analyzes how the “particular symbolic fortunes” of Canada’s most widely recognized literary prize, the Scotiabank Giller Prize, undergo what James English calls “capital intraconversion”––how they are “culturally ‘laundered’” through their association with Frontier College, Canada’s longest-running adult literacy organization. While the Giller initially benefitted from fashioning itself as the private, industry-driven alternative to state-sponsored culture in Canada, increasing criticism of its corporate sponsorship has led, in the past decade, to a rebranding effort. This effort, I contend, seeks to benefit from two key terms––multiculturalism and literacy. Associated as the discourse of multiculturalism and the figure of the literate citizen are with the strong publics of the western, liberal-democratic nation-state, they possess a remarkable ability to accentuate the symbolic capital of Canada’s most widely recognized literary prize.
Ca-ATPase activity in sarcoplasmic reticulum (SR) membranes isolated from skeletal muscles of the typical hibernator, the ground squirrel Spermophilus undulatus, is about 2-fold lower than that in SR membranes of rats and rabbits and is further decreased 2-fold during hibernation. The use of the carbocyanine anionic dye Stains-All has revealed that the Ca-binding proteins of SR membranes, histidine-rich Ca-binding protein and sarcalumenin, in ground squirrel, rat, and rabbit SR have different electrophoretic mobility corresponding to apparent molecular masses of 165, 155, and 170 kDa and 130, 145, and 160 kDa, respectively; the electrophoretic mobility of calsequestrin (63 kDa) is the same in all preparations. The content of these Ca-binding proteins in SR membranes of the ground squirrels is decreased 3- to 4-fold, and the content of 55, 30, and 22 kDa proteins is significantly increased during hibernation.
A novel technique for increasing the sensitivity of tilted fibre Bragg grating (TFBG) based refractometers is presented. The TFBG sensor was coated with chemically synthesized silver nanowires 100 nm in diameter and several micrometres in length. A 3.5-fold increase in sensor sensitivity was obtained relative to the uncoated TFBG sensor. This increase is associated with the excitation of surface plasmons by orthogonally polarized fibre cladding modes at wavelengths near 1.5 μm. Refractometric information is extracted from the sensor via the strong polarization dependence of the grating resonances using a Jones matrix analysis of the transmission spectrum of the fibre.
Social defeat in mice is a potent stressor that promotes the development of depressive- and anxiety-like behaviours, as well as variations of neuroendocrine and brain neurotransmitter activity. Although environmental enrichment may protect against some of the adverse behavioural and biological effects of social defeat, it seems that, among male group-housed mice maintained in an enriched environment (EE), aggressive behaviours may be more readily instigated, thus promoting distress and exacerbating psychopathological features. Thus, although an EE can potentially have numerous beneficial effects, these may depend on the general conditions in which mice were raised. It was observed in the current investigations that EE group-housed BALB/cByJ mice displayed increased anxiety-like behaviours compared to their counterparts maintained in a standard environment (SE). Furthermore, in response to social defeat, EE group-housed male mice exhibited decreased weight gain, exaggerated corticosterone elevations and altered hippocampal norepinephrine utilization compared to their SE counterparts. These effects were not apparent in the individually housed EE mice and, in fact, enrichment among these mice appeared to buffer against serotonin changes induced by social defeat. It is possible that some potentially beneficial effects of enrichment were precluded among group-housed mice, possibly owing to social disturbances that might occur in these conditions. In fact, even if social interaction is an essential feature of enrichment, it seems that some of the positive effects of this housing condition might be optimal when mice are housed individually, particularly with regard to buffering the effects of social defeat.
A 100-kDa protein that is a main component of the microsomal fraction from rabbit gastric mucosa is phosphorylated by cAMP-dependent protein kinase (PKA) in the presence of 0.2% Triton X-100. Microsomes from rabbit gastric mucosa possess activity of H,K-ATPase but not activity of Na,K-ATPase. Incubation of microsomes with 5 μM fluorescein 5′-isothiocyanate (FITC) results in both an inhibition of H,K-ATPase and labeling of a protein with an electrophoretic mobility corresponding to the mobility of the protein phosphorylated by PKA. The data suggest that the α-subunit of H,K-ATPase can be a potential target for PKA phosphorylation.
The electrical resistivity distribution at the base of the La Soufrière of Guadeloupe lava dome is reconstructed by using transmission electrical resistivity data obtained by injecting an electrical current between two electrodes located on opposite sides of the volcano. Several pairs of injection electrodes are used in order to constitute a data set spanning the whole range of azimuths, and the electrical potential is measured along a cable covering an angular sector of ≈120° along the base of the dome. The data are inverted to perform a slice electrical resistivity tomography (SERT) with specific functions implemented in the EIDORS open source package dedicated to electrical impedance tomography applied to medicine and geophysics. The resulting image shows the presence of highly conductive regions separated by resistive ridges. The conductive regions correspond to unconsolidated material saturated by hydrothermal fluids. Two of them are associated with partial flank collapses and may represent large reservoirs that could have played an important role during past eruptive events. The resistive ridges may represent massive andesite and are expected to constitute hydraulic barriers.
Single-longitudinal-mode operation of Er3+-P2O5-codoped silica planar waveguide lasers which are equipped with integrated Bragg grating reflectors is demonstrated, with a polarized output of 340 μW at 1546 nm. The gratings are photo-imprinted using 193 nm light exposure through a phase mask in GeO2-free optical waveguides that have been sensitized by H2 loading.
The core refractive index of Corning SMF-28 optical fibre exposed to ArF laser pulses increases with the square of the fluence per pulse. Bragg gratings with a refractive index modulation amplitude higher than 10^-3 have been obtained. This is an order of magnitude improvement over previously reported values for this type of fibre in the absence of treatment to enhance the photosensitivity.
When hydrogen loading is used to enhance the photosensitivity of silica-based optical waveguides and fibres, the presence of molecular hydrogen dissolved in the glass matrix changes the effective index of propagation of guided optical modes by as much as 0.05%. Real-time monitoring of the reflectivity spectrum of Bragg gratings written in such conditions shows that the centre wavelength follows the changes in hydrogen concentration due to diffusion and reaction with glass defects.
An apodized chirped in-fibre Bragg grating that has a linear dispersion characteristic is reported. The frequency components of an optical pulse (centre wavelength 1551 nm; 10 GHz bandwidth) incident on the grating are reflected with a relative delay that varies linearly from 0 to 130 ps across the spectral width of the pulse. The dispersion compensator is used to correct for the dispersion in a 100 km link (nondispersion shifted fibre) operating at a 10 Gbit/s transmission rate and a wavelength of 1551 nm.
An apodized in-fibre Bragg grating reflector is fabricated using the phase mask photoimprinting technique. The reflector has a centre wavelength of 1550 nm, a bandwidth of 0.22 nm and a peak reflectivity of 90%. At 0.4 nm (50 GHz) from the centre wavelength the reflectivity is 40 dB lower than the peak reflectivity; this is an improvement of more than 20 dB over an unapodized Bragg grating reflector with similar bandwidth and peak reflectivity.
Random Forests variable importance measures are often used to rank variables by their relevance to a classification problem and subsequently reduce the number of model inputs in high-dimensional data sets, thus increasing computational efficiency. However, as a result of the way that training data and predictor variables are randomly selected for use in constructing each tree and splitting each node, it is also well known that if too few trees are generated, variable importance rankings tend to differ between model runs. In this letter, we characterize the effect of the number of trees (ntree) and class separability on the stability of variable importance rankings, and develop a systematic approach to define the number of model runs and/or trees required to achieve stability in variable importance measures. Results demonstrate that either a large ntree for a single model run, or values averaged across multiple model runs with fewer trees, are sufficient for achieving stable mean importance values. While the latter is far more computationally efficient, both methods tend to lead to the same ranking of variables. Moreover, the optimal number of model runs differs depending on the separability of classes. Recommendations are made to users regarding how to determine the number of model runs and/or trees that are required to achieve stable variable importance rankings.
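The stability question can be framed numerically: treat each model run as producing the true importances plus run-to-run noise, and measure how close the ranking of run-averaged importances is to the true ranking. The sketch below does this with a synthetic noise model and Spearman rank correlation (the noise model and parameters are assumptions of this illustration, not the letter's experimental setup):

```python
import random
import statistics

def spearman(a, b):
    # Spearman rank correlation between two score vectors (no ties assumed)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

def noisy_importances(true_imp, noise, rng):
    # stand-in for the importance scores of one Random Forest run:
    # the "true" relevance of each variable plus run-to-run noise
    return [v + rng.gauss(0, noise) for v in true_imp]

rng = random.Random(42)
true_imp = [i / 10 for i in range(10)]

def averaged_ranking_quality(n_runs):
    runs = [noisy_importances(true_imp, 0.3, rng) for _ in range(n_runs)]
    mean_imp = [statistics.mean(col) for col in zip(*runs)]
    return spearman(mean_imp, true_imp)

# averaging importances over more runs stabilizes the ranking
few, many = averaged_ranking_quality(2), averaged_ranking_quality(200)
assert many >= few
print(round(few, 3), round(many, 3))
```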
The rise of game development and game studies on university campuses prompts academic libraries to consider how to support teaching and research in this area. This article examines current issues and challenges in the development of game collections at academic libraries. The gaming ecosystem has become more complex and libraries may need to move beyond collections largely based on console video games. This article will advance the discussion by considering emerging issues to support access to the full range of games. The article will use examples from Carleton University Library, Ottawa, which has been developing a game collection since 2008.
The design and analysis of community-scale energy systems and incentives is a non-trivial task. The challenge of such undertakings is the well-documented uncertainty of building occupant behaviours. This is especially true in the residential sector, where occupants are given more freedom of activity compared to work environments. Further complicating matters is the dearth of available measured data. Building performance simulation tools are one approach to community energy analysis; however, such tools often lack realistic models for occupant-driven demands, such as appliance and lighting (AL) loads. For community-scale analysis, such AL models must also be able to capture the temporal and inter-dwelling variation to achieve realistic estimates of aggregate electrical demand. This work adapts the existing Centre for Renewable Energy Systems Technology (CREST) residential energy model to simulate Canadian residential AL demands. The focus of the analysis is to determine if the daily, seasonal, and inter-dwelling variation of AL demands estimated by the CREST model is realistic. An in-sample validation is conducted on the model using 22 high-resolution measured AL demand profiles from dwellings located in Ottawa, Canada. The adapted CREST model is shown to broadly capture the variation of AL demands observed in the measured data; however, seasonal variation in daily AL demand behaviour was found to be under-estimated by the model. The average and variance of daily load factors were found to be similar between measured and modelled. The model was found to under-predict the daily coincidence factors of aggregated demands, although the variance of coincidence factors was shown to be similar between measured and modelled. A stochastic baseload input developed for this work was found to improve estimates of the magnitude and variation of both baseload and peak demands.
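The two validation metrics used above have standard definitions: the daily load factor is average demand divided by peak demand, and the coincidence factor is the aggregate peak divided by the sum of individual dwelling peaks (below 1 when dwellings do not peak simultaneously). A minimal sketch on toy hourly profiles (the profiles are invented for illustration):

```python
def load_factor(profile):
    # daily load factor: average demand / peak demand
    return sum(profile) / len(profile) / max(profile)

def coincidence_factor(profiles):
    # aggregate peak / sum of individual peaks
    aggregate = [sum(step) for step in zip(*profiles)]
    return max(aggregate) / sum(max(p) for p in profiles)

# two toy dwellings with offset evening peaks (kW, hourly over one day)
d1 = [0.2] * 17 + [2.0, 1.5, 0.8] + [0.2] * 4
d2 = [0.3] * 18 + [0.3, 1.8, 1.2] + [0.3] * 3
print(round(load_factor(d1), 3))            # 0.177
print(round(coincidence_factor([d1, d2]), 3))  # 0.684
```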
This article describes the progress made toward implementing Resource Description and Access (RDA) in libraries across Canada, as of Fall 2013. Differences in the training experiences in the English-speaking cataloging communities and French-speaking cataloging communities are discussed. Preliminary results of a survey of implementation in English-Canadian libraries are included as well as a summary of the support provided for French-Canadian libraries. Data analysis includes an examination of the rate of adoption in Canada by region and by sector. Challenges in RDA training delivery in a Canadian context are identified, as well as opportunities for improvement and expansion of RDA training in the future.
We present a method of segmenting video to detect cuts with accuracy equal to or better than both histogram-based and other feature-based methods. As well, the method is faster than other feature-based methods. By utilizing feature tracking on corners, rather than lines, we are able to reliably detect features such as cuts, fades and salient frames. Experimental evidence shows that the method is able to withstand high-motion situations better than existing methods. Initial implementations using full-sized video frames are able to achieve processing rates of 10-30 frames per second depending on the level of motion and the number of features being tracked; this includes the time to generate the MPEG decompressed frames.
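For context, the histogram-based baseline that such feature-tracking methods are compared against declares a cut when the normalised histogram distance between consecutive frames exceeds a threshold. A minimal sketch on synthetic grayscale frames (the bin count and threshold are assumptions of this illustration):

```python
def histogram(frame, bins=8):
    # grayscale histogram of one frame (pixel values 0..255)
    h = [0] * bins
    for px in frame:
        h[px * bins // 256] += 1
    return h

def detect_cuts(frames, threshold=0.5):
    """Baseline histogram-difference cut detector: a cut is declared at
    frame i when the normalised L1 distance between the histograms of
    frames i-1 and i exceeds the threshold."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        d = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * len(frames[i]))
        if d > threshold:
            cuts.append(i)
    return cuts

dark = [20] * 64               # a dark 8x8 "frame"
bright = [220] * 64            # a bright one
video = [dark] * 5 + [bright] * 5  # one hard cut at frame 5
print(detect_cuts(video))  # [5]
```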
We describe a novel distributed storage protocol for Disruption (Delay) Tolerant Networks (DTNs). Since DTNs cannot guarantee the connectivity of the network at all times, distributed data storage and lookup have to be performed in a store-and-forward way. In this work, we define local distributed location regions, called cells, to facilitate the data storage and lookup process. Nodes in a cell have a high probability of moving within their cells. Our protocol stores data items in cells, which have a hierarchical structure, to reduce the routing information stored at nodes. Multiple copies of a data item may be stored at nodes to counter the adverse impact of the nature of DTNs. The cells are relatively stable regions and, as a result, data exchange overheads among nodes are reduced. Through experimentation, we show that the proposed distributed storage protocol achieves higher successful data storage ratios, with lower delays and limited data item exchange requirements, than other protocols in the literature.
It has been observed in the literature that, as the cardinality of the prescribed discrete input-output data set increases, the corresponding four-bar linkages that minimise the Euclidean norm of the design and structural errors tend to converge to the same linkage. The important implication is that minimising the Euclidean norm, or any p-norm, of the structural error, which leads to a nonlinear least-squares problem requiring iterative solutions, can be accomplished implicitly by minimising that of the design error, which leads to a linear least-squares problem that can be solved directly. Accordingly, the goal of this paper is to take the first step towards proving that, as the cardinality of the data set tends towards infinity, the observation is indeed true. In this paper we integrate the synthesis equations in the range between minimum and maximum input values, thereby reposing the discrete approximate synthesis problem as a continuous one. Moreover, we prove that a lower bound of the Euclidean norm, and indeed of any p-norm, of the design error for planar RRRR function-generating linkages exists and is attained with continuous approximate synthesis.
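The "linear least-squares problem that can be solved directly" arises because Freudenstein's input-output equation of a four-bar function generator, K1·cos(φ) − K2·cos(ψ) + K3 = cos(φ − ψ), is linear in the parameters K = (K1, K2, K3); the design error is the residual of this linear system. A minimal sketch solving it directly via the normal equations (the sampled input-output data are invented for illustration):

```python
import math

def freudenstein_row(phi, psi):
    # one row of the linear system A*K = b arising from Freudenstein's
    # equation K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi)
    return [math.cos(phi), -math.cos(psi), 1.0], math.cos(phi - psi)

def lstsq3(rows, rhs):
    # solve the 3x3 normal equations (A^T A) x = A^T b by Gauss-Jordan
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * y for r, y in zip(rows, rhs)) for i in range(3)]
    m = [ata[i] + [atb[i]] for i in range(3)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

data = [(0.1 * i, 0.07 * i + 0.2) for i in range(1, 30)]  # sampled (phi, psi)
rows, rhs = map(list, zip(*(freudenstein_row(p, s) for p, s in data)))
K = lstsq3(rows, rhs)

def design_error(K):
    return math.sqrt(sum((sum(a * k for a, k in zip(r, K)) - y) ** 2
                         for r, y in zip(rows, rhs)))

# the direct linear solution is a minimiser: nudging K in any coordinate
# direction cannot decrease the design error
e = design_error(K)
for delta in [(0.01, 0, 0), (0, 0.01, 0), (0, 0, 0.01), (-0.01, 0, 0)]:
    assert design_error([k + d for k, d in zip(K, delta)]) >= e
print([round(k, 3) for k in K])
```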