Search Results
- Resource Type:
- Conference Proceeding
- Creator:
- Barbeau, Michel, Kranakis, Evangelos, and Garcia-Alfaro, Joaquin
- Abstract:
- The design and implementation of security threat mitigation mechanisms in RFID systems, especially in low-cost RFID tags, are gaining great attention in both industry and academia. One main focus of research is authentication and privacy techniques to prevent attacks targeting the insecure wireless channel of these systems. Cryptography is a key tool to address these threats. Nevertheless, strong hardware constraints, such as production costs, power consumption, response time, and regulatory compliance, make the use of traditional cryptography in these systems a very challenging problem. The use of low-overhead procedures becomes the main approach to solving these problems where traditional cryptography cannot fit. Recent results and trends, with an emphasis on lightweight techniques for addressing critical threats against low-cost RFID systems, are surveyed.
- Date Created:
- 2010-05-03
- Resource Type:
- Conference Proceeding
- Creator:
- Czyzowicz, Jurek, Opatrny, Jaroslav, Kranakis, Evangelos, Narayanan, Lata, Krizanc, Danny, Stacho, Ladislav, Urrutia, Jorge, Yazdani, Mohammadreza, and Lambadaris, Ioannis
- Abstract:
- A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of a sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I), and the sensing ranges of the sensors, we are interested in finding final positions of the sensors which establish a barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP-complete even to approximate up to a constant factor when the sensors may have different sensing ranges. When the sensors have an identical sensing range, we give several efficient algorithms to calculate the final destinations so that the sensors either establish a barrier coverage or, if complete coverage is not feasible, maximize the coverage of the segment, while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.
- Date Created:
- 2010-12-13
- Resource Type:
- Conference Proceeding
- Creator:
- Cervera, Gimer, Barbeau, Michel, Garcia-Alfaro, Joaquin, and Kranakis, Evangelos
- Abstract:
- The Hierarchical Optimized Link State Routing (HOLSR) protocol enhances the scalability and heterogeneity of traditional OLSR-based Mobile Ad Hoc Networks (MANETs). It organizes the network in logical levels and its nodes in clusters. In every cluster, it implements the mechanisms and algorithms of the original OLSR to generate and distribute control traffic information. However, the HOLSR protocol was designed with no security in mind: it both inherits security threats from OLSR and adds new ones. For instance, misbehaving nodes can severely affect important HOLSR operations, such as cluster formation. Cluster IDentification (CID) messages are used to organize a HOLSR network into clusters. In every message, the hop count field indicates to the receiver the distance in hops to the originator. An attacker may maliciously alter the hop count field; as a consequence, a receiver node may join a cluster head that is farther away than it appears, and the scalability properties of a HOLSR network are affected by an unbalanced distribution of nodes per cluster. We present a solution based on the use of hash chains to protect mutable fields in CID messages. As a consequence, when a misbehaving node alters the hop count field in a CID message, the receiver nodes are able to detect and discard the invalid message. (An illustrative sketch follows this record.)
- Date Created:
- 2012-01-27
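The hash-chain defence described in the abstract above can be illustrated with a short sketch. The standard idea for protecting a mutable hop-count field is that the originator commits to the tail (anchor) of a hash chain and each relay hashes the disclosed token once, so a receiver can check that the advertised hop count was not decreased. The message fields and chain length below are hypothetical; this is a minimal sketch of the general technique, not the paper's exact protocol.

```python
import hashlib

def h(x: bytes) -> bytes:
    """One step of the hash chain."""
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, max_hops: int) -> list:
    """chain[i] = h^i(seed); the originator publishes the anchor chain[max_hops]."""
    chain = [seed]
    for _ in range(max_hops):
        chain.append(h(chain[-1]))
    return chain

# Originator side (hypothetical CID message fields).
MAX_HOPS = 8
chain = make_chain(b"originator-secret", MAX_HOPS)
anchor = chain[MAX_HOPS]              # distributed authentically once
msg = {"hop_count": 0, "token": chain[0]}

def forward(msg):
    """Each relay increments hop_count and hashes the token once."""
    return {"hop_count": msg["hop_count"] + 1, "token": h(msg["token"])}

def verify(msg, anchor, max_hops):
    """Receiver check: hashing the token the remaining number of times must hit
    the anchor. A node that *decreases* hop_count cannot produce a matching token."""
    t = msg["token"]
    for _ in range(max_hops - msg["hop_count"]):
        t = h(t)
    return t == anchor

relayed = forward(forward(msg))        # two honest hops
assert verify(relayed, anchor, MAX_HOPS)

tampered = dict(relayed, hop_count=1)  # attacker claims to be closer than it is
assert not verify(tampered, anchor, MAX_HOPS)
```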
- Resource Type:
- Conference Proceeding
- Creator:
- Van Walderveen, Freek, Davoodi, Pooya, and Smid, Michiel
- Abstract:
- Given a set of n points in the plane, range diameter queries ask for the furthest pair of points in a given axis-parallel rectangular range. We provide evidence for the hardness of designing space-efficient data structures that support range diameter queries by giving a reduction from the set intersection problem. The difficulty of the latter problem is widely acknowledged, and it is conjectured to require nearly quadratic space in order to obtain constant query time, which is matched by known data structures for both problems, up to polylogarithmic factors. We strengthen the evidence by giving a lower bound for an important subproblem arising in solutions to the range diameter problem: computing the diameter of two convex polygons that are separated by a vertical line and are preprocessed independently requires almost linear time in the number of vertices of the smaller polygon, no matter how much space is used. We also show that range diameter queries can be answered much more efficiently for the case of points in convex position by describing a data structure of size O(n log n) that supports queries in O(log n) time.
- Date Created:
- 2012-05-15
- Resource Type:
- Conference Proceeding
- Creator:
- Mannan, Mohammad, Barrera, David, Van Oorschot, Paul C., Lie, David, and Brown, Carson D.
- Abstract:
- Instead of allowing the recovery of original passwords, forgotten passwords are often reset using online mechanisms such as password verification questions (PVQ methods) and password reset links in email. These mechanisms are generally weak, exploitable, and force users to choose new passwords. Emailing the original password exposes the password to third parties. To address these issues, and to allow forgotten passwords to be securely restored, we present a scheme called Mercury. Its primary mode employs user-level public keys and a personal mobile device (PMD) such as a smartphone, netbook, or tablet. A user generates a key pair on her PMD; the private key remains on the PMD and the public key is shared with different sites (e.g., during account setup). For password recovery, the site sends the (public key)-encrypted password to the user's pre-registered email address, or displays the encrypted password on a webpage, e.g., as a barcode. The encrypted password is then decrypted using the PMD and revealed to the user. A prototype implementation of Mercury is available as an Android application. (A minimal sketch of the encrypt-and-recover step follows this record.)
- Date Created:
- 2012-02-21
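The core of the recovery flow above, a password encrypted under the user's public key and decrypted only on the personal mobile device, can be sketched with an ordinary RSA-OAEP round trip using the third-party pyca/cryptography package. This is purely an illustration under our own assumptions, not Mercury's implementation: the barcode/email transport, key registration, and how the site stores the encrypted password are all omitted.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. The user generates a key pair on the personal mobile device (PMD).
pmd_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pmd_public_key = pmd_private_key.public_key()   # shared with the site at account setup

# 2. On a recovery request, the site transmits the password encrypted under the
#    user's public key (e.g., by email or displayed as a barcode).
password = b"correct horse battery staple"
ciphertext = pmd_public_key.encrypt(password, OAEP)

# 3. Only the PMD, which holds the private key, can recover the original password.
recovered = pmd_private_key.decrypt(ciphertext, OAEP)
assert recovered == password
```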
- Resource Type:
- Conference Proceeding
- Creator:
- Seidel, Raimund, Dehne, Frank, and Klein, Rolf
- Abstract:
- Given a set S of s points in the plane, where do we place a new point, p, in order to maximize the area of its region in the Voronoi diagram of S and p? We study the case where the Voronoi neighbors of p are in convex position, and prove that there is at most one local maximum.
- Date Created:
- 2002-12-01
- Resource Type:
- Conference Proceeding
- Creator:
- Bose, Prosenjit and Van Renssen, André
- Abstract:
- We present tight upper and lower bounds on the spanning ratio of a large family of constrained θ-graphs. We show that constrained θ-graphs with 4k + 2 cones (k ≥ 1 and integer) have a tight spanning ratio of 1 + 2 sin(θ/2), where θ is 2π/(4k + 2). We also present improved upper bounds on the spanning ratio of the other families of constrained θ-graphs. (A small numerical illustration follows this record.)
- Date Created:
- 2014-01-01
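As a quick numerical check of the bound stated above, the tight spanning ratio 1 + 2 sin(θ/2) with θ = 2π/(4k + 2) can be evaluated for the first few values of k. This is only arithmetic on the formula quoted in the abstract, not an additional result of the paper.

```python
import math

def spanning_ratio(k: int) -> float:
    """Tight spanning ratio of the constrained θ-graph with 4k + 2 cones."""
    theta = 2 * math.pi / (4 * k + 2)
    return 1 + 2 * math.sin(theta / 2)

for k in (1, 2, 3, 10):
    cones = 4 * k + 2
    print(f"k={k:2d}  cones={cones:3d}  spanning ratio ≈ {spanning_ratio(k):.4f}")
# k=1 (6 cones) gives 1 + 2*sin(π/6) = 2; the ratio tends to 1 as k grows.
```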
- Resource Type:
- Conference Proceeding
- Creator:
- Peleg, David, Krizanc, Danny, Kirousis, Lefteris M., Kranakis, Evangelos, Kaklamanis, Christos, and Bose, Prosenjit
- Abstract:
- In wireless communication, the signal of a typical broadcast station is transmitted from a broadcast center p and reaches objects at a distance, say, R from it. In addition, there is a radius r, r < R, such that the signal originating from the center of the station is so strong that human habitation within distance r from the center p should be avoided. Thus every station determines a region which is an "annulus of permissible habitation". We consider the following station layout (SL) problem: cover a given (say, rectangular) planar region which includes a collection of orthogonal buildings with a minimum number of stations, so that every point in the region is within the reach of a station, while at the same time no building is within the dangerous range of a station. We give algorithms for computing such station layouts in both the one- and two-dimensional cases.
- Date Created:
- 1999-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Krizanc, Danny, Kranakis, Evangelos, and Kirousis, Lefteris M.
- Abstract:
- Let φ be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number κ such that if the ratio of the number of clauses over the number of variables of φ strictly exceeds κ, then φ is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, an upper bound on κ can be obtained. (The standard first-moment version of that calculation is sketched after this record.)
- Date Created:
- 1996-01-01
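For context, the "straightforward argument" alluded to above is usually the first-moment bound: a fixed truth assignment satisfies a random 3-clause with probability 7/8, so with n variables and rn clauses the expected number of satisfying assignments is 2^n (7/8)^(rn), which vanishes once r exceeds log(2)/log(8/7) ≈ 5.19. The snippet below only evaluates that standard calculation; it is not meant to state this paper's own improved threshold.

```python
import math

def expected_satisfying_assignments(n: int, r: float) -> float:
    """First-moment bound: E[# satisfying assignments] = 2^n * (7/8)^(r*n)."""
    return (2 * (7 / 8) ** r) ** n

threshold = math.log(2) / math.log(8 / 7)   # ≈ 5.191, where the base drops below 1
print(f"first-moment threshold ≈ {threshold:.3f}")
for r in (4.0, 5.0, 5.2, 6.0):
    print(f"r={r}: E ≈ {expected_satisfying_assignments(100, r):.3e} for n=100")
# Above the threshold the expectation, and hence (by Markov's inequality) the
# probability of satisfiability, tends to zero as n grows.
```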
- Resource Type:
- Conference Proceeding
- Creator:
- Maheshwari, Anil, Sack, Jörg-Rüdiger, Lanthier, Mark, and Aleksandrov, Lyudmil
- Date Created:
- 1998-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Morin, Pat and Bose, Prosenjit
- Abstract:
- We consider online routing strategies for routing between the vertices of embedded planar straight-line graphs. Our results include (1) two deterministic memoryless routing strategies, one that works for all Delaunay triangulations and the other that works for all regular triangulations, (2) a randomized memoryless strategy that works for all triangulations, (3) an O(1) memory strategy that works for all convex subdivisions, (4) an O(1) memory strategy that approximates the shortest path in Delaunay triangulations, and (5) theoretical and experimental results on the competitiveness of these strategies. (A sketch of a memoryless strategy follows this record.)
- Date Created:
- 1999-01-01
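A memoryless routing strategy of the kind surveyed above decides each step using only the current vertex, its neighbours, and the destination. The sketch below shows greedy routing, i.e., always moving to the neighbour closest to the destination, which is one classical memoryless strategy known to succeed on Delaunay triangulations. The paper's specific strategies (including the randomized and O(1)-memory ones) differ, so treat this as an illustration of the model, not of the paper's algorithms; the toy graph is made up.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(adj, pos, source, target, max_steps=10_000):
    """Memoryless greedy routing: from the current vertex always move to the
    neighbour nearest (in Euclidean distance) to the target vertex.
    adj: vertex -> list of neighbouring vertices; pos: vertex -> (x, y)."""
    path, current = [source], source
    for _ in range(max_steps):
        if current == target:
            return path
        nxt = min(adj[current], key=lambda v: dist(pos[v], pos[target]))
        if dist(pos[nxt], pos[target]) >= dist(pos[current], pos[target]):
            return None          # stuck in a local minimum (does not happen on Delaunay)
        path.append(nxt)
        current = nxt
    return None

# Toy embedded graph: a square with one diagonal.
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
adj = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [0, 2]}
print(greedy_route(adj, pos, 0, 2))   # [0, 2]
```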
- Resource Type:
- Conference Proceeding
- Creator:
- Maheshwari, Anil and Zeh, Norbert
- Abstract:
- We present external memory algorithms for outerplanarity testing, embedding outerplanar graphs, breadth-first search (BFS) and depth-first search (DFS) in outerplanar graphs, and finding a separator of size 2 for a given outerplanar graph. Our algorithms take O(sort(N)) I/Os and can easily be improved to take O(perm(N)) I/Os, as all these problems have linear-time solutions in internal memory. For BFS, DFS, and outerplanar embedding we show matching lower bounds.
- Date Created:
- 1999-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Prencipe, Giuseppe, Cáceres, Edson, Chan, Albert, and Dehne, Frank
- Abstract:
- In this paper, we present parallel algorithms for the coarse grained multicomputer (CGM) and the bulk synchronous parallel computer (BSP) for solving two well-known graph problems: (1) determining whether a graph G is bipartite, and (2) determining whether a bipartite graph G is convex. Our algorithms require O(log p) and O(log² p) communication rounds, respectively, and linear sequential work per round on a CGM with p processors and N/p local memory per processor, N = |G|. The algorithms assume that N/p ≥ p^ε for some fixed ε > 0, which is true for all commercially available multiprocessors. Our results imply BSP algorithms with O(log p) and O(log² p) supersteps, respectively, O(g log(p) N/p) communication time, and O(log(p) N/p) local computation time. Our algorithm for determining whether a bipartite graph is convex includes a novel coarse-grained parallel version of the PQ-tree data structure introduced by Booth and Lueker. Hence, our algorithm also solves, with the same time complexity as indicated above, the problem of testing the consecutive-ones property for (0, 1) matrices as well as the chordal graph recognition problem. These, in turn, have numerous applications in graph theory, DNA sequence assembly, database theory, and other areas.
- Date Created:
- 2000-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- White, Anthony and Salehi-Abari, Amirali
- Abstract:
- Autonomous agents require trust and reputation concepts in order to identify communities of agents with which to interact reliably in ways analogous to humans. Agent societies are invariably heterogeneous, with multiple decision making policies and actions governing their behaviour. Through the introduction of naive agents, this paper shows empirically that while learning agents can identify malicious agents through direct interaction, naive agents compromise utility through their inability to discern malicious agents. Moreover, the impact of the proportion of naive agents on the society is analyzed. The paper demonstrates that there is a need for witness interaction trust to detect naive agents in addition to the need for direct interaction trust to detect malicious agents. By proposing a set of policies, the paper demonstrates how learning agents can isolate themselves from naive and malicious agents.
- Date Created:
- 2010-07-20
- Resource Type:
- Conference Proceeding
- Creator:
- Lanthier, Mark, Velazquez, Elio, and Santoro, Nicola
- Abstract:
- This paper proposes a pro-active solution to the Frugal Feeding Problem (FFP) in Wireless Sensor Networks. The FFP attempts to find energy-efficient routes for a mobile service entity to rendezvous with each member of a team of mobile robots. Although the complexity of the FFP is similar to the Traveling Salesman Problem (TSP), we propose an efficient solution, completely distributed and localized for the case of a fixed rendezvous location (i.e., service facility with limited number of docking ports) and mobile capable entities (sensors). Our pro-active solution reduces the FFP to finding energy-efficient routes in a dynamic Compass Directed unit Graph (CDG). The proposed CDG incorporates ideas from forward progress routing and the directionality of compass routing in an energy-aware unit sub-graph. Navigating the CDG guarantees that each sensor will reach the rendezvous location in a finite number of steps. The ultimate goal of our solution is to achieve energy equilibrium (i.e., no further sensor losses due to energy starvation) by optimizing the use of the shared resource (recharge station). We also examine the impact of critical parameters such as transmission range, cost of mobility and sensor knowledge in the overall performance.
- Date Created:
- 2011-11-14
- Resource Type:
- Conference Proceeding
- Creator:
- Guo, Yuhong and Li, Xin
- Abstract:
- Multi-label classification is a central problem in many application domains. In this paper, we present a novel supervised bi-directional model that learns a low-dimensional mid-level representation for multi-label classification. Unlike traditional multi-label learning methods which identify intermediate representations from either the input space or the output space but not both, the mid-level representation in our model has two complementary parts that capture intrinsic information of the input data and the output labels respectively under the autoencoder principle while augmenting each other for the target output label prediction. The resulting optimization problem can be solved efficiently using an iterative procedure with alternating steps, while closed-form solutions exist for one major step. Our experiments conducted on a variety of multi-label data sets demonstrate the efficacy of the proposed bi-directional representation learning model for multi-label classification.
- Date Created:
- 2014-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Dujmović, Vida, De Carufel, Jean-Lou, Bose, Prosenjit, and Paradis, Frédérik
- Abstract:
- The well-separated pair decomposition (WSPD) of the complete Euclidean graph defined on points in ℝ² (Callahan and Kosaraju [JACM, 42(1): 67-90, 1995]) is a technique for partitioning the edges of the complete graph, based on length, into a linear number of sets. Among the many different applications of WSPDs, Callahan and Kosaraju proved that the sparse subgraph that results by selecting an arbitrary edge from each set (called a WSPD-spanner) is a 1 + 8/(s − 4)-spanner, where s > 4 is the separation ratio used for partitioning the edges. Although competitive local-routing strategies exist for various spanners such as Yao-graphs, Θ-graphs, and variants of Delaunay graphs, few local-routing strategies are known for any WSPD-spanner. Our main contribution is a local-routing algorithm with a near-optimal competitive routing ratio of 1 + O(1/s) on a WSPD-spanner. Specifically, we present a 2-local and a 1-local routing algorithm on a WSPD-spanner with competitive routing ratios of 1 + 6/(s − 2) + 4/s and 1 + 6/(s − 2) + 6/s + 4/(s² − 2s) + 8/s², respectively.
- Date Created:
- 2017-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Bertossi, Leopoldo
- Abstract:
- A correspondence between database tuples as causes for query answers in databases and tuple-based repairs of inconsistent databases with respect to denial constraints has already been established. In this work, answer-set programs that specify repairs of databases are used as a basis for solving computational and reasoning problems about causes. Here, causes are also introduced at the attribute level by appealing to a repair semantics that is both null-based and attribute-based. The corresponding repair programs are presented, and they are used as a basis for computation and reasoning about attribute-level causes.
- Date Created:
- 2018-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Kim, Sang-Woon and Oommen, B. John
- Abstract:
- The Maximum Likelihood (ML) and Bayesian estimation paradigms work within the model that the data from which the parameters are to be estimated is treated as a set rather than as a sequence. The pioneering paper that dealt with the field of sequence-based estimation [2] utilized both the information in the observations and in their sequence of appearance. The results of [2] introduced the concepts of Sequence Based Estimation (SBE) for the Binomial distribution, where the authors derived the corresponding MLE results when the samples are taken two-at-a-time, and then extended these for the cases when they are processed three-at-a-time, four-at-a-time, etc. These results were generalized for the multinomial "two-at-a-time" scenario in [3]. This paper, which is dedicated to the memory of Dr. Mohamed Kamel, a close friend of the first author, now further generalizes the results found in [3] for the multinomial case and for subsequences of length 3. The strategy used in [3] (and also here) involves a novel phenomenon called "Occlusion" that has not been reported in the field of estimation. The phenomenon can be described as follows: by occluding (hiding or concealing) certain observations, we map the estimation problem onto a lower-dimensional space, i.e., onto a binomial space. Once these occluded SBEs have been computed, the overall Multinomial SBE (MSBE) can be obtained by combining these lower-dimensional estimates. In each case, we formally prove and experimentally demonstrate the convergence of the corresponding estimates.
- Date Created:
- 2016-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Maheshwari, Anil, Nandy, Ayan, Smid, Michiel, and Das, Sandip
- Abstract:
- Consider a line segment R consisting of n facilities. Each facility is a point on R, and it needs to be assigned exactly one of the colors from a given palette of c colors. At an instant of time, only the facilities of one particular color are 'active' and all other facilities are 'dormant'. For the set of facilities of a particular color, we compute the one-dimensional Voronoi diagram and find the cell, i.e., a segment, of maximum length. The users are assumed to be uniformly distributed over R and travel to the nearest facility of that particular color that is active. Our objective is to assign colors to the facilities in such a way that the length of the longest cell is minimized. We solve this optimization problem for various values of n and c. We propose an optimal coloring scheme for the case where the number of facilities n is a multiple of c, as well as for the general case where n is not a multiple of c. When n is a multiple of c, we compute an optimal scheme in Θ(n) time. For the general case, we propose a coloring scheme that returns the optimum in O(n² log n) time. (A small helper that evaluates a given coloring follows this record.)
- Date Created:
- 2014-01-01
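The objective described above is easy to evaluate for any candidate coloring: for each color class, the active facilities induce a one-dimensional Voronoi partition of the segment, and the cost of the coloring is the longest cell over all colors. The helper below computes that cost; the facility positions and colorings are made-up inputs, and this is only the evaluation step, not the paper's optimal Θ(n) or O(n² log n) coloring schemes.

```python
def longest_cell(points, left, right):
    """Length of the longest 1-D Voronoi cell of `points` within segment [left, right]."""
    pts = sorted(points)
    best = 0.0
    for i, p in enumerate(pts):
        lo = left if i == 0 else (pts[i - 1] + p) / 2
        hi = right if i == len(pts) - 1 else (p + pts[i + 1]) / 2
        best = max(best, hi - lo)
    return best

def coloring_cost(facilities, colors, left, right):
    """Max over colors of the longest Voronoi cell when only that color is active."""
    by_color = {}
    for x, c in zip(facilities, colors):
        by_color.setdefault(c, []).append(x)
    return max(longest_cell(pts, left, right) for pts in by_color.values())

facilities = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]                      # n = 6 facilities on [0, 7]
print(coloring_cost(facilities, [0, 1, 0, 1, 0, 1], 0.0, 7.0))   # alternating colors: 3.0
print(coloring_cost(facilities, [0, 0, 0, 1, 1, 1], 0.0, 7.0))   # contiguous blocks: 4.5
```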
- Resource Type:
- Conference Proceeding
- Creator:
- Oommen, B. John and Kim, Sang-Woon
- Abstract:
- This paper deals with the relatively new field of sequence-based estimation, which involves utilizing both the information in the observations and in their sequence of appearance. Our intention is to obtain Maximum Likelihood estimates by "extracting" the information contained in the observations when perceived as a sequence rather than as a set. The results of [15] introduced the concepts of Sequence Based Estimation (SBE) for the Binomial distribution. This current paper generalizes these results for the multinomial "two-at-a-time" scenario. We invoke a novel phenomenon called "Occlusion" that can be described as follows: by "concealing" certain observations, we map the estimation problem onto a lower-dimensional binomial space. Once these occluded SBEs have been computed, we demonstrate how the overall Multinomial SBE (MSBE) can be obtained by mapping several lower-dimensional estimates onto the original higher-dimensional space. We formally prove and experimentally demonstrate the convergence of the corresponding estimates.
- Date Created:
- 2016-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Labiche, Yvan and Barros, Márcio
- Date Created:
- 2015-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Polk, Spencer and Oommen, B. John
- Abstract:
- This paper pioneers the avenue of enhancing a well-known paradigm in game playing, namely the use of History-based heuristics, with a totally unrelated area of computer science, the field of Adaptive Data Structures (ADSs). It is a well-known fact that highly-regarded game playing strategies, such as alpha-beta search, benefit strongly from proper move ordering, and from this perspective, the History heuristic is probably one of the most acclaimed techniques used to achieve AI-based game playing. Recently, the authors of this present paper have shown that techniques derived from the field of ADSs, which are concerned with query optimization in a data structure, can be applied to move ordering in multi-player games. This was accomplished by ranking opponent threat levels. The work presented in this paper seeks to extend the utility of ADS-based techniques to two-player and multi-player games, through the development of a new move ordering strategy that incorporates the historical advantages of the moves. The resultant technique, the History-ADS heuristic, has been found to produce substantial (i.e., even up to 70%) savings in a variety of two-player and multi-player games, at varying ply depths, and at both initial and midgame board states. As far as we know, results of this nature have not been reported in the literature before. (An illustrative sketch of an adaptive move list follows this record.)
- Date Created:
- 2015-01-01
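To make the combination above concrete, the sketch below keeps candidate moves in an adaptive list updated with the classic move-to-front rule: whenever a move produces a cutoff it is promoted, and subsequent nodes examine moves in list order. The move representation and promotion rule here are illustrative choices of ours; the paper's History-ADS heuristic may use a different ADS update rule and a different integration with the search.

```python
class AdaptiveMoveList:
    """Adaptive list of moves; promising moves migrate toward the front."""

    def __init__(self):
        self._moves = []                     # most promising first

    def order(self, legal_moves):
        """Return legal_moves sorted so that moves near the front of the
        adaptive list are searched first (unseen moves keep their order)."""
        rank = {m: i for i, m in enumerate(self._moves)}
        return sorted(legal_moves, key=lambda m: rank.get(m, len(self._moves)))

    def reward(self, move):
        """Move-to-front update, applied when `move` caused a cutoff."""
        if move in self._moves:
            self._moves.remove(move)
        self._moves.insert(0, move)

# Inside an alpha-beta style search one would write, roughly:
#   for move in history_ads.order(generate_moves(state)):
#       ...
#       if value >= beta:
#           history_ads.reward(move)
#           break
history_ads = AdaptiveMoveList()
history_ads.reward(("e2", "e4"))
history_ads.reward(("g1", "f3"))
print(history_ads.order([("a2", "a3"), ("e2", "e4"), ("g1", "f3")]))
# [('g1', 'f3'), ('e2', 'e4'), ('a2', 'a3')]
```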
- Resource Type:
- Conference Proceeding
- Creator:
- Oommen, B. John and Astudillo, César A.
- Abstract:
- We present a method that employs a tree-based Neural Network (NN) for performing classification. The novel mechanism, apart from incorporating the information provided by unlabeled and labeled instances, re-arranges the nodes of the tree as per the laws of Adaptive Data Structures (ADSs). In particular, we investigate the Pattern Recognition (PR) capabilities of the Tree-Based Topology-Oriented SOM (TTOSOM) when Conditional Rotations (CONROT) [8] are incorporated into the learning scheme. The learning methodology inherits all the properties of the TTOSOM-based classifier designed in [4]. However, we now augment it with the property that frequently accessed nodes are moved closer to the root of the tree. Our experimental results show that, on average, the classification capabilities of our proposed strategy are reasonably comparable to those obtained by some of the state-of-the-art classification schemes that only use labeled instances during the training phase. The experiments also show that improved levels of accuracy can be obtained by imposing trees with a larger number of nodes. (A simplified sketch of the rotation mechanism follows this record.)
- Date Created:
- 2015-01-01
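The "frequently accessed nodes move closer to the root" behaviour mentioned above comes from conditional rotations on the underlying tree. A heavily simplified flavour of that idea is sketched below on a plain binary tree: each access increments a counter, and a node that becomes more frequently accessed than its parent is rotated one level up. The actual CONROT criterion and the TTOSOM training loop are more involved; this only illustrates the restructuring mechanism under our own simplified condition.

```python
class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None
        self.count = 0                      # access frequency

def rotate_up(x):
    """Single rotation lifting x above its parent (preserves in-order)."""
    p, g = x.parent, x.parent.parent
    if p.left is x:                         # right rotation
        p.left, x.right = x.right, p
        if p.left: p.left.parent = p
    else:                                   # left rotation
        p.right, x.left = x.left, p
        if p.right: p.right.parent = p
    p.parent, x.parent = x, g
    if g:
        if g.left is p: g.left = x
        else: g.right = x

def access(x):
    """Record an access; rotate x toward the root if it is now 'hotter' than its parent."""
    x.count += 1
    if x.parent is not None and x.count > x.parent.count:
        rotate_up(x)

# Tiny demo: repeatedly accessing a deep node drags it toward the root.
root = Node("m"); root.left = Node("d", root); root.left.left = Node("a", root.left)
hot = root.left.left
for _ in range(3):
    access(hot)
print(hot.parent is None)   # True: the frequently accessed node is now the root
```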
- Resource Type:
- Conference Proceeding
- Creator:
- Tavasoli, Hanane, Oommen, B. John, and Yazidi, Anis
- Abstract:
- In this paper, we propose a novel online classifier for complex data streams that are generated from non-stationary stochastic properties. Instead of using a single training model and counters to keep important data statistics, the introduced online classifier scheme provides a real-time self-adjusting learning model. The learning model utilizes the multiplication-based update algorithm of the Stochastic Learning Weak Estimator (SLWE) at each time instant as a new labeled instance arrives. In this way, the data statistics are updated every time a new element is inserted, without requiring that we rebuild the model when changes occur in the data distributions. Finally, and most importantly, the model operates with the understanding that the correct classes of previously-classified patterns become available at a later juncture, some time instants after classification, thus requiring us to update the training set and the training model. The results obtained from a rigorous empirical analysis on multinomial distributions are remarkable: they demonstrate the applicability of our method on synthetic datasets and prove the advantages of the introduced scheme. (A sketch of the SLWE update follows this record.)
- Date Created:
- 2016-01-01
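The multiplication-based update referred to above can be illustrated for a multinomial stream: with a learning parameter λ close to 1, every component of the estimate is shrunk multiplicatively and the observed symbol receives the freed-up mass, so the estimator tracks a distribution that changes over time. The update rule below is a commonly cited form of the SLWE and is offered as an assumption rather than a quotation of the paper; the paper's classifier wraps such an estimator in a larger online-labelling scheme not reproduced here.

```python
import random

def slwe_update(p, observed, lam=0.95):
    """One SLWE-style step for a multinomial estimate p (a list summing to 1):
    shrink every component multiplicatively, then give the observed symbol
    the freed-up mass so the vector still sums to 1."""
    q = [lam * pi for pi in p]
    q[observed] += 1.0 - lam
    return q

random.seed(1)
p = [1 / 3] * 3
true_dist = [0.7, 0.2, 0.1]
for t in range(2000):
    if t == 1000:
        true_dist = [0.1, 0.2, 0.7]          # non-stationary switch in the stream
    x = random.choices(range(3), true_dist)[0]
    p = slwe_update(p, x)
print([round(pi, 2) for pi in p])            # tracks the post-switch distribution
```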
- Resource Type:
- Conference Proceeding
- Creator:
- Yazidi, Anis, Oommen, B. John, and Hammer, Hugo Lewi
- Abstract:
- The problem of clustering, or unsupervised classification, has been solved by a myriad of techniques, all of which depend, either directly or implicitly, on the Bayesian principle of optimal classification. To be more specific, within a Bayesian paradigm, if one is to compare the testing sample with only a single point in the feature space from each class, the optimal Bayesian strategy would be to achieve this based on the distance from the corresponding means or central points in the respective distributions. When this principle is applied in clustering, one would assign an unassigned sample to the cluster whose mean is the closest, and this can be done in either a bottom-up or a top-down manner. This paper pioneers a clustering achieved in an "Anti-Bayesian" manner, and is based on the breakthrough classification paradigm pioneered by Oommen et al., which relies on a radically different approach for classifying data points based on the non-central quantiles of the distributions. Surprisingly and counter-intuitively, this turns out to work as well, or nearly as well, as an optimal supervised Bayesian scheme, which begs the natural extension to the unexplored arena of clustering. Our algorithm can be seen as the Anti-Bayesian counterpart of the well-known k-means algorithm (the fundamental Anti-Bayesian paradigm need not be tied to the k-means principle; rather, we hypothesize that it can be adapted to any of the scores of techniques that are indirectly based on the Bayesian paradigm), where we assign points to clusters using quantiles rather than the clusters' centroids. Extensive experimentation (this paper contains the prima facie results of experiments done on one- and two-dimensional data; the extensions to multi-dimensional data are not included in the interest of space, and would use the corresponding multi-dimensional Anti-Naïve-Bayes classification rules given in [1]) demonstrates that our Anti-Bayesian clustering converges fast and with precision results competitive to a k-means clustering. (A toy illustration of the quantile-based assignment follows this record.)
- Date Created:
- 2015-01-01
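One way to read the quantile-based assignment idea above is the following toy one-dimensional loop: each cluster is summarized not by its mean but by a symmetric pair of non-central quantiles, and a point joins the cluster whose nearer quantile is closest. This is a loose illustration under our own simplifying assumptions (1-D data, 25th/75th percentiles, k-means-style iteration); it is not the authors' algorithm.

```python
import random
import statistics

def quantile_pair(values, q=0.25):
    """Return the (q, 1-q) non-central quantiles of a non-empty list."""
    s = sorted(values)
    return s[int(q * (len(s) - 1))], s[int((1 - q) * (len(s) - 1))]

def quantile_clustering_1d(data, k=2, iters=20, q=0.25):
    random.seed(0)
    reps = [quantile_pair([x], q) for x in random.sample(data, k)]   # degenerate start
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            # assign to the cluster whose *nearer* quantile is closest
            j = min(range(k), key=lambda i: min(abs(x - reps[i][0]), abs(x - reps[i][1])))
            clusters[j].append(x)
        reps = [quantile_pair(c, q) if c else reps[i] for i, c in enumerate(clusters)]
    return clusters, reps

data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(6, 1) for _ in range(200)]
clusters, reps = quantile_clustering_1d(data, k=2)
print([round(statistics.mean(c), 2) for c in clusters if c], reps)
```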
- Resource Type:
- Conference Proceeding
- Creator:
- Oommen, B. John and Polk, Spencer
- Abstract:
- The field of game playing is a particularly well-studied area within the context of AI, leading to the development of powerful techniques, such as the alpha-beta search, capable of achieving competitive game play against an intelligent opponent. It is well known that tree pruning strategies, such as alpha-beta, benefit strongly from proper move ordering, that is, searching the best element first. Inspired by the formerly unrelated field of Adaptive Data Structures (ADSs), we have previously introduced the History-ADS technique, which employs an adaptive list to achieve effective and dynamic move ordering in a domain-independent fashion, and found that it performs well in a wide range of cases. However, previous work did not compare the performance of the History-ADS heuristic to any established move ordering strategy. To address this problem, we present here a comparison to two well-known, acclaimed strategies that operate on a philosophy similar to the History-ADS: the History Heuristic and the Killer Moves technique. We find that, in a wide range of two-player and multi-player games, at various points in the game's progression, the History-ADS performs at least as well as these strategies and, in fact, outperforms them in the majority of cases. (A sketch of the classic History Heuristic is given after this record.)
- Date Created:
- 2016-01-01
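For readers unfamiliar with the baselines named above, the classic History Heuristic keeps a table of scores indexed by move, bumps a move's score (typically by the square of the search depth) whenever it causes a cutoff anywhere in the tree, and orders moves by score at every node. A minimal sketch of that table follows; the Killer Moves technique (remembering a couple of cutoff moves per ply) and the paper's History-ADS variant are not reproduced here, and the move encoding is a placeholder.

```python
from collections import defaultdict

class HistoryTable:
    """Classic History Heuristic bookkeeping for alpha-beta move ordering."""

    def __init__(self):
        self.score = defaultdict(int)        # move -> accumulated history score

    def record_cutoff(self, move, depth):
        """Called when `move` produced a beta cutoff at the given search depth."""
        self.score[move] += depth * depth    # deeper cutoffs count for more

    def order(self, legal_moves):
        """Search higher-scoring moves first."""
        return sorted(legal_moves, key=lambda m: self.score[m], reverse=True)

history = HistoryTable()
history.record_cutoff(("e2", "e4"), depth=6)
history.record_cutoff(("d2", "d4"), depth=3)
print(history.order([("a2", "a3"), ("d2", "d4"), ("e2", "e4")]))
# [('e2', 'e4'), ('d2', 'd4'), ('a2', 'a3')]
```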
- Resource Type:
- Conference Proceeding
- Creator:
- Guo, Yuhong and Li, Xin
- Abstract:
- Semantic scene classification is a challenging problem in computer vision. In this paper, we present a novel multi-level active learning approach to reduce the human annotation effort for training robust scene classification models. Different from most existing active learning methods, which can only query labels for selected instances at the target categorization level, i.e., the scene class level, our approach establishes a semantic framework that predicts scene labels based on a latent object-based semantic representation of images, and can query labels at two different levels: the target scene class level (abstractive high level) and the latent object class level (semantic middle level). Specifically, we develop an adaptive active learning strategy to perform multi-level label query, which maintains the default label query at the target scene class level but switches to the latent object class level whenever an "unexpected" target class label is returned by the labeler. We conduct experiments on two standard scene classification datasets to investigate the efficacy of the proposed approach. Our empirical results show the proposed adaptive multi-level active learning approach can outperform both baseline active learning methods and a state-of-the-art multi-level active learning method.
- Date Created:
- 2014-01-01
- Resource Type:
- Conference Proceeding
- Creator:
- Peng, Mengfei, Shi, Wei, Croft, William Lee, and Corriveau, Jean-Pierre
- Abstract:
- New threats to networks are constantly arising. This justifies protecting network assets and mitigating the risk associated with attacks. In a distributed environment, researchers aim, in particular, at eliminating faulty network entities. More specifically, much research has been conducted on locating a single static black hole, which is defined as a network site whose existence is known a priori and that disposes of any incoming data without leaving any trace of this occurrence. However, the prevalence of faulty nodes requires an algorithm able to (a) identify faulty nodes that can be repaired without human intervention and (b) locate black holes, which are taken to be faulty nodes whose repair does require human intervention. In this paper, we consider a specific attack model that involves multiple faulty nodes that can be repaired by mobile software agents, as well as a virus v that can infect a previously repaired faulty node and turn it into a black hole. We refer to the task of repairing multiple faulty nodes and pointing out the location of the black hole as the Faulty Node Repair and Dynamically Spawned Black Hole Search. We first analyze the attack model we put forth. We then explain (a) how to identify whether a node is either (1) a normal node, (2) a repairable faulty node, or (3) the black hole that has been infected by virus v during the search/repair process and, (b) how to perform the correct relevant actions. These two steps constitute a complex task, which, we explain, significantly differs from the traditional Black Hole Search. We continue by proposing an algorithm to solve this problem in an asynchronous ring network with only one whiteboard (which resides in a node called the homebase). We prove the correctness of our solution and analyze its complexity by both theoretical analysis and experimental evaluation. We conclude that, using our proposed algorithm, b + 4 agents can repair all faulty nodes and locate the black hole infected by a virus v within finite time. Our algorithm works even when the number of faulty nodes b is unknown a priori.
- Date Created:
- 2017-01-01
- Resource Type:
- Article
- Creator:
- Sack, Jörg-Rüdiger, Maheshwari, Anil, and Lingas, A.
- Abstract:
- We provide optimal parallel solutions to several link-distance problems set in trapezoided rectilinear polygons. All our main parallel algorithms are deterministic and designed to run on the exclusive read exclusive write parallel random access machine (EREW PRAM). Let P be a trapezoided rectilinear simple polygon with n vertices. In O(log n) time using O(n/log n) processors we can optimally compute: 1. Minimum rectilinear link paths, or shortest paths in the L1 metric, from any point in P to all vertices of P. 2. Minimum rectilinear link paths from any segment inside P to all vertices of P. 3. The rectilinear window (histogram) partition of P. 4. Both covering radii and vertex intervals for any diagonal of P. 5. A data structure to support rectilinear link-distance queries between any two points in P (queries can be answered optimally in O(log n) time by a single processor). Our solution to 5 is based on a new linear-time sequential algorithm for this problem, which is also provided here; this improves on the previously best-known sequential algorithm for this problem, which used O(n log n) time and space. We develop techniques for solving link-distance problems in parallel which are expected to find applications in the design of other parallel computational geometry algorithms. We employ these parallel techniques, for example, to compute (on a CREW PRAM) optimally the link diameter, the link center, and the central diagonal of a rectilinear polygon.
- Date Created:
- 1995-09-01
- Resource Type:
- Article
- Creator:
- Bose, Prosenjit, Overmars, M., Wilfong, G., Toussaint, G., Garcia-Lopez, J., Zhu, B., Asberg, B., and Blanco, G.
- Abstract:
- We study the feasibility of design for a layer-deposition manufacturing process called stereolithography which works by controlling a vertical laser beam which when targeted on a photocurable liquid causes the liquid to harden. In order to understand the power as well as the limitations of this manufacturing process better, we define a mathematical model of stereolithography (referred to as vertical stereolithography) and analyze the class of objects that can be constructed under the assumptions of the model. Given an object (modeled as a polygon or a polyhedron), we give algorithms that decide in O(n) time (where n is the number of vertices in the polygon or polyhedron) whether or not the object can be constructed by vertical stereolithography. If the answer is in the affirmative, the algorithm reports a description of all the orientations in which the object can be made. We also show that the objects built with vertical stereolithography are precisely those that can be made with a 3-axis NC machine. We then define a more flexible model that more accurately reflects the actual capabilities of stereolithography (referred to as variable-angle stereolithography) and again study the class of feasible objects for this model. We give an O(n)-time algorithm for polygons and O(n log n)- as well as O(n)-time algorithms for polyhedra. We show that objects formed with variable-angle stereolithography can also be constructed using another manufacturing process known as gravity casting. Furthermore, we show that the polyhedral objects formed by vertical stereolithography are closely related to polyhedral terrains which are important structures in geographic information systems (GIS) and computational geometry. In fact, an object built with variable-angle stereolithography resembles a terrain with overhangs, thus initiating the study of more realistic terrains than the standard ones considered in geographic information systems. Finally, we relate our results to the area of grasping in robotics by showing that the polygonal and polyhedral objects that can be built by vertical stereolithography can be clamped by parallel jaw grippers with any positive-sized gripper.
- Date Created:
- 1997-01-01
- Resource Type:
- Article
- Creator:
- Yan, Donghang, Wang, Zhiyuan, Yu, Hongan, Wu, Xianguo, and Zhang, Jidong
- Abstract:
- A near infrared (NIR) electrochromic attenuator based on a dinuclear ruthenium complex and polycrystalline tungsten oxide was fabricated and characterized. The results show that the use of the NIR-absorbing ruthenium complex as a counter-electrode material can improve the device performance. By replacing the visible electrochromic ferrocene with the NIR-absorbing ruthenium complex, the optical attenuation at 1550 nm was enhanced from 19.1 to 30.0 dB and the color efficiency also increased from 29.2 to 121.2 cm²/C.
- Date Created:
- 2005-12-01
- Resource Type:
- Article
- Creator:
- Wiener, Michael J., Van Oorschot, Paul C., and Diffie, Whitfield
- Abstract:
- We discuss two-party mutual authentication protocols providing authenticated key exchange, focusing on those using asymmetric techniques. A simple, efficient protocol referred to as the station-to-station (STS) protocol is introduced, examined in detail, and considered in relation to existing protocols. The definition of a secure protocol is considered, and desirable characteristics of secure protocols are discussed. (A structural sketch of the STS message flow follows this record.)
- Date Created:
- 1992-06-01
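The station-to-station protocol analysed above layers signatures and a key-confirmation step over a Diffie-Hellman exchange. The sketch below shows only the three-message flow and the key derivation, with a toy group (a Mersenne prime, chosen just so the arithmetic runs), HMAC standing in for the "sign-then-encrypt" confirmation, and no certificate handling. It is a structural illustration under those stated assumptions, not a secure or faithful implementation of STS.

```python
import hashlib
import hmac
import secrets

# Toy group parameters: not a proper DH group, illustration only.
P = 2**521 - 1      # a Mersenne prime
G = 5

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def confirm(shared_key: bytes, *exponentials: int) -> bytes:
    """Stand-in for STS's signed-and-encrypted confirmation of both exponentials."""
    msg = b"|".join(str(e).encode() for e in exponentials)
    return hmac.new(shared_key, msg, hashlib.sha256).digest()

# Message 1: A -> B : g^x
x, gx = dh_keypair()
# Message 2: B -> A : g^y, confirmation over (g^y, g^x)
y, gy = dh_keypair()
k_b = hashlib.sha256(str(pow(gx, y, P)).encode()).digest()
tag_b = confirm(k_b, gy, gx)
# A derives the same key, checks B's confirmation, then replies.
k_a = hashlib.sha256(str(pow(gy, x, P)).encode()).digest()
assert hmac.compare_digest(tag_b, confirm(k_a, gy, gx))
# Message 3: A -> B : confirmation over (g^x, g^y)
tag_a = confirm(k_a, gx, gy)
assert hmac.compare_digest(tag_a, confirm(k_b, gx, gy))
print("toy STS-style exchange complete; session key =", k_a.hex()[:16], "...")
```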
- Resource Type:
- Article
- Creator:
- Wiener, Michael J. and Van Oorschot, Paul C.
- Abstract:
- A simple new technique of parallelizing methods for solving search problems which seek collisions in pseudorandom walks is presented. This technique can be adapted to a wide range of cryptanalytic problems which can be reduced to finding collisions. General constructions are given showing how to adapt the technique to finding discrete logarithms in cyclic groups, finding meaningful collisions in hash functions, and performing meet-in-the-middle attacks such as a known-plaintext attack on double encryption. The new technique greatly extends the reach of practical attacks, providing the most cost-effective means known to date for defeating: the small subgroup used in certain schemes based on discrete logarithms such as Schnorr, DSA, and elliptic curve cryptosystems; hash functions such as MD5, RIPEMD, SHA-1, MDC-2, and MDC-4; and double encryption and three-key triple encryption. The practical significance of the technique is illustrated by giving the design for three $10 million custom machines which could be built with current technology: one finds elliptic curve logarithms in GF(2^155), thereby defeating a proposed elliptic curve cryptosystem in expected time 32 days; the second finds MD5 collisions in expected time 21 days; and the last recovers a double-DES key from two known plaintexts in expected time 4 years, which is four orders of magnitude faster than the conventional meet-in-the-middle attack on double-DES. Based on this attack, double-DES offers only 17 more bits of security than single-DES. (A toy demonstration of collision search with distinguished points follows this record.)
- Date Created:
- 1999-01-01
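The central trick in the abstract above, many independent pseudorandom walks that report only "distinguished points" so that collisions between walks can be detected with little memory and natural parallelism, can be demonstrated on a deliberately tiny problem: finding a collision in a hash truncated to 32 bits. The walks run sequentially here for simplicity; in the paper's setting each walk would run on a separate processor and the distinguished-point table would be shared. Parameters (truncation, distinguishing property) are arbitrary choices for the demo.

```python
import hashlib

BITS = 32
DP_MASK = (1 << 8) - 1          # a point is "distinguished" if its low 8 bits are zero

def f(x: int) -> int:
    """Pseudorandom step function: a hash truncated to BITS bits."""
    d = hashlib.sha256(x.to_bytes(8, "little")).digest()
    return int.from_bytes(d[:4], "little") & ((1 << BITS) - 1)

def walk(start: int, max_len: int = 1 << 20):
    """Iterate f from `start` until a distinguished point is reached."""
    x, steps = start, 0
    while x & DP_MASK or steps == 0:
        x = f(x)
        steps += 1
        if steps >= max_len:
            return None
    return x, steps

def locate_collision(s1, n1, s2, n2):
    """Re-walk two trails ending at the same distinguished point and return
    two inputs whose images under f coincide."""
    if n1 > n2:
        s1, n1, s2, n2 = s2, n2, s1, n1
    for _ in range(n2 - n1):            # advance the longer trail first
        s2 = f(s2)
    while f(s1) != f(s2):
        s1, s2 = f(s1), f(s2)
    return s1, s2

table = {}                              # distinguished point -> (start, trail length)
start = 0
while True:
    result = walk(start)
    if result is not None:
        dp, steps = result
        if dp in table and table[dp][0] != start:
            a, b = locate_collision(*table[dp], start, steps)
            if a != b:                  # rule out the trivial "same trail" case
                assert f(a) == f(b)
                print(f"collision: f({a}) == f({b}) == {f(a)}")
                break
        table[dp] = (start, steps)
    start += 1                          # next walk (the next "processor" in the parallel version)
```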
- Resource Type:
- Article
- Creator:
- Morin, Pat, Hurtado, Ferran, Bose, Prosenjit, and Carmi, Paz
- Abstract:
- We prove that, for every simple polygon P having k ≥ 1 reflex vertices, there exists a point q ∈ P such that every half-polygon that contains q contains nearly 1/(2(k + 1)) times the area of P. We also give a family of examples showing that this result is the best possible.
- Date Created:
- 2011-04-01
- Resource Type:
- Article
- Creator:
- Hayes, M. John, Langlois, Robert, and Weiss, Abraham
- Abstract:
- Conventional training simulators commonly use a hexapod configuration to provide motion cues. While widely used, studies have shown that hexapods are incapable of producing the range of motion required to achieve high fidelity simulation required in many applications. A novel alternative is the Atlas motion platform. This paper presents a new generalized kinematic model of the platform which can be applied to any spherical platform actuated by three omnidirectional wheels. In addition, conditions for slip-free and singularity-free motions are identified. Two illustrative examples are given for different omnidirectional wheel configurations.
- Date Created:
- 2011-02-01
- Resource Type:
- Article
- Creator:
- Adler, Andy, Loyka, Sergey, and Youmaran, Richard
- Date Created:
- 2009-01-01
- Resource Type:
- Article
- Creator:
- Lever, Rosemary, Ouellette, Gene, Pagan, Stephanie, and Sénéchal, Monique
- Abstract:
- The goal of the present intervention research was to test whether guided invented spelling would facilitate entry into reading for at-risk kindergarten children. The 56 participating children had poor phoneme awareness, and as such, were at risk of having difficulty acquiring reading skills. Children were randomly assigned to one of three training conditions: invented spelling, phoneme segmentation, or storybook reading. All children participated in 16 small group sessions over eight weeks. In addition, children in the three training conditions received letter-knowledge training and worked on the same 40 stimulus words that were created from an array of 14 letters. The findings were clear: on pretest, there were no differences between the three conditions on measures of early literacy and vocabulary, but, after training, invented spelling children learned to read more words than did the other children. As expected, the phoneme-segmentation and invented-spelling children were better on phoneme awareness than were the storybook-reading children. Most interesting, however, both the invented spelling and the phoneme-segmentation children performed similarly on phoneme awareness suggesting that the differential effect on learning to read was not due to phoneme awareness per se. As such, the findings support the view that invented spelling is an exploratory process that involves the integration of phoneme and orthographic representations. With guidance and developmentally appropriate feedback, invented spelling provides a milieu for children to explore the relation between oral language and written symbols that can facilitate their entry in reading.
- Date Created:
- 2012-04-01
- Resource Type:
- Article
- Creator:
- Shao, Li-Yang, Albert, Jacques, Coyle, Jason P., and Barry, Seán T.
- Abstract:
- The conformal coating of a 50 nm-thick layer of copper nanoparticles deposited with pulse chemical vapor deposition of a copper (I) guanidinate precursor on the cladding of a single mode optical fiber was monitored by using a tilted fiber Bragg grating (TFBG) photo-inscribed in the fiber core. The pulse-per-pulse growth of the copper nanoparticles is readily obtained from the position and amplitudes of resonances in the reflection spectrum of the grating. In particular, we confirm that the real part of the effective complex permittivity of the deposited nano-structured copper layer is an order of magnitude larger than that of a bulk copper film at an optical wavelength of 1550 nm. We further observe a transition in the growth behavior from granular to continuous film (as determined from the complex material permittivity) after approximately 20 pulses (corresponding to an effective thickness of 25 nm). Finally, despite the remaining granularity of the film, the final copper-coated optical fiber is shown to support plasmon waves suitable for sensing, even after the growth of a thin oxide layer on the copper surface.
- Date Created:
- 2011-06-01
- Resource Type:
- Article
- Creator:
- Albert, Jacques, Dakka, Milad A., Shevchenko, Yanina, and Chen, Chengkun
- Abstract:
- We show that the tilted-grating-assisted excitation of surface plasmon polaritons on gold coated single-mode optical fibers depends strongly on the state of polarization of the core-guided light, even in fibers with cylindrical symmetry. Rotating the linear polarization of the guided light by 90° relative to the grating tilt plane is sufficient to turn the plasmon resonances on and off with more than 17 dB of extinction ratio. By monitoring the amplitude changes of selected individual cladding mode resonances we identify what we believe to be a new refractive index measurement method that is shown to be accurate to better than 5 × 10⁻⁵.
- Date Created:
- 2010-03-01
- Resource Type:
- Article
- Creator:
- LeFevre, Jo-Anne and Sénéchal, Monique
- Abstract:
- One hundred and ten English-speaking children schooled in French were followed from kindergarten to Grade 2 (mean ages: T1 = 5;6, T2 = 6;4, T3 = 6;11, T4 = 7;11). The findings provided strong support for the Home Literacy Model (Sénéchal & LeFevre, 2002) because in this sample the home language was independent of the language of instruction. The informal literacy environment at home predicted growth in English receptive vocabulary from kindergarten to Grade 1, whereas parent reports of the formal literacy environment in kindergarten predicted growth in children's English early literacy between kindergarten and Grade 1 and growth in English word reading during Grade 1. Furthermore, 76% of parents adjusted their formal literacy practices according to the reading performance of their child, in support of the presence of a responsive home literacy curriculum among middle-class parents.
- Date Created:
- 2014-01-01
- Resource Type:
- Article
- Creator:
- Driedger, Michael and Wolfart, Johannes
- Abstract:
- In this special issue of Nova Religio four historians of medieval and early modern Christianities offer perspectives on basic conceptual frameworks widely employed in new religions studies, including modernization and secularization, radicalism/violent radicalization, and diversity/diversification. Together with a response essay by J. Gordon Melton, these articles suggest strong possibilities for renewed and ongoing conversation between scholars of "old" and "new" religions. Unlike some early discussions, ours is not aimed simply at questioning the distinction between old and new religions itself. Rather, we think such conversation between scholarly fields holds the prospect of productive scholarly surprise and perspectival shifts, especially via the disciplinary practice of historiographical criticism.
- Date Created:
- 2018-05-01
- Resource Type:
- Article
- Creator:
- Fast, Stewart, Saner, Marc, and Brklacich, Michael
- Abstract:
- The new renewable fuels standard (RFS 2) aims to distinguish corn-ethanol that achieves a 20% reduction in greenhouse gas (GHG) emissions compared with gasoline. Field data from Kim et al. (2009) and from our own study suggest that geographic variability in the GHG emissions arising from corn production casts considerable doubt on the approach used in the RFS 2 to measure compliance with the 20% target. If regulators wish to require compliance of fuels with specific GHG emission reduction thresholds, then data from growing biomass should be disaggregated to a level that captures the level of variability in grain corn production and the application of life cycle assessment to biofuels should be modified to capture this variability.
- Date Created:
- 2012-05-01
- Resource Type:
- Article
- Creator:
- Quastel, Noah and Mendez, Pablo
- Abstract:
- This article draws on Margaret Radin's theorization of 'contested commodities' to explore the process whereby informal housing becomes formalized while also being shaped by legal regulation. In seeking to move once-informal housing into the domain of official legality, cities can seldom rely on a simple legal framework of private-law principles of property and contract. Instead, they face complex trade-offs between providing basic needs and affordability and meeting public-law norms around living standards, traditional neighbourhood feel and the environment. This article highlights these issues through an examination of the uneven process of legal formalization of basement apartments in Vancouver, Canada. We chose a lengthy period, from 1928 to 2009, to explore how basement apartments became a vital source of housing often at odds with city planning that has long favoured a low-density residential built form. We suggest that Radin's theoretical account makes it possible to link legalization and official market construction with two questions: whether to permit commodification and how to permit commodification. Real-world commodification processes, including legal sanction, reflect hybridization, pragmatic decision making and regulatory compromise. The resolution of questions concerning how to legalize commodification is also intertwined with processes of market expansion.
- Date Created:
- 2015-11-01
- Resource Type:
- Article
- Creator:
- Khalaf, Lynda A., Kichian, Maral, Bernard, Jean-Thomas, and Dufour, Jean-Marie
- Abstract:
- We test for the presence of time-varying parameters (TVP) in the long-run dynamics of energy prices for oil, natural gas and coal, within a standard class of mean-reverting models. We also propose residual-based diagnostic tests and examine out-of-sample forecasts. In-sample LR tests support the TVP model for coal and gas but not for oil, though companion diagnostics suggest that the model is too restrictive to conclusively fit the data. Out-of-sample analysis suggests a random-walk specification for oil price, and TVP models for both real-time forecasting in the case of gas and long-run forecasting in the case of coal.
- Date Created:
- 2012-06-01
- Resource Type:
- Article
- Creator:
- Mendez, Pablo
- Abstract:
- This paper asks whether age at arrival matters when it comes to home-ownership attainment among immigrants, paying particular attention to householders' self-identification as a visible minority. Combining methods that were developed separately in the immigrant housing and the immigrant offspring literatures, this study shows the importance of recognising generational groups based on age at arrival, while also accounting for the interacting effects of current age (or birth cohorts) and arrival cohorts. The paper advocates a (quasi-)longitudinal approach to studying home-ownership attainment among immigrants and their foreign-born offspring. Analysis of data from the Canadian Census reveals that foreign-born householders who immigrated as adults in the 1970s and the 1980s are more likely to be home-owners than their counterparts who immigrated at a younger age when they self-identify as South Asian or White, but not always so when they self-identify as Chinese or as ‘other visible minority’. The same bifurcated pattern recurs between householders who immigrated at secondary-school age and those who were younger upon arrival. Age at arrival therefore emerges as a variable of significance to help explain differences in immigrant housing outcomes, and should be taken into account in future studies of immigrant home-ownership attainment.
- Date Created:
- 2009-01-01
- Resource Type:
- Article
- Creator:
- Urrutia, J., Opatrny, J., Chávez, E., Dobrev, S., Stacho, L., and Kranakis, Evangelos
- Abstract:
- We address the problem of discovering routes in strongly connected planar geometric networks with directed links. Motivated by the necessity for establishing communication in wireless ad hoc networks in which the only information available to a vertex is its immediate neighborhood, we are considering routing algorithms that use the neighborhood information of a vertex for routing with constant memory only. We solve the problem for three types of directed planar geometric networks: Eulerian (in which every vertex has the same number of incoming and outgoing edges), Outerplanar (in which a single face contains all vertices of the network), and Strongly Face Connected, a new class of geometric networks that we define in the article, consisting of several faces, each face being a strongly connected outerplanar graph.
- Date Created:
- 2006-08-01
- Resource Type:
- Report
- Creator:
- Labiche, Yvan and Shafique, Muhammad
- Abstract:
- Model-based testing (MBT) is about testing a software system by using a model of its behaviour. To benefit fully from MBT, automation support is required. This paper presents a systematic review of prominent MBT tool support where we focus on tools that rely on state-based models. The systematic review protocol precisely describes the scope of the search and the steps involved in tool selection. Precisely defined criteria are used to compare selected tools and comprise support for test coverage criteria, level of automation for various testing activities, and support for the construction of test scaffolding. The results of this review should be of interest to a wide range of stakeholders: software companies interested in selecting the most appropriate MBT tool for their needs; organizations willing to invest into creating MBT tool support; researchers interested in setting research directions.
- Date Created:
- 2010-05-01
- Resource Type:
- Conference Proceeding
- Creator:
- Whitehead, Anthony D.
- Abstract:
- There have been a number of steganography embedding techniques proposed over the past few years. In turn, there has been great interest in steganalysis techniques as the embedding techniques improve. Specifically, universal steganalysis techniques have become more attractive since they work independently of the embedding technique. In this work, we examine the effectiveness of a basic universal technique that relies on some knowledge about the cover media, but not the embedding technique. We consider images as a cover media, and examine how a single technique that we call steganographic sanitization performs on 26 different steganography programs that are publicly available on the Internet. Our experiments are completed using a number of secret messages and a variety of different levels of sanitization. However, since our intent is to remove covert communication, and not authentication information, we examine how well the sanitization process preserves authentication information such as watermarks and digital fingerprints.
- Date Created:
- 2005-12-01
- Resource Type:
- Article
- Creator:
- Kovalio, Jacob
- Date Created:
- 2001-07-01