For computer-generated imagery to recreate the characteristic visual appearance of phenomena such as smoke and fog, the interaction of light with participating media must be computed. In this work we present a novel technique for computing volumetric single-scattering lighting solutions for particle-based inhomogeneous participating media data sets. We target particle-based data sets because, compared to uniform grids, they are spatially unbounded and relatively unrestricted with regard to memory. To perform the calculations required for such a lighting solution, we introduce an octree-based data structure that we call a density octree, designed to compute light attenuation efficiently throughout the spatial extent. Using this data structure, we produce high-quality output imagery of arbitrary particle data sets in the presence of arbitrary numbers of lights.
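As a rough illustration of the idea (not the paper's actual data structure), the sketch below stores a density per octree cell and estimates single-scattering attenuation along a ray segment by marching through the tree and accumulating optical depth. All names (`DensityNode`, `attenuation`) and the fixed-step marching scheme are assumptions for illustration.

```python
import math

class DensityNode:
    """Hypothetical octree node storing the mean extinction density of its region."""
    def __init__(self, center, half, density=0.0):
        self.center, self.half = center, half
        self.density = density   # mean extinction coefficient in this cell
        self.children = None     # list of eight children when subdivided

    def density_at(self, p):
        # Descend to the leaf cell containing point p and return its density.
        node = self
        while node.children is not None:
            idx = sum((1 << i) for i in range(3) if p[i] >= node.center[i])
            node = node.children[idx]
        return node.density

def attenuation(root, a, b, steps=64):
    """Approximate exp(-optical depth) along the segment a -> b by ray marching."""
    dt = math.dist(a, b) / steps
    tau = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps
        p = [a[i] + t * (b[i] - a[i]) for i in range(3)]
        tau += root.density_at(p) * dt
    return math.exp(-tau)
```

With a single homogeneous cell of density 0.5 and a segment of length 2, the accumulated optical depth is 1.0, giving an attenuation of exp(-1).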
A number of steganography embedding techniques have been proposed over the past few years, and as they improve there has been growing interest in steganalysis techniques. Universal steganalysis techniques are particularly attractive because they work independently of the embedding technique. In this work, we examine the effectiveness of a basic universal technique that relies on some knowledge about the cover media, but not about the embedding technique. We consider images as the cover media and examine how a single technique, which we call steganographic sanitization, performs against 26 steganography programs publicly available on the Internet. Our experiments use a number of secret messages and a variety of sanitization levels. Because our intent is to remove covert communication, not authentication information, we also examine how well the sanitization process preserves authentication information such as watermarks and digital fingerprints.
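One common family of embedding techniques hides payload bits in the least-significant bits of image samples; a sanitizer can destroy such payloads by overwriting those bits while leaving the image visually unchanged. The sketch below illustrates this idea only; it is not the paper's sanitization method, and the function name and `strength` parameter are assumptions.

```python
import random

def sanitize_lsb(pixels, strength=1, seed=None):
    """Overwrite the low `strength` bits of each 8-bit sample with random bits,
    destroying any LSB-embedded payload while changing each sample by less
    than 2**strength. (Illustrative sketch, not the paper's technique.)"""
    rng = random.Random(seed)
    keep_mask = ~((1 << strength) - 1) & 0xFF
    return [(p & keep_mask) | rng.getrandbits(strength) for p in pixels]
```

Stronger sanitization (larger `strength`, re-quantization, or recompression) removes more embedding schemes at the cost of greater distortion, which is the trade-off the varying "levels of sanitization" in the experiments explore.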
In this work we discuss our efforts to use the ubiquity and mobility of smartphone systems to stream historical information about the user's current place on the earth. We propose the concept of timescapes to portray the historical significance of where the user is standing and to allow a brief travel through time. By combining GPS location with a rich media interpretation of existing historical documents, historical facts become an on-demand resource available to travellers, school children, historians, and any interested third party. To our knowledge this is the first introduction of the term timescape in the context of historical information pull.
We present a method of segmenting video to detect cuts with accuracy equal to or better than both histogram-based and other feature-based methods, while running faster than other feature-based methods. By tracking corner features rather than lines, we are able to reliably detect events such as cuts and fades, as well as salient frames. Experimental evidence shows that the method withstands high-motion situations better than existing methods. Initial implementations using full-sized video frames achieve processing rates of 10-30 frames per second, depending on the level of motion and the number of features being tracked; this includes the time to generate the decompressed MPEG frames.
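The intuition behind corner-based cut detection is that tracked corners survive from frame to frame within a shot but vanish almost entirely across a cut. The sketch below illustrates that intuition on precomputed per-frame corner lists; the survival-ratio formulation, function names, and threshold value are assumptions for illustration, not the paper's algorithm.

```python
def survival_ratio(prev, curr, radius=2.0):
    """Fraction of corners in `prev` that reappear within `radius` pixels in `curr`."""
    if not prev:
        return 1.0
    hits = sum(
        1 for (x, y) in prev
        if any((x - u) ** 2 + (y - v) ** 2 <= radius ** 2 for (u, v) in curr)
    )
    return hits / len(prev)

def detect_cuts(frames_corners, threshold=0.3):
    """Return indices i where a cut is declared between frame i and frame i+1,
    i.e. where too few tracked corners survive the transition."""
    return [
        i for i in range(len(frames_corners) - 1)
        if survival_ratio(frames_corners[i], frames_corners[i + 1]) < threshold
    ]
```

In practice a real implementation would obtain the corner lists from a detector and tracker over decompressed frames; high motion lowers the survival ratio gradually, whereas a cut drops it abruptly, which is why a threshold on the ratio separates the two cases.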