Program for Research in Computing and Information Sciences and Engineering



CISE Technical Lecture Series

CISE Lecture I – Dr. Yi Qian
January 10, 2003

Presentation Title: Resource Management and QoS Control for Next Generation Wireless Networks and High-Speed Networks

There are many system proposals for wireless-based multimedia communication networks that promise high capacity, high speed, and ease of access. Future generation wireless packet networks will support multimedia applications with diverse quality-of-service (QoS) requirements. Because the wireless access channels are usually the bottleneck of the end-to-end communication links in next generation wireless packet networks, there is a need to develop resource management and control schemes for wireless channels that provide QoS guarantees for heterogeneous traffic. The QoS control schemes also need to be simple to implement and manage. In this presentation, a brief survey of different wireless multimedia systems and their medium access control schemes will be given first. Then an example of wireless resource management and QoS control will be illustrated, focusing on packet scheduling on a reservation-based, on-demand TDMA/TDD wireless uplink channel that services integrated real-time traffic. A token bank fair queuing (TBFQ) packet scheduling mechanism will be presented that integrates the policing and servicing functions and keeps track of the usage of each connection. The performance of the TBFQ scheme is evaluated using computer simulations. It is shown that the proposed scheme results in low packet delay, jitter, and violation probability. The trade-offs between the parameters are examined. It is also demonstrated that TBFQ performs rather well even when traffic conditions deviate from the established contracts. Related publications and patents on this topic by the same author will be briefly presented after the detailed TBFQ packet-scheduling example. The presentation will conclude with possible future research topics in the areas of resource management and QoS control for wireless local area networks, broadband multimedia satellite networks, 3G wireless networks, high-speed networks, and beyond.
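As a rough illustration of the token-bank idea behind TBFQ, the following Python sketch keeps a per-connection token bank that is replenished at the contracted rate and debited as packets are served, and always serves the backlogged connection with the highest normalized balance. The class names, the priority rule, and the byte-based accounting are simplifying assumptions for illustration only, not the exact TBFQ algorithm presented in the talk.

```python
# Minimal sketch of a token-bank fair-queuing (TBFQ)-style scheduler.
# Simplified illustration of the core idea (per-connection token banks that
# track usage and drive service priority); parameter names are hypothetical.

from collections import deque

class Connection:
    def __init__(self, name, rate):
        self.name = name          # connection identifier
        self.rate = rate          # contracted token rate (bytes per slot)
        self.bank = 0.0           # token balance: >0 under-used, <0 over-used
        self.queue = deque()      # backlogged packet sizes in bytes

def tbfq_step(connections):
    """Serve one packet per TDMA slot and replenish token banks."""
    for c in connections:
        c.bank += c.rate          # replenish each bank at the contracted rate
    backlogged = [c for c in connections if c.queue]
    if not backlogged:
        return None
    # Pick the connection with the highest normalized balance, i.e. the one
    # that has consumed the least relative to its contract.
    chosen = max(backlogged, key=lambda c: c.bank / c.rate)
    size = chosen.queue.popleft()
    chosen.bank -= size           # debit the bank by the bytes served
    return chosen.name, size

# Example: a voice-like and a data-like connection sharing the uplink.
conns = [Connection("voice", rate=200), Connection("data", rate=800)]
conns[0].queue.extend([160] * 5)
conns[1].queue.extend([1000] * 5)
for slot in range(6):
    print(slot, tbfq_step(conns))
```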


CISE Lecture II – Dr. Miguel Velez
January 30, 2003

Unsupervised Feature Subset Selection using Matrix Factorization Methods with Application to Hyperspectral Imagery Band Subset Selection

Imaging spectrometry data, or hyperspectral imagery, is an imaging technology with potential applications in environmental remote sensing, biomedical imaging, and drug monitoring in the pharmaceutical industry. In a hyperspectral image, each pixel provides a high-spectral-resolution sample of the spectral response of the object within the field of view of the sensor. Therefore a hyperspectral image contains spectral as well as morphological information about the scene under study. A problem in developing information extraction algorithms for this type of data is the high dimensionality of the feature vector resulting from the data collection process. A single spectral signature sample consists of several hundred elements, and potential ultraspectral sensors are expected to have a few thousand. From a statistical modeling point of view, we need a significant amount of data to estimate the parameters of the distributions used to model data variability. For instance, if we use a Gaussian distribution, the number of parameters to fit is proportional to the square of the dimension of the feature vector; in a typical hyperspectral image of 200 bands we have over 20,000 parameters to estimate. A way to deal with this problem is dimensionality reduction. Typical dimensionality reduction in multivariate statistical analysis is done by principal component analysis (PCA), which applies a linear transformation to the data; in the case of imaging spectrometry this might not be desirable, since we lose the physical interpretability of the data in the transformation. Here we look at band subset selection, which is a special case of feature subset selection. In general, feature subset selection is a combinatorial optimization problem whose optimal solution is not practical to compute in this application. Our work focuses on developing dimensionality reduction methods that give a suboptimal (but still acceptable) solution in a short amount of time. In this presentation, we show how, by using the singular value decomposition and rank-revealing QR factorizations, we can select a subset of bands that is a good approximation to the principal components, in the sense of minimizing the canonical correlation between the principal components and the selected bands (or features). Experimental results are presented with data from the AVIRIS and Landsat satellite sensors.
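As a hedged illustration of the kind of band selection described above, the following Python sketch uses the singular value decomposition together with a column-pivoted (rank-revealing) QR factorization to pick bands whose span approximates that of the leading principal components. The data shapes, the number of retained bands, and the use of SciPy's pivoted QR are assumptions made for illustration, not details taken from the talk.

```python
# Sketch of band subset selection via SVD plus column-pivoted QR.
import numpy as np
from scipy.linalg import qr

def select_bands(X, k):
    """X: (pixels, bands) data matrix; returns indices of k selected bands."""
    Xc = X - X.mean(axis=0)                   # center each band
    # Right singular vectors of the centered data are the PCA loading directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Column-pivoted QR on the k x bands loading matrix: the first k pivot
    # columns are bands that well approximate the principal-component subspace
    # (a standard subset-selection heuristic).
    _, _, piv = qr(Vt[:k, :], pivoting=True)
    return np.sort(piv[:k])

# Toy example: 1000 pixels, 50 correlated bands, keep 5.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 5))
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(1000, 50))
print(select_bands(X, 5))
```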


CISE Lecture III – Dr. Luis O. Jimenez
February 13, 2003

Unsupervised Feature Extraction Techniques for Hyperspectral Data and Their Effects on Unsupervised Classification

Feature extraction, implemented as a linear projection from a higher dimensional space to a lower dimensional subspace, is a very important issue in hyperspectral data analysis. The projection must be done in a manner that minimizes redundancy while maintaining the information content. In hyperspectral data analysis, a relevant objective of feature extraction is to reduce the dimensionality of the data while maintaining the capability of discriminating objects of interest from the cluttered background. This presentation offers a comparative study of different unsupervised feature extraction mechanisms and shows their effects on unsupervised detection and classification. The mechanisms implemented and compared are an unsupervised SVD-based band subset selection mechanism, Projection Pursuit, and Principal Component Analysis. For purposes of validating the unsupervised methods, supervised mechanisms such as Discriminant Analysis and a supervised band subset selection using the Bhattacharyya distance were implemented, and their results were compared with those of the unsupervised methods. Unsupervised band subset selection automatically chooses the most independent set of bands. The Projection Pursuit-based feature extraction algorithm automatically searches for projections that optimize a projection index. The projection index we optimized measures the information divergence between the probability density function of the projected data and the Gaussian probability density function. This produces a projection in which the probability density function of the whole data set is multi-modal, instead of a Gaussian uni-modal distribution, which increases the separability of the unknown clusters in the lower dimensional space. Finally, the methods were compared with the well-known and widely used Principal Component Analysis. The methods were tested using synthetic as well as remotely sensed data obtained from AVIRIS and HYDICE, and were compared using unsupervised classification methods in a known ground-truth area.
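The sketch below illustrates, under simplifying assumptions, a projection index of the kind described above: a histogram-based estimate of the divergence between the distribution of the projected data and a Gaussian with matching mean and variance, maximized here by a naive random search rather than the authors' optimizer. The bin count, trial count, and toy data are all assumptions for illustration.

```python
# Sketch of a divergence-from-Gaussian projection-pursuit index.
import numpy as np

def divergence_index(x, bins=30):
    """Approximate KL divergence D(p || gaussian) for 1-D projected data x."""
    p, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    mu, sigma = x.mean(), x.std()
    q = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    mask = (p > 0) & (q > 0)
    return np.sum(p[mask] * np.log(p[mask] / q[mask]) * width)

def best_projection(X, trials=500, seed=0):
    """Random search for a unit direction maximizing the divergence index."""
    rng = np.random.default_rng(seed)
    best_w, best_val = None, -np.inf
    for _ in range(trials):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        val = divergence_index(X @ w)
        if val > best_val:
            best_w, best_val = w, val
    return best_w, best_val

# Toy data: two clusters in 10-D, so a good projection is clearly bimodal.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (300, 10)), rng.normal(2, 1, (300, 10))])
w, val = best_projection(X)
print("divergence of best projection:", round(val, 3))
```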


CISE Lecture IV – Dr. Fernando C. Colon-Osorio
February 27, 2003

An Overview of the “System Security” Problem

An overview of the “System Security” problem will be presented. Specifically, this talk will address the lack of quantitative measurements available when discussing the security of a system. For example, new Intrusion Detection and Countermeasure Systems (IDCS) are implemented and deployed regularly, with little evidence presented, beyond anecdotal data, in support of their effectiveness. To the best of our knowledge, evaluation tools and methodologies for Intrusion Detection & Countermeasure Systems exist only in the form of limited statistical measures, such as in Lippman [17]. We propose here the concepts of “System Security” and “System Resiliency” (analogous to the concepts of reliability and availability) as intrinsic properties of the system. In order to arrive at such a measure, we define a set of functions, such as MTBSI (Mean Time Between System Intrusions), and the mechanisms to compute them. Finally, as observed by practitioners and researchers, the threats to end-to-end security have grown exponentially in recent years. Within this context, the next generation of attacks, which are highly distributed and notoriously difficult to counter, is already emerging. In this mode, sophisticated communities of hackers/crackers, such as BLACKHAT users, compromise a large number of unsuspecting (and unsuspected) home computers and launch major coordinated attacks on government and corporate networks. We call such attacks “Swarm Attacks”, like a “swarm of bees”. To avert such attacks, an IDCS is needed that imposes minimal overhead on the resources it protects. Such a system must be capable of maintaining its performance characteristics under increasing loads and changes in usage patterns. We will conclude our presentation by introducing one such system, called SAFE, that is capable of dealing with such attacks.
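As a small hypothetical illustration of such a measure, the sketch below computes MTBSI from a log of intrusion timestamps, by direct analogy with mean time between failures; the log format and the choice of units are assumptions, and the estimation mechanisms proposed in the talk may differ.

```python
# Hedged sketch: MTBSI (Mean Time Between System Intrusions) from a log of
# observed intrusion times, computed like a mean time between failures.

from datetime import datetime

def mtbsi(intrusion_times):
    """Mean time between successive intrusions, in hours."""
    times = sorted(intrusion_times)
    gaps = [(b - a).total_seconds() / 3600.0 for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps) if gaps else float("inf")

# Example: three intrusions observed over a monitoring period (invented data).
log = [
    datetime(2003, 2, 1, 3, 15),
    datetime(2003, 2, 9, 22, 40),
    datetime(2003, 2, 20, 11, 5),
]
print(f"MTBSI = {mtbsi(log):.1f} hours")
```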

CISE Lecture V – Dr. Manuel Rodriguez-Martinez
March 13, 2003

TerraScope: A Database Middleware System to Support Wide-Area Scientific Applications

The emergence of large-scale wide-area networks, particularly the Internet, has provided Earth scientists with access to vast collections of satellite imagery, GIS data, measurements, and other useful data products. In addition, high-performance computers, remote sensing instruments, massive disk arrays, and other important computational equipment can be networked and accessed via the Internet. Currently, many research centers, universities, and government agencies worldwide provide some type of access to many of these data sets and computational resources. Typically, a Web-based interface is used to browse images, download remote sensing software, or submit requests to obtain computer time to perform customized data analysis. However, these solutions face major limitations and scalability problems when heterogeneous data and computational resources need to be integrated to support applications that must perform data analysis across different types of observational data (e.g., MODIS, Landsat 7).

We propose to develop and deploy the TerraScope Earth Science Information Management System to integrate and federate heterogeneous data collections stored at geographically distributed data centers. In addition, TerraScope will automate the process of finding, selecting, and using adequate computational resources (e.g., computer cycles, disk storage) that might be available at cooperative sites. TerraScope is a scalable, end-to-end, middle-tier solution being developed by the Advanced Data Management (ADM) Group at the University of Puerto Rico, Mayagüez (UPRM). TerraScope is designed to support automated data ingestion, data cleansing, spatial indexing, dynamic image subsetting, parallel image processing, hypertext-based image visualization, and distributed data retrieval and query processing. In this talk, we shall provide an overview of the architecture of TerraScope and the various data processing techniques that we have designed to support efficient system operation.

CISE Lecture VI – Dr. Ivelise M. Rubio Canabal
March 27, 2003

Turbo Encoders with Interleavers Constructed Using Permutation Monomials

Error control codes are used in digital communication systems to protect information from errors that might occur during transmission. Turbo codes are especially suitable for satellite communication systems since they provide good error control performance with a reduction in transmitter power levels. The interleaver is an essential component of a turbo encoder. The spreading and dispersion factors associated with interleavers are important for obtaining "good" turbo encoders. An article by Takeshita and Costello, as well as data obtained by Corrada Bravo, suggests that another important property of an interleaver is the length of the cycles of the permutation and its relation to the cycle length of the convolutional code. The best known interleavers have been obtained using semi-random constructions. Although good performance can be obtained with this type of construction, it is ill-suited for implementation as well as for performance analysis. Most of the known methods for constructing interleavers algebraically do not produce interleavers with good properties. In this talk we will present some results on the cyclic decomposition of permutations obtained using monomials. We will also introduce turbo codes, describe an algebraic construction of interleavers using permutation monomials, and present some preliminary results on their performance.
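For illustration, and as an assumption rather than the construction from the talk, the Python sketch below builds an interleaver of prime length p from the permutation monomial pi(x) = c * x^e mod p (a permutation of {0, ..., p-1} whenever gcd(e, p-1) = 1 and gcd(c, p) = 1) and computes its cycle decomposition, the property whose cycle lengths are discussed above.

```python
# Sketch: interleaver from a permutation monomial and its cycle decomposition.
from math import gcd

def monomial_interleaver(p, c, e):
    """Permutation of {0,...,p-1} given by x -> c * x^e mod p."""
    assert gcd(e, p - 1) == 1 and gcd(c, p) == 1, "x^e must be a permutation monomial"
    return [(c * pow(x, e, p)) % p for x in range(p)]

def cycle_lengths(perm):
    """Lengths of the cycles in the permutation's cycle decomposition."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        n, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            n += 1
        lengths.append(n)
    return sorted(lengths, reverse=True)

pi = monomial_interleaver(p=31, c=3, e=7)   # gcd(7, 30) = 1, gcd(3, 31) = 1
print(cycle_lengths(pi))
```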

CISE Lecture VII – Dr. Jaime Ramirez-Vick
April 10, 2003

High-Tech Entrepreneurship: Adventures of an Academic Scientist

The days of the DOT COMs have come and gone, but they have left behind a new sense of possibilities for becoming economically self-reliant. This notion is very real to academic scientists developing high technology, whose dreams of commercializing their innovations have given way to many of the companies that currently support the world’s economy. This talk is a story about my adventures while following the path of an academic entrepreneur, from dreams of glory as a graduate student to the opportunities encountered as a postdoctoral scholar. It is a story about learning and discovering new ways of creative expression.

My goal is to motivate others who, like me, feel that the development and commercialization of product-oriented technologies constitutes another level in the development of a successful career. Hopefully, it will also motivate those who feel that becoming an academic entrepreneur tarnishes the purity of academic research, by helping them realize that, rather than shifting the goals of academic research from the acquisition of knowledge to the acquisition of wealth, entrepreneurship provides new means to develop universities, support basic and applied research, and nurture society’s intellectual wealth. It is thus important to strengthen the contribution of universities to research, development, and innovation in areas with a potential for commercialization. This will bring a paradigm shift in the Island's economy, from one that depends on manufacturing to one based on technological innovation. The development of university spin-off companies is fundamental for this shift in our economic base to occur.

CISE Lecture VIII – Dr. Fernando Vega
May 8, 2003

On the Use and Architecture of Knowledge Management Systems in Higher Education

Networking is not just about establishing telecommunications or computer connections. Rather, networking is about connections between people and is of paramount importance in Knowledge Management (KM). Along the same lines, people are not just users of information; as part of the knowledge system in which they are immersed, they become the most valuable (the only?) knowledge resource of such a system. In this presentation, a model of a knowledge management system is discussed, together with its potential use in the teaching-learning process in higher education. A survey among the professors and students of the Electrical and Computer Engineering Department at the University of Puerto Rico was conducted and, on the one hand, confirmed several suppositions about the use of and preference for certain information resources. On the other hand, the survey also revealed current and potential information uses and sharing opportunities, especially associated with the growing use of Internet services. Information and knowledge sharing is latent in both the faculty and the student communities and represents a significant opportunity to build upon and become the core of networking in knowledge management. Based on the results of this survey, we have developed a taxonomy of the information resources. This taxonomy is used to structure the knowledge base and to construct the ontology and the metadata associated with the information resources. As part of this work, an architecture that combines distributed agents and expert systems technology is presented. While the distributed agents modularize the design and facilitate the scaling up of the system, the expert system enables the development of a conversational interface between the knowledge base and the users, for both the analysis, cataloguing, and storage of information resources and the computer-assisted query and retrieval of information. This model is based on the claim that the same kind of interaction can be used with the users when they generate information or when they want to retrieve information resources for a particular purpose. The ontology serves as the semantic network that will guide the inference processes for the instantiation of the metadata or the search for resources in the knowledge base.
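As a purely hypothetical sketch of the cataloguing step described above, the following Python fragment matches a resource against a small taxonomy of information resources using simple keyword rules standing in for the expert system; the categories, rules, and metadata fields are invented for illustration and are far simpler than the ontology in the presented architecture.

```python
# Hypothetical toy cataloguing rules; not the actual taxonomy or ontology.
TAXONOMY_RULES = {
    "lecture notes": ["lecture", "notes", "syllabus"],
    "research paper": ["abstract", "references", "doi"],
    "dataset": ["csv", "measurements", "samples"],
}

def catalogue(title, description):
    """Return a metadata record with the best-matching taxonomy category."""
    text = f"{title} {description}".lower()
    scores = {cat: sum(kw in text for kw in kws) for cat, kws in TAXONOMY_RULES.items()}
    category = max(scores, key=scores.get)
    return {"title": title, "category": category if scores[category] else "uncatalogued"}

print(catalogue("ECE lecture 5", "Lecture notes on digital filters"))
```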

SPECIAL EDITIONS

CISE-LTS Special Edition & Doctoral CISE Seminar – Dr. Hugh B. Nicholas Jr.
February 26, 2003

Identifying Determinants of Biological Specificity Among Protein Families: A Case Study of Aldehyde Dehydrogenases and Glutathione S-Transferases

This lecture describes a computerized analytical protocol for determining the sequence features that are responsible for the specificity of action of the different families within a protein superfamily. Protein superfamilies consist of divergent families that each carry out similar but distinct biochemical functions or physiological roles. These distinct functions and roles arise through gene duplication followed by evolutionary divergence and selection for different functions and roles. The recent explosive increase in the amount of available sequence data makes it possible to systematically analyze a superfamily to identify the most likely candidates for the sequence elements that are responsible for the specificity of action of the different families. Earlier versions of this analysis successfully identified specificity determinants in transfer RNAs. The most recent implementation has been applied to the aldehyde dehydrogenase superfamily and to the glutathione S-transferase superfamily, both medically important superfamilies of proteins.
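As a hedged sketch of one ingredient such a protocol might use, the Python fragment below scores alignment columns that are conserved within each family but differ between families, flagging candidate specificity-determining positions; the toy alignments, the entropy-based score, and the scoring rule are assumptions for illustration, not the protocol described in the lecture.

```python
# Sketch: per-column scores for candidate specificity-determining positions.
from collections import Counter
from math import log2

def column_entropy(column):
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def specificity_scores(family_a, family_b):
    """Low within-family entropy plus disagreement between families -> high score."""
    scores = []
    for col_a, col_b in zip(zip(*family_a), zip(*family_b)):
        within = column_entropy(col_a) + column_entropy(col_b)
        consensus_a = Counter(col_a).most_common(1)[0][0]
        consensus_b = Counter(col_b).most_common(1)[0][0]
        differs = 1.0 if consensus_a != consensus_b else 0.0
        scores.append(differs / (1.0 + within))
    return scores

# Toy aligned sequences (no gaps) from two hypothetical families.
fam_a = ["ACDKG", "ACDKG", "ACEKG"]
fam_b = ["ACDRG", "ACDRG", "ACDRG"]
print([round(s, 2) for s in specificity_scores(fam_a, fam_b)])
```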


CISE-LTS Special Edition – Dr. H. F. Mattson, Jr.
March 20, 2003

The Mattson-Solomon Polynomial: An Elementary Introduction


CISE-LTS Special Edition – Dr. Yun Qing Shi
April 1, 2003

A New Approach to 2D/3D Interleaving and Its Application to Digital Image and Video Watermarking

Correction of two-dimensional (2-D) and three-dimensional (3-D) error bursts finds wide applications in secure data handling such as 2-D and 3-D magnetic and optical data storage, charge-coupled devices (CCDs), 2-D barcodes, and information hiding in digital images and video sequences, to name a few. In this talk, we present a new 2-D interleaving technique, based on a successive packing approach, to combat 2-D bursts of errors; the technique can be extended to multi-dimensional (M-D) interleaving. Square arrays of size 2^n × 2^n are considered. It is shown that the proposed successive packing technique can effectively spread any error burst of size 2^k × 2^k (with 1 ≤ k ≤ n-1), 2^k × 2^(k+1) (with 0 ≤ k ≤ n-1), or 2^(k+1) × 2^k (with 0 ≤ k ≤ n-1), so that the burst can be corrected with a simple random-error-correction code (provided such a code is available). It is further shown that the technique is optimal for combating all of the above-mentioned error bursts, in the sense that the interleaving degree reaches its lower bound. This implies that the algorithm needs to be implemented only once for a given 2-D array and is thereafter optimal for the set of error bursts having different sizes. A performance comparison between the proposed method and some existing techniques is given, and future research is discussed. In the last part of the talk, some promising recent results of applying the 2-D/3-D interleaving techniques to digital image/video watermarking are presented to demonstrate their effectiveness in combating error bursts.
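To illustrate why interleaving helps at all, the sketch below uses a plain column-rotation interleaver (deliberately not the successive packing technique from the talk) on a 2^n × 2^n array and shows that a 2 × 2 error burst, which would otherwise put two errors into one row codeword, is dispersed so that no codeword sees more than one error. The array size, the burst size, and the choice of interleaver are illustrative assumptions.

```python
# Generic 2-D interleaving demo (column rotations), not the successive-packing method.
import numpy as np

def column_rotate_interleave(original, shift):
    """Each column j of the array is cyclically rotated by j*shift rows."""
    S = original.shape[0]
    out = np.empty_like(original)
    for j in range(S):
        out[:, j] = np.roll(original[:, j], -j * shift)
    return out

def max_hits_per_codeword(S, shift, burst):
    """Assume each ORIGINAL row is one codeword. A burst x burst error patch hits
    the INTERLEAVED array; count the worst-case corrupted symbols per codeword."""
    ids = np.arange(S * S).reshape(S, S)            # symbol id = row*S + col
    inter = column_rotate_interleave(ids, shift)    # where each symbol is stored
    worst = 0
    for r in range(S - burst + 1):
        for c in range(S - burst + 1):
            hit_rows = inter[r:r+burst, c:c+burst].ravel() // S   # original rows hit
            worst = max(worst, np.bincount(hit_rows, minlength=S).max())
    return worst

S = 8                                               # a 2^3 x 2^3 array
print("no interleaving  :", max_hits_per_codeword(S, shift=0, burst=2))  # -> 2
print("with interleaving:", max_hits_per_codeword(S, shift=2, burst=2))  # -> 1
```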